http://arxiv.org/abs/2306.03900v1 | published 2023-06-06 17:58:12
Model Spider: Learning to Rank Pre-Trained Models Efficiently
Yi-Kai Zhang, Ting-Ji Huang, Yao-Xiang Ding, De-Chuan Zhan, Han-Jia Ye
Primary category: cs.LG | Categories: cs.LG
Figuring out which Pre-Trained Model (PTM) from a model zoo fits the target task is essential to take advantage of plentiful model resources. With the availability of numerous heterogeneous PTMs from diverse fields, efficiently selecting the most suitable PTM is challenging due to the time-consuming costs of carrying out forward or backward passes over all PTMs.
In this paper, we propose Model Spider, which tokenizes both PTMs and tasks by summarizing their characteristics into vectors to enable efficient PTM selection. By leveraging the approximated performance of PTMs on a separate set of training tasks, Model Spider learns to construct tokens and measure the fitness score between a model-task pair via their tokens. The ability to rank relevant PTMs higher than others generalizes to new tasks. With the top-ranked PTM candidates, we further learn to enrich task tokens with their PTM-specific semantics to re-rank the PTMs for better selection. Model Spider balances efficiency and selection ability, making PTM selection like a spider preying on a web. Model Spider demonstrates promising performance in various configurations of model zoos.
§ INTRODUCTION
Fine-tuning Pre-Trained Models (PTMs) on downstream tasks has shown remarkable improvements in various fields <cit.>, making “pre-training → fine-tuning” the de-facto paradigm in many real-world applications.
A model zoo contains PTMs that are diverse in architecture and functionality <cit.>, but the helpfulness of a randomly selected PTM for a particular downstream task varies unpredictably <cit.>.
One important step to take advantage of PTM resources is to identify the most helpful PTM in a model zoo — i.e., to estimate and rank the transferabilities of PTMs given the downstream task's data — accurately and efficiently.
Which PTM is the most helpful? A direct answer is to enumerate all PTMs and evaluate the performance of their corresponding fine-tuned models. However, the high computational cost of the backward steps in fine-tuning makes this solution impractical.
Some existing methods estimate proxies of transferability with only forward passes based on the target task's features extracted by PTMs <cit.>.
Nowadays, a public model zoo often contains hundreds or even thousands of PTMs <cit.>. The computational burden of forward passes is then multiplied accordingly, let alone the time-consuming forward passes of some complicated PTMs.
Therefore, the efficiency of searching helpful PTMs and estimating the transferability should be further emphasized.
In this paper, we propose Model Spider, the SPecification InDuced Expression and Ranking of PTMs, for accurate and efficient PTM selection.
In detail, we tokenize all PTMs and tasks into vectors that capture their general properties and the relationship with each other.
For example, two models pre-trained on NABirds <cit.> and Caltech-UCSD Birds <cit.> datasets may have similar abilities in birds recognition, so that we can associate them with similar tokens.
Then the transferability from a PTM to a task could be approximated by the distance of their tokens without requiring per-PTM forward pass over the downstream task.
The success of Model Spider depends on two key factors. First, how do we obtain tokens for tasks and PTMs? The token of the most helpful PTM should be close to the task token w.r.t. some similarity measure. Second, will a general task token weaken the selection ability, since it may ignore the specific characteristics of a PTM?
In Model Spider, we learn to construct tokens with a general encoder and measure the similarity between tokens with a Transformer module <cit.> in a supervised learning manner.
We estimate the rankings of PTMs in the model zoo for some historical tasks using rank aggregation.
By leveraging the approximated supervision, we pull task tokens close to the top-ranked PTM tokens and push unhelpful PTM tokens away based on the transformer-measured similarity.
We expect that the ability to tokenize and measure similarity could be generalized to unseen tasks.
The difference between Model Spider's token-based PTM selection and the forward-based strategy is illustrated in <ref>.
The tokens generated by general encoders significantly reduce the PTM search time and improve the search performance.
If the budget allows, we can extract features of the downstream task by carrying out forward passes over a part of (the top-k ranked) PTMs, revealing the specific relationship between the PTMs and the task. We equip Model Spider with the ability to incorporate such PTM-specific tokens, which re-rank the PTMs and further improve the selection results.
In summary, Model Spider is suitable for different budget requirements, where the general and task-specific tokens make a flexible trade-off between efficiency and accuracy given various numbers of forward passes.
<ref> illustrates a comparison of PTM selection methods w.r.t. both efficiency and accuracy.
Our contributions are
* We propose a novel approach to tokenize tasks and PTMs, which is able to rank PTMs in a model zoo given a downstream task efficiently and accurately.
* Model Spider learns to tokenize and rank PTMs on a separate training set of tasks, and it can incorporate task-specific forward results of some PTMs when resource budgets allow.
* The experiments demonstrate that Model Spider effectively ranks PTMs and achieves significant improvements on various model zoo configurations.
§ RELATED WORKS
Efficient PTM Search with Transferability Assessment. Whether a selected PTM is helpful could be formulated as the problem measuring the transferability from the source data pre-training the PTM to the target downstream task <cit.>. The current evaluation of transferability relies on a forward pass of the PTM on the target task, which generates the PTM-specific features on the target task.
For example, NCE <cit.>, LEEP <cit.>, LogME <cit.>, PACTran <cit.>, and TransRate <cit.> estimate negative conditional entropy, log expectation, marginalized likelihood, PAC-Bayesian bound, mutual information to obtain proxy metric of transferability, respectively.
Several extensions have been proposed, including 𝒩-LEEP <cit.>, which fits a Gaussian mixture model on top of PTM features; H-Score <cit.>, which utilizes a divergence transition matrix to approximate the transferred log-likelihood; and <cit.>, which explores correlations between categories of the target task.
Auxiliary information such as source clues <cit.> and gradients of PTMs when back propagating with few steps <cit.> are also investigated.
Although the transferability assessment methods avoid the time-consuming fine-tuning, the forward costs over PTMs also become heavier given diverse and complicated pre-trained model zoos.
Relatedness of Task. Whether a PTM gains improvements after fine-tuning on the downstream task has been verified to depend on the relatedness between tasks both theoretically <cit.> and empirically <cit.>. The relatedness could be measured through various ways, such as fully fine-tuning <cit.>, task vectors <cit.>, example-based graphs <cit.>, representation-level similarities <cit.>, and human prior knowledge <cit.>.
Instead of utilizing a pre-defined strategy to measure relatedness, Model Spider constructs tokens of PTMs/tasks in vector form and learns a similarity between them on historical tasks.
Learning to rank predicts the orders of objects usually with a score function <cit.>, and the experience on a training set could be generalized to unseen data <cit.>.
Additional learned metrics or embeddings further improve the ranking ability <cit.>.
The task relatedness can also be modeled as a learning-to-rank problem, where the preference of one PTM over another could be learned from historical rankings of PTMs. However, obtaining the supervision on the training set requires complete fine-tuning over a large number of historical tasks, which either comes from time-consuming transfer learning experience <cit.> or from the output of some specially selected transferability assessment methods <cit.>.
We propose a strong and efficient approximation of the PTM ranking supervision on the training set tasks, and a novel token-based similarity is applied.
§ PRELIMINARY
We describe the PTM selection problem by assuming all PTMs are classifiers; the description could be easily extended to PTMs for other tasks, e.g., regression. Then we discuss several solutions.
§.§ Selecting PTMs from a Model Zoo
Consider a target classification task 𝒯 = {(x_i, y_i)}_i=1^N with N labeled examples, where the label y_i of each instance x_i comes from one of the C_𝒯 classes.
Instead of learning on 𝒯 directly, we assume there is a model zoo ℳ = { f_m = 𝐖_m ∘ϕ_m }_m=1^M containing M PTMs. A PTM f_m could be decomposed into two components: ϕ_m is the feature extraction network producing d_m-dimensional features, and 𝐖_m ∈ℝ^d_m × C_m is the top-layer classifier which maps a d_m-dimensional feature to confidence scores over C_m classes.[We omit the bias term for simplicity.]
PTMs in ℳ are trained on source data across various domains. Their feature extractors ϕ_m have diverse architectures, and the corresponding classifiers are pre-trained for different sets of objects. In other words, d_m and C_m' may differ for a certain pair of m and m'.
A widely-used way to take advantage of a PTM f_m = 𝐖_m ∘ϕ_m in the target task is to fine-tune the feature extractor together with a randomly initialized classifier over 𝒯. In detail, we minimize the following objective
f̂ = 𝐖̂∘ϕ̂ = argmin_f = 𝐖∘ϕ ∑_i=1^N ℓ(𝐖^⊤ϕ(x_i), y_i | ϕ_m) ,
where ϕ is initialized with ϕ_m.
The fine-tuned f̂ makes predictions with argmax_c∈[C_𝒯] 𝐰̂_c^⊤ϕ̂(x),
where [C_𝒯] = {1,…, C_𝒯} and 𝐰̂_c is the c-th column of 𝐖̂.
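As a concrete illustration of the objective above, the following is a minimal PyTorch sketch of the fine-tune-then-evaluate procedure: `backbone` stands for ϕ_m, `head` for the randomly initialized 𝐖, and the data loaders, optimizer, epochs, and learning rate are illustrative assumptions rather than the paper's actual protocol. The returned test accuracy is what serves as the ground-truth transferability discussed next.

```python
import torch
import torch.nn as nn

def finetune_and_score(backbone, feat_dim, num_classes, train_loader, test_loader,
                       lr=1e-3, epochs=10, device="cuda"):
    """Fine-tune one PTM's feature extractor phi_m with a fresh linear head W,
    then return the test accuracy as an estimate of its transferability."""
    head = nn.Linear(feat_dim, num_classes).to(device)   # randomly initialized classifier
    backbone = backbone.to(device)
    params = list(backbone.parameters()) + list(head.parameters())
    optim = torch.optim.SGD(params, lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()

    backbone.train()
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = ce(head(backbone(x)), y)               # ell(W^T phi(x), y)
            optim.zero_grad()
            loss.backward()
            optim.step()

    backbone.eval(); head.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            pred = head(backbone(x.to(device))).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total                                 # averaged test accuracy
```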
Then, we can rank the helpfulness of PTMs based on the performance of their fine-tuned models. In other words, we obtain f̂_m following <ref> based on the m-th PTM f_m, then we calculate the averaged accuracy when predicting over an unseen test set of 𝒯 (the higher, the better), i.e.,
t_ϕ_m →𝒯 = 𝔼[𝕀(y = argmax_c∈[C_𝒯] f̂_m(x)_c)] .
t_ϕ_m →𝒯 is also named the transferability, measuring whether the feature extractor ϕ_m in a PTM could be transferred well to the target task with fine-tuning <cit.>. 𝕀(·) is the indicator function, which outputs 1 if the condition is satisfied. Given 𝐭_𝒯 = {t_ϕ_m →𝒯}_m=1^M, i.e., the transferability of all PTMs, we can obtain the ground-truth ranking of all PTMs in the model zoo for task 𝒯 and select the top-ranked one.
In the PTM selection problem, the goal is to estimate this ranking for a task 𝒯 with predicted scores 𝐭̂_𝒯 = {t̂_ϕ_m →𝒯}_m=1^M. The evaluation criterion is the similarity between the predicted 𝐭̂_𝒯 and the ground-truth 𝐭_𝒯, typically measured by weighted Kendall's τ_w <cit.>. We omit the subscript 𝒯 when it is clear from the context.
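For reference, a weighted Kendall's τ between a predicted and a ground-truth score vector can be computed with SciPy. The numbers below are made up purely for illustration, and scipy.stats.weightedtau uses a hyperbolic rank weighting that may differ in detail from the exact τ_w variant reported in the paper.

```python
from scipy.stats import weightedtau

# Ground-truth transferability t and a method's predicted scores t_hat
# for M = 5 PTMs (illustrative numbers only).
t_true = [0.83, 0.61, 0.78, 0.42, 0.55]
t_pred = [0.90, 0.50, 0.70, 0.30, 0.65]

tau_w, _ = weightedtau(t_true, t_pred)   # weighted Kendall's tau (hyperbolic weights)
print(f"weighted tau_w = {tau_w:.3f}")
```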
§.§ Efficiency Matters in PTM Selection
One direct solution to PTM selection is approximating the ground truth 𝐭_𝒯 by fine-tuning all the PTMs over 𝒯, where a validation set should be split from 𝒯 to estimate <ref>. Since fine-tuning a PTM involves multiple forward and backward passes, the computational burden is prohibitive.
A forward pass of a certain PTM's extractor ϕ_m over 𝒯 generates the features Φ_𝒯^m = {ϕ_m(x_i) ∈ℝ^d_m}_(x_i, y_i)∈𝒯, which is lightweight compared with the backward step.
The features reveal how examples in 𝒯 are distributed from the selected PTM's view, and more discriminative features may indicate a higher transfer potential.
As mentioned in <ref>, the existing transferability assessment methods estimate t̂_ϕ_m →𝒯 based on the PTM-specific features Φ_𝒯^m and target labels {y_i}_i=1^N <cit.>. Precise estimation requires a large N, which means we need to collect enough examples to identify the most helpful PTMs from a model zoo.
While the previous forward-based transferability assessment methods reduce the time cost, selecting among M PTMs in the model zoo multiplies the forward cost M times, making the estimation of 𝐭̂_𝒯 computationally expensive.
Moreover, since forward passes for complicated PTMs take longer, selecting a PTM efficiently, especially given a large model zoo, is crucial.
§ MODEL SPIDER
In Model Spider, we propose to tokenize PTMs and tasks regardless of their complexity, allowing us to efficiently calculate their relatedness based on a certain similarity measure over their tokens. These tokens capture general properties and serve as a specification of a model or task, demonstrating which kinds of tasks a model performs well on or what kind of models a task requires.
In this section, we first introduce the process of obtaining tokens by learning from a training set of tasks, and the ability to rank PTMs could be generalized to downstream tasks. We then describe the token encoder, the token-wise similarity measure, and an efficient way to generate supervision during token training.
Finally, we discuss how Model Spider can flexibly incorporate the forward pass results of top-ranked PTMs to further improve the tokens' semantics and the ranking quality.
§.§ Learning to Rank PTMs with Tokens
In Model Spider, we learn the model tokens {θ_m}_m=1^M, the task tokens μ(𝒯_i), and the similarity measure sim(·, ·) in a supervised learning manner based on a separate training set 𝒟.
The training set 𝒟 does not contain classes that overlap with the downstream task 𝒯.
Specifically, we randomly sample training tasks {𝒯_i} from 𝒟. For a given training task 𝒯_i, we assume that we can obtain the ground-truth ranking 𝐭_𝒯_i = {t_ϕ_m →𝒯_i}_m=1^M over the M PTMs, indicating the helpfulness of each PTM. We will discuss the details of obtaining this supervision later. We then select PTMs for 𝒯_i based on the similarity between the task token μ(𝒯_i) and the M PTM tokens {θ_m}_m=1^M. We expect the higher the similarity, the more helpful a PTM is for the given task.
We use Θ to denote all learnable parameters and optimize Θ with a ranking loss, which minimizes the discrepancy between the ranking 𝐭̂_𝒯_i predicted by the similarity function and the ground-truth 𝐭_𝒯_i:
min_Θ ∑_𝒯_i∼𝒟 ℓ_rank(𝐭̂_𝒯_i = {sim(θ_m, μ( 𝒯_i )) }_m=1^M, 𝐭_𝒯_i) .
Given 𝐭∈ℝ^M, we use an operator dsc(·) to index the elements of 𝐭 in descending order, i.e., ∀ m < l, we have t_dsc(m) ⩾ t_dsc(l). dsc(m) is exactly the index of the PTM with the m-th largest ground-truth score.
Based on this, we use the following ranking loss:
ℓ_rank(𝐭̂, 𝐭) = ∑_m=1^M - log( exp(t̂_dsc(m)) / ∑_l = m^M exp( t̂_dsc(l) ) ) ,
<ref> aims to make the whole predicted order 𝐭̂_𝒯_i similar to the ground-truth 𝐭_𝒯_i. So the similarity between the task token and the token of a higher-ranked PTM indicated by 𝐭_𝒯_i should be larger than the similarity with lower-ranked PTM tokens.
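A minimal PyTorch sketch of ℓ_rank is given below: the predicted scores are reordered by the ground-truth descending order dsc(·), and a ListMLE-style negative log-likelihood of that permutation is accumulated. Function and variable names are ours, not the paper's.

```python
import torch

def rank_loss(t_hat: torch.Tensor, t_true: torch.Tensor) -> torch.Tensor:
    """ListMLE-style ranking loss over M predicted scores t_hat, ordered by the
    descending ground-truth scores t_true."""
    order = torch.argsort(t_true, descending=True)       # dsc(.)
    s = t_hat[order]                                      # predicted scores in gt order
    # -sum_m log( exp(s_m) / sum_{l >= m} exp(s_l) ), computed stably:
    rev_logcumsum = torch.logcumsumexp(s.flip(0), dim=0).flip(0)  # logsumexp of s[m:]
    return (rev_logcumsum - s).sum()
```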
The underlying intuition is that if a PTM performs well on certain tasks, it is likely to generalize its ability to related tasks. For example, if a PTM excels at bird recognition, it may effectively recognize other flying animals.
For a downstream task 𝒯, we generate its task token μ(𝒯) and identify the close PTM tokens with the learned sim(·, ·).
Objective <ref> also works when the number of examples in a task is small. By learning to rank PTMs for sampled few-shot tasks, Model Spider can rank helpful models even with limited training data. We will show this ability of Model Spider in <ref>.
§.§ Tokens for PTM Selection
We encode the general characteristics of tasks and PTMs via two types of tokens.
Model Token. Given a model zoo with M PTMs, we associate a PTM f_m with a token θ_m ∈ℝ^d. θ_m encodes rich semantics about the aspects in which f_m excels. Models pre-trained on related datasets or with similar functionalities are expected to have similar tokens.
Task Token. A C_𝒯-class task 𝒯 = {(x_i, y_i)}_i=1^N contains a set of instances and labels. We would like to tokenize a task with a mapping μ(·), which outputs a set of vectors μ( 𝒯 ) ∈ℝ^d × C_𝒯, one for each class.
We implement μ(·) with one additional frozen encoder ψ whose parameter magnitude is comparable to the PTMs in the model zoo. ψ is pre-trained by self-supervised learning methods <cit.> and captures the semantics of a broad range of classes.
In detail, we extract the features of all instances in the task and take the class centers as the task token:
μ( 𝒯 ) = { 1/|{i: y_i = c}| ∑_(x_i, y_i) ∈𝒯 [ ψ( x_i ) ·𝕀( y_i = c ) ] }_c ∈[ C_𝒯 ] .
The task token expresses the characteristics of a task, i.e., tasks with semantically similar classes may have similar sets of tokens.
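The class-center construction of μ(𝒯) can be sketched as follows, assuming a frozen encoder ψ (`encoder`) and a loader yielding (instance, label) batches; the names and batching scheme are illustrative assumptions.

```python
import torch

@torch.no_grad()
def task_token(encoder, loader, num_classes, device="cuda"):
    """Task token mu(T): per-class mean of features from the frozen encoder psi,
    one d-dimensional center per class (d x C_T overall)."""
    encoder.eval().to(device)
    sums, counts = None, torch.zeros(num_classes)
    for x, y in loader:
        feats = encoder(x.to(device)).cpu()               # psi(x_i), shape (B, d)
        if sums is None:
            sums = torch.zeros(num_classes, feats.shape[1])
        sums.index_add_(0, y, feats)                       # accumulate per class
        counts += torch.bincount(y, minlength=num_classes).float()
    centers = sums / counts.clamp(min=1).unsqueeze(1)      # class means
    return centers.T                                       # shape: d x C_T
```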
Model-Task Similarity. The helpfulness of a PTM w.r.t. a task, i.e., the transferability score, could be estimated based on the similarity of the model-task token pair, t̂_ϕ_m →𝒯 = sim(θ_m, μ( 𝒯 )). PTM selection is then accomplished by embedding the models and tasks into a common space and identifying the PTM tokens close to the task token.
In Model Spider, sim(·, ·) is implemented with a one-layer Transformer <cit.>, a self-attention module that accepts a varying number of input tokens.
The Transformer consists of alternating layers of multi-head self-attention, multi-layer perceptron, and layer norm blocks.
We set the input of the Transformer as the union of model and task tokens 𝐇 = [θ_m, μ( 𝒯 ) ] ∈ℝ^d ×( 1 + C_𝒯 ), then the similarity t̂_ϕ_m →𝒯 between model and task tokens is:
sim(θ_m, μ( 𝒯 )) = FC( transformer(𝐇)[0] ) ,
where [0] denotes the first output of the Transformer, i.e., the output corresponding to the model token. We add a Fully Connected
(FC) layer to project the intermediate result to a scalar.
Learnable parameters Θ, including {θ_m }_m=1^M, the FC layer, and the weights of the Transformer, are trained via the objective in <ref>.
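A possible PyTorch sketch of sim(·, ·) is shown below, using a standard nn.TransformerEncoderLayer as the one-layer Transformer and a linear head as the FC layer; the number of heads, feed-forward width, and other architectural details are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ModelTaskSim(nn.Module):
    """Sketch of sim(theta_m, mu(T)): feed the model token and the C_T class
    tokens through one Transformer encoder layer, then project the output at
    the model-token position to a scalar fitness score."""
    def __init__(self, d=1024, n_heads=8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                                batch_first=True)
        self.fc = nn.Linear(d, 1)

    def forward(self, model_token, task_token):
        # model_token: (d,), task_token: (d, C_T)
        h = torch.cat([model_token.unsqueeze(0), task_token.T], dim=0)  # (1+C_T, d)
        out = self.layer(h.unsqueeze(0))           # add batch dimension
        return self.fc(out[0, 0]).squeeze(-1)      # scalar score at model-token slot
```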
§.§ Accelerating Training for Model Spider
The training of Model Spider in <ref> requires a large number of (task 𝒯_i, PTM ranking 𝐭_𝒯_i) pairs. Although we could collect enough data for each task, obtaining the ground-truth PTM rankings, i.e., the helpfulness order of PTMs for each task, is computationally expensive. In addition, using rough proxies of 𝐭_𝒯_i may weaken the ability of Model Spider. We propose a closer approximation of the ground-truth 𝐭_𝒯_i, which efficiently supervises tasks sampled from 𝒟.
Approximated Training Supervision.
We take advantage of the fact that existing PTM selection methods rely on the PTM-specific features Φ_𝒯_i^m to estimate the transferability score w.r.t. 𝒯_i and produce diverse scores. In other words, a PTM will be placed in different positions based on the scores provided by various methods such as NCE <cit.>, LEEP <cit.>, and LogME <cit.>. Based on their “relatively good but diverse” ranking results, an intuitive approach to estimate the ground-truth 𝐭_𝒯_i is to ensemble their multiple ranking results into a stronger single order.
Given {𝐭̂_𝒯_i^1, 𝐭̂_𝒯_i^2, …} as multiple predicted rankings over the M PTMs for a sampled task 𝒯_i, i.e., the orders sorted by the transferability estimates of various methods, we take advantage of Copeland's aggregation method <cit.> to ensemble the orders: 𝐭̄_𝒯_i = {t̄_ϕ_m →𝒯_i}_m=1^M = RankAgg({𝐭̂_𝒯_i^1, 𝐭̂_𝒯_i^2, …}).
Copeland's aggregation compares each pair of ranking candidates and considers all preferences to determine which of the two is more preferred. The output 𝐭̄_𝒯_i acts as a good estimate of the ground-truth supervision 𝐭_𝒯_i. The aggregated 𝐭̄_𝒯_i is more accurate than any particular transferability assessment method, which improves the quality of the supervision for the ranking loss in <ref>.
Sampling Tasks for Training. We assume that the training data 𝒟 contains a large number of classes with sufficient data. To sample tasks for training, we randomly select a set of classes from 𝒟 and choose a subset of their corresponding examples, as sketched below. Benefiting from the supervision estimation approach RankAgg, we are able to obtain the aggregated ranking for any sampled task.
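A minimal sketch of such a task sampler is given below, assuming `dataset` yields (example, label) pairs; the class and per-class example counts follow the settings reported in the appendix and are otherwise illustrative.

```python
import random
from collections import defaultdict

def sample_task(dataset, n_classes=100, n_per_class=50):
    """Sample one training task T_i: pick a random subset of classes from the
    auxiliary data D, then up to n_per_class examples per chosen class."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append((x, y))
    chosen = random.sample(list(by_class), min(n_classes, len(by_class)))
    task = []
    for c in chosen:
        k = min(n_per_class, len(by_class[c]))
        task.extend(random.sample(by_class[c], k))
    return task
```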
Training Complexity. The training phase of Model Spider is efficient. First, we pre-extract the features {Φ_𝒟^m}_m=1^M for 𝒟 with all PTMs in advance. Then only the computational burden of the base transferability assessment methods, the rank aggregation, and the optimization of the top-layer parameters is involved.
Furthermore, training tasks with the same set of classes share the same 𝐭̄_𝒯_i.
§.§ Re-ranking with Efficiency-Accuracy Trade-off
The learnable model token captures the PTM's empirical performance on various fields of training tasks, which decouples the task token from the PTM.
Each model token implicitly expresses the field in which the PTM excels, so PTM selection only requires a task token expressing the field to which the task belongs.
In contrast to the general task token μ(𝒯_i), the PTM-specific features Φ_𝒯_i^m of a subset of PTMs provide rich clues about how those PTMs fit the target examples, which are also used in related transferability assessment approaches <cit.>.
We claim that, given specific features from a subset of PTMs when the budget allows, Model Spider can re-rank the estimated PTM order and further improve performance.
Specifically, we extract the PTM-specific task token μ_m( 𝒯 ) ∈ℝ^d_m × C_𝒯 from the specific features Φ_𝒯^m of the m-th PTM as in <ref>.
To account for the different values of d_m due to the heterogeneity of PTMs, we learn a projection 𝐏_m ∈ℝ^d_m × d for the m-th PTM to align the dimensionality of μ_m( 𝒯 ) with the model token.
We then replace the general task token μ( 𝒯 ) with the specific one 𝐏_m^⊤μ_m( 𝒯 ) when calculating the similarity with the token θ_m of the m-th PTM.
The specific task token may facilitate obtaining more accurate estimations.
During the training process, we dynamically select a partial set of PTMs and incorporate the specific tokens into the sampled tasks. Thus, the same Transformer module in <ref> can deal with the new type of tokens.
To differentiate the general and specific tokens, we learn two additional d-dimensional embeddings as prompts. The prompts are added to the input tokens, allowing the transformer to utilize token-type context for a better ranking process.
Notably, μ_m( 𝒯 ) depends on Φ_𝒯^m, and the pre-extracted PTM-specific features for all training tasks make the construction of these specific tokens efficient.
§.§ A Brief Summary of Model Spider
Model Spider learns to rank PTMs with their tokens for a given task, which balances efficiency and accuracy.
During the training, we sample tasks where PTM tokens and the transformer-based similarity are learned. In particular, to enable the model-task similarity to incorporate PTM-specific features, we replace some of the inputs to the transformer with enriched tokens.
We pre-extract PTM-specific features for all training tasks, then the estimated ground-truth and the specific tokens could be constructed efficiently.
During deployment, we first employ a coarse-grained PTM search with general tokens. Then we carry out forward passes over the target task only for the top-k ranked PTMs, where the obtained PTM-specific task tokens re-rank the PTMs by taking into account how the examples are distributed in each PTM's feature space.
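The two-stage deployment can be sketched as follows, assuming `sim` is the trained similarity module, `model_tokens` are the learned θ_m, and `specific_token_fn(m)` runs the m-th PTM's forward pass and returns the projected PTM-specific task token 𝐏_m^⊤ μ_m(𝒯); all names are ours, not the paper's.

```python
import torch

@torch.no_grad()
def select_ptm(sim, model_tokens, general_token, specific_token_fn, k=3):
    """Two-stage deployment sketch: rank all PTMs with the general task token,
    then re-score only the top-k using their PTM-specific task tokens."""
    coarse = torch.stack([sim(theta, general_token) for theta in model_tokens])
    refined = coarse.clone()
    for m in torch.topk(coarse, k).indices.tolist():    # forward passes only for top-k
        refined[m] = sim(model_tokens[m], specific_token_fn(m))
    return torch.argsort(refined, descending=True)       # final PTM ranking
```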
§ EXPERIMENTS
We evaluate Model Spider on two benchmarks: model zoos comprising heterogeneous models pre-trained on the same dataset and on different datasets, respectively. We analyze the influence of key components in Model Spider and visualize the ability of a PTM using spider charts based on the learned tokens.
§.§ Evaluation on a Single-Source Model Zoo
Setups. We follow <cit.> and construct a model zoo with 10 PTMs pre-trained on ImageNet <cit.> across five architecture families, Inception <cit.>, ResNet <cit.>, DenseNet <cit.>, MobileNet <cit.>, and MNASNet <cit.>.
We evaluate various methods on 9 downstream datasets, Aircraft <cit.>, Caltech101 <cit.>, Cars <cit.>, CIFAR10 <cit.>, CIFAR100 <cit.>, DTD <cit.>, Pet <cit.>, and SUN397 <cit.> for classification, UTKFace <cit.> and dSprites <cit.> for regression.
Baselines. There are three groups of comparison methods. The first group creates a proxy between PTM-specific features and downstream labels, such as H-Score <cit.>, NCE <cit.>, LEEP <cit.>, 𝒩-LEEP <cit.>, LogME <cit.>, and PACTran <cit.>.
The second group is based on downstream inter-category features, such as OTCE <cit.>, Label-Feature Correlation (LFC) <cit.>, and GBC <cit.>.
Following <cit.> and <cit.>, we equivalently modify NCE and H-Score to the general model selection application.
Evaluations. For the standard evaluation, we follow the official train-test split of each downstream dataset and utilize all the training samples. In the few-shot evaluation, we consider whether Model Spider can select useful models with limited labeled examples under privacy and resource constraints.
We sample 10 examples per class from the training set as a “probe set” and report the average results over 30 trials. The full results, along with 95% confidence intervals, are presented in the appendix.
Training Details of Model Spider. We implement ψ with the pre-trained Swin-B <cit.> to extract the task tokens. Model Spider is trained on 832 tasks sampled from the mix of 6 datasets, i.e., EuroSAT <cit.>, OfficeHome <cit.>, PACS <cit.>, SmallNORB <cit.>, STL10 <cit.> and VLCS <cit.>. Model Spider utilizes specific features from the top-3 ranked PTMs (out of 10) for downstream tasks, resulting in a 3-4 times speedup.
Results of Standard and Few-Shot Evaluation. For the standard evaluation shown in <ref> and <ref>, Model Spider outperforms the other baselines across datasets, except on Aircraft, where it ranks top-2. It also demonstrates superior stability and outperforms all existing approaches in few-shot scenarios, as displayed in the lower part of <ref>. Consistently ranking and selecting the correct PTMs, Model Spider achieves the highest mean performance among all methods.
Performance comparison of approaches applicable to regression tasks, with the same model zoo and weighted τ_w measurement as in <ref>. The downstream tasks are dSprites and UTKFace.
Dataset    H-Score   LogME    GBC      Ours
dSprites   0.106     0.612    -0.283   0.679
UTKFace    0.075     -0.156   0.052    0.364
§.§ Evaluation on a Multi-Source Model Zoo
We construct a large model zoo where 42 heterogeneous PTMs are pre-trained from multiple datasets.
Setups. PTMs with 3 architectures of similar parameter magnitude, i.e., Inception V3, ResNet 50, and DenseNet 201, are pre-trained on 14 datasets, including animals <cit.>, general and 3D objects <cit.>, plants <cit.>, scene-based <cit.>, remote sensing <cit.> and multi-domain recognition <cit.>.
We evaluate the ability of PTM selection on Aircraft <cit.>, DTD <cit.>, and Pet <cit.> datasets.
Training Details. We use the same task token extractor as in <ref> with 4352 training tasks sampled from the mix of the above datasets for pre-training the model zoo.
Analysis of Multi-Source Model Zoo. With many PTMs in the model zoo, we first set k=0 and select PTMs based on general tokens. We visualize the results in <ref>, with each subfigure showing the transferred accuracy of the selected PTM after fine-tuning against the predicted ranking score. A better-performing method shows a more obvious linear correlation. The results demonstrate that Model Spider achieves the optimum on all three datasets. Furthermore, a visualization of the efficiency, the averaged performance over all datasets, and the model size on this benchmark with standard evaluation is shown in <ref>. The different configurations of k balance efficiency and performance in PTM selection, which “envelop” the results of other methods.
These results confirm that Model Spider performs well in complex scenarios, highlighting its ability to select heterogeneous PTMs in a large model zoo.
§.§ Ablation Studies
We analyze the properties of Model Spider on some downstream datasets, following the evaluation of the single-source model zoo in <ref>.
Will RankAgg provide more accurate ground-truth during training?
As discussed in <ref>, Model Spider is trained on historical tasks, and we utilize RankAgg to approximate the accuracy ranking. We investigate whether this approximation offers better supervision, or whether using a single previous model selection method like H-Score or LogME without aggregation is sufficient. The results in <ref> include CIFAR10 and averaged results over eight classification datasets. It is evident that RankAgg provides stronger supervision during Model Spider's training.
The weighted τ_w of variants when the training supervision is approximated by different methods. “Mean” denotes the averaged performance over 8 downstream datasets in <ref>.
Method                 CIFAR10   Mean
w/ H-Score <cit.>      0.386     0.642
w/ LogME <cit.>        0.695     0.689
w/ RankAgg (Ours)      0.845     0.765
Will more PTM-specific features help?
As mentioned in <ref>, Model Spider is able to incorporate PTM-specific features — obtained from the forward pass of a PTM over the downstream task — to improve the ranking scores.
When no specific features exist (k=0), we use the general token to rank PTMs (the most efficient setting).
In <ref> (a), we show that τ_w increases when Model Spider receives more PTM-specific features, balancing the efficiency-accuracy trade-off.
§.§ Interpreting Model Spider by Spider Chart
An interesting by-product of Model Spider is that we can visualize the ability of a PTM with a spider chart, which demonstrates which fields the PTM is good at. We cluster the datasets in our multi-source model zoo into six major groups. Then, we approximate a PTM's ability on the six types of tasks with the averaged similarity between the PTM and the tasks in each cluster. The larger the similarity, the better the PTM performs on that type of task. In <ref> (b), we find a PTM pre-trained on the AID dataset works well on medical and remote sensing tasks, and a PTM pre-trained on the NABirds dataset shows strong ability on bird and animal recognition. The spider chart helps explain the application scenarios of a PTM and assists PTM recommendation.
§ CONCLUSION
The proposed Model Spider learns to rank PTMs for existing tasks and can generalize the model selection ability to unseen tasks, even with few-shot examples.
The two-stage pipeline in Model Spider enables it to adapt to the available resources.
If resources are limited, a task is matched with PTMs efficiently based on their task-agnostic tokens. When the resource budget is sufficient, a limited number of forward passes are carried out over the top-ranked PTM candidates, which re-ranks the candidates by incorporating the detailed fitness between the task and the selected PTMs.
The learned tokens help construct a spider chart for each task, illustrating its relevance with all PTMs.
The tokens for models and tasks act as a kind of specification that matches the main design in Learnware <cit.>.
Supplementary Material
We provide details omitted in the main paper.
* <ref>: Workflow of Model Spider, encompassing the construction of model-task tokens, training, and testing, presented in a “how to” question-and-answer format.
* <ref>: Experimental setups and implementation details of Model Spider, especially the two types of pre-trained model zoos utilized in the experimental section.
* <ref>: Additional experimental results conducted along different dimensions of robustness analysis.
* <ref>: Additional datasets descriptions and other details mentioned in the main text.
* <ref>: Discussions and future exploration of Model Spider.
§ DETAILS AND DISCUSSIONS OF MODEL SPIDER
In the method section of the main text, we elucidate the comprehensive workflow for training and testing the deployment of Model Spider. This process encompasses three main steps: (1) the extraction of task tokens, (2) the extraction of model tokens, and (3) the construction of a training scheme that assesses the ranking of the match between model-task tokens, thereby establishing the ground-truth rank of the model zoo for a given task. Once these three steps have been accomplished, the subsequent phase entails training Model Spider by leveraging the extracted tokens in conjunction with the ranked ground-truth information.
In essence, the testing and deployment strategy employed by the Model Spider framework epitomizes a balance between flexibility and efficiency. By employing a fixed feature extractor ψ to acquire tokens for downstream target tasks, the trained Model Spider performs a single inference pass, generating an output that quantifies the similarity between each model token and the downstream task token. It then accomplishes the task of ranking the PTMs.
In the forthcoming sections, we elaborate on the details in the form of “how to do it” questions.
The training process of Model Spider is illustrated in Algorithm <ref>, while the sampling procedure for training tasks is elaborated in detail in <ref>. Additionally, in <ref>, we expound upon the training strategy of PTM-specific task tokens. Analogously, the testing process of Model Spider is presented in Algorithm <ref>, and in <ref>, we provide a comprehensive exposition of the entire deployment workflow for ranking pre-trained models.
§.§ How to construct model tokens and task ones
This section supplements the details of <ref> and <ref>, i.e., the construction of the model-task tokens, including the enriched PTM-specific ones.
PTM token. The dimension of the PTM token, i.e., the d of θ_m ∈ℝ^d, is implemented as 1024. It is a learnable parameter that is optimized during training.
Task token. ψ is implemented by a pre-trained Swin-B-based EsViT <cit.> (linked at <https://github.com/microsoft/esvit>), self-supervised learned on ImageNet-1K <cit.> with batch size 512. In our experiments, this encoder acts as a wide-field feature extractor and is fixed without updating. The shape of the task token μ( 𝒯 ) ∈ℝ^d × C_𝒯 varies with the number of categories of the downstream task.
As mentioned in <ref>, task tokens enriched by the PTM-specific features are obtained through the forward pass of a PTM. We use another fully connected layer to project the PTM-specific feature to align with the model token.
§.§ How to sample the training tasks of Model Spider
We sample tasks for training from additional datasets that are disjoint from the downstream tasks. These additional datasets possess notable differences and encompass diverse domains. Notably, Model Spider does not require substantial additional data for training.
We sample the training tasks from a diverse pool of datasets. The number and size of the mixed datasets are controlled within a certain range. For more details, please see <ref>.
§.§ How to see the relationship between RankAgg and Model Spider
We claim that RankAgg proposed by us cannot be considered as a direct baseline method. Firstly, RankAgg involves a substantial computational overhead when used as a stand-alone method for ranking PTMs. This is primarily due to the time and memory requirements of computing the base selection methods. Using RankAgg directly as a baseline would introduce a significant computational burden.
However, we introduce RankAgg as an approximate ground-truth method for pre-computing in the training part of . It is more efficient compared to full parameter fine-tuning.
Actually, Model Spider aims to demonstrate its broad generalization capacity by leveraging RankAgg to process an independent set of mixed data that has no overlap with the test data. This independent evaluation showcases the effectiveness of Model Spider in a real-world scenario and emphasizes its ability to handle diverse data efficiently. RankAgg itself does not play a role during the test execution of Model Spider.
§.§ How to efficiently approximate the training ground-truth of Model Spider
This section complements <ref>, wherein the training and ranking of the model zoo across multiple datasets are discussed. However, obtaining the ranking for all historical tasks through brute force is computationally expensive. To mitigate this issue, we introduce a rank aggregation method denoted as RankAgg, which serves as an approximation of the ground truth ranking.
Existing PTM selection methods rely on the PTM-specific features Φ_^m to estimate the transferability score. Different methods may have diverse score values — a PTM will be placed in different positions based on the scores provided by various methods.
We empirically observe that some popular approaches such as NCE <cit.>, LEEP <cit.>, and LogME <cit.> show “good but diverse” PTM ranking orders, so an intuitive approach to improving the transferability estimation quality is to ensemble their ranking results to a stronger single order.
As mentioned in <ref>, given {𝐭̂_1, 𝐭̂_2, …, 𝐭̂_A } as multiple rankings over the same set of M PTMs for a target task 𝒯, i.e., the orders sorted by the transferability estimates of various methods, we take advantage of Copeland's aggregation method <cit.> to ensemble the orders:
𝐭̄_𝒯 = {t̄_ϕ_m →𝒯}_m=1^M = RankAgg({𝐭̂_1, 𝐭̂_2, …, 𝐭̂_A }) .
Copeland's aggregation compares each pair of ranking candidates and considers all preferences to determine which of the two is more preferred as illustrated in <ref>.
Taking models m and m' as an example, we define a majority relation to express the one-on-one dominance between these two models. Precisely, assume that A_m of the approaches rank model m above model m', i.e., t̂_i, m > t̂_i, m' for A_m of the rankings 𝐭̂_i, while the remaining A_m' ones do the opposite. Note that A_m + A_m' = A. We say m >_𝕄 m' just in case A_m > A_m', and correspondingly m =_𝕄 m' indicates A_m = A_m'. In summary, we define the aggregation score for model m as:
t̄_ϕ_m →𝒯 = #{ i | m >_𝕄 i } + 1/2 #{ i | m =_𝕄 i } ,
where #{·} is the size of the set. The aggregation score for a model is the number of others over which they have a majority preference plus half the number of models with which they have a preference tie.
In our implementation, we aggregate the results of NCE, LEEP, LogME, and H-Score.
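A small NumPy sketch of this aggregation is given below; `score_lists` holds the A base methods' transferability estimates for the M PTMs, and the function and variable names are ours.

```python
import numpy as np

def rank_agg(score_lists):
    """Copeland aggregation sketch: score_lists is a list of A arrays, each
    holding M transferability estimates (one per PTM) from a base method."""
    S = np.asarray(score_lists)                 # shape (A, M)
    _, M = S.shape
    agg = np.zeros(M)
    for m in range(M):
        for j in range(M):
            if j == m:
                continue
            wins = (S[:, m] > S[:, j]).sum()    # methods preferring m over j
            losses = (S[:, j] > S[:, m]).sum()
            if wins > losses:
                agg[m] += 1.0                   # majority preference
            elif wins == losses:
                agg[m] += 0.5                   # preference tie
    return agg                                  # higher = better aggregated rank
```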
RankAgg can become quite time-consuming when calculating PTM ranking scores for the entire dataset, mainly due to the substantial overhead of computing the base selection methods.
In our experimental setup, we integrate the RankAgg method as a module during the training phase, enabling us to pre-compute the rankings for each task. The RankAgg may raise the computational burden if employed directly as a testing baseline.
Therefore, we employ RankAgg on the sampled few-shot tasks to balance ranking accuracy with efficiency, and we only use it in the training part. Note that Model Spider learns from the RankAgg results but is deployed independently of both RankAgg and the other baseline methods. Since RankAgg summarizes the PTM generalization capability on differentiated tasks spanning multiple domains, the model derived from the pre-aggregated rankings can learn the PTM ranking ability on a broader range of unseen tasks.
§.§ How to learn the similarity of model-task token
This section elaborates on <ref>, i.e., the learning process of Model Spider, especially the Transformer-based estimation.
The Transformer-based module of model-task similarity.
The model and task tokens are concatenated as a sequence of features, which the Transformer-based module naturally takes as input. Concretely, transformer(·) is formalized as:
transformer( 𝐇 ) = 𝐇 + α( 𝚀,𝙺,𝚅=𝐇 )
= 𝐇 + softmax( 𝐇 W^𝚀·(𝐇 W^𝙺)^⊤/√(d) ) 𝐇 W^𝚅 ,
where we apply linear projections to the query, key, and values using W^𝚀, W^𝙺, and W^𝚅, respectively. The similarity between tokens is measured by the inner product in the transformed space, so more similar tokens receive larger weights in the attention head α. Here d is the size of every attention head. The output at the position of the model token is passed through a learnable MLP to obtain the estimated fitness score for PTM selection.
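For concreteness, the attention block above can be written out explicitly as follows (a sketch assuming a single head with d_head = d so that the residual connection type-checks; names are ours).

```python
import torch

def attention_block(H, Wq, Wk, Wv):
    """Explicit single-head form of the block above; H is the (1+C_T) x d token
    matrix and Wq, Wk, Wv are d x d projection matrices."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    weights = torch.softmax(Q @ K.T / (H.shape[1] ** 0.5), dim=-1)  # scaled dot-product
    return H + weights @ V                       # residual connection, as in the text
```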
The learnable parameters in Model Spider. To learn a PTM ranker, we optimize the M model tokens {θ_m}_m=1^M, the fully connected projection heads for the PTM-specific task tokens (mentioned in <ref>), and the Transformer-based model-task similarity evaluator sim(·, ·), which is the main mapping and estimation module (mentioned in <ref>).
§.§ How to re-rank with PTM-specific task tokens
As described in <ref> of the main text, we initially extract generic features using the fixed ψ and use the same task token across all PTMs. These features are used to generate a coarse-grained ranking by comparing the similarity between the task token and each model token. However, this ranking is solely based on a standardized task representation and does not account for the specific task-related information of each individual PTM.
Hence, we propose a re-ranking strategy specifically targeted at the top-k PTMs. During the testing phase, we leverage the coarse-grained ranking and perform inference on the downstream task with these top-k PTMs. The resulting PTM-specific task tokens are used to update the similarity between these PTMs and the downstream task, as outlined in Algorithm <ref>. Notably, in the third line of the algorithm, we conduct a re-ranking based on the revised similarity scores obtained through this process.
§.§ How to deploy Model Spider for testing
For a novel downstream task, we employ the generic feature extractor ψ to extract the task token. We then evaluate the similarity between each PTM in the model zoo and the given downstream task using the learned model tokens and the Transformer-based sim(·, ·). If computational resources are available, we can leverage the results from the previous round to enhance the ranking process. Specifically, we can select the top-k PTMs from the previous ranking, extract their features, and apply the re-ranking approach as described in <ref>.
§ EXPERIMENTAL SETUPS AND IMPLEMENTATION DETAILS
In this section, we introduce the experiment setups and implementation details, including constructing the pre-trained model zoo and training as well as deploying .
§.§ Single-source heterogeneous model zoo
Construction of the model zoo.
We follow <cit.> and construct a model zoo with 10 PTMs pre-trained on ImageNet <cit.> across 5 families of architectures available from PyTorch.
Concretely, they are Inception V1 <cit.>, Inception V3 <cit.>, ResNet 50 <cit.>, ResNet 101 <cit.>, ResNet 152 <cit.>, DenseNet 121 <cit.>, DenseNet 169 <cit.>, DenseNet 201 <cit.>, MobileNet V2 <cit.>, and NASNet-A Mobile <cit.>. The model zoo spans PTMs of multiple parameter quantities.
These pre-training models cover most of the supervised pre-training models the researchers employ.
The downstream tasks.
There are 9 downstream tasks from various fields, including Aircraft <cit.>, Caltech101 <cit.>, Cars <cit.>, CIFAR10 <cit.>, CIFAR100 <cit.>, DTD <cit.>, Pets <cit.>, and SUN397 <cit.> for classification, UTKFace <cit.> and dSprites <cit.> for regression. We use official train-test splits on each dataset and calculate the estimation scores for the baseline approaches on the training part.
Transferred accuracy ranking of PTMs (ground-truth) after fine-tuning on the downstream tasks. We follow <cit.> to obtain the ground-truth transferability scores as well as the rankings 𝐭_𝒯 = {t_ϕ_m →𝒯}_m=1^M (M=10) with a careful grid-search of hyper-parameters. Specifically, we grid-search the learning rates (7 learning rates from 10^-1 to 10^-4, logarithmically spaced) and weight decays (7 weight decays from 10^-6 to 10^-3, logarithmically spaced) to select the best hyper-parameters on the validation set and compute the accuracy on the downstream test set.
The training and computation of such a ground truth necessitates a substantial investment of over 1K GPU hours, imposing significant financial and computational burdens. Consequently, obtaining this kind of ground truth for the tasks used to train Model Spider is not feasible.
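For reference, the hyper-parameter grid described above amounts to the following (a trivial NumPy sketch; the grid values come from the text, the variable names are ours).

```python
import numpy as np

# Hyper-parameter grid for the ground-truth fine-tuning runs.
learning_rates = np.logspace(-1, -4, num=7)   # 7 values from 1e-1 to 1e-4
weight_decays = np.logspace(-6, -3, num=7)    # 7 values from 1e-6 to 1e-3
configs = [(lr, wd) for lr in learning_rates for wd in weight_decays]  # 49 runs per (PTM, dataset)
```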
Sampling details of training tasks.
We sample the training tasks from a diverse pool of datasets. The datasets considered for sampling include EuroSAT, OfficeHome, PACS, SmallNORB, STL10, and VLCS.
To ensure a representative training set, we randomly sample 832 tasks from all datasets. Each task is distributed across 2 to 4 mixed datasets and consists of 100 categories, and for each category, we randomly select 50 examples. In cases where the number of categories or examples to be sampled exceeds the specified limits, we select the maximum allowable value.
Discussions. This model zoo covers several classical structures commonly used in deep learning. The number of model parameters ranges widely, with large application potential. Still, there is also a situation where PTMs with larger scales tend to perform better in classification tasks and regression ones, making certain rankings always better on some datasets.
§.§ Multi-source heterogeneous model zoo
Construction of the Model Zoo. As mentioned in the main text, we construct a large model zoo where 42 heterogeneous PTMs are pre-trained from multiple datasets in different domains, including animals <cit.>, general and 3D objects <cit.>, plants <cit.>, scene-based <cit.>, remote sensing <cit.> and multi-domain recognition <cit.>. The concrete datasets are Caltech101 <cit.>, Cars <cit.>, CIFAR10 <cit.>, CIFAR100 <cit.>, SUN397 <cit.>, Dogs <cit.>, EuroSAT <cit.>, Flowers <cit.>, Food <cit.>, NABirds <cit.>, PACS <cit.>, Resisc45 <cit.>, SmallNORB <cit.> and SVHN <cit.>.
The models' structures are 3 similar parameter-magnitude architectures, , Inception V3 <cit.>, ResNet 50 <cit.> and DenseNet 201 <cit.>. The setting of the multi-source heterogeneous model zoo includes significantly more pre-training data than the single-source heterogeneous one described above.
We pre-train the models with 3 structures on 14 datasets mentioned above (3 × 14 = 42, initialized from the weights of the corresponding ImageNet pre-trained models).
The downstream tasks. We select 3 representative datasets as the downstream test tasks and conduct the PTM selection methods on them. Concretely, they are Aircraft <cit.>, DTD <cit.> and Pets <cit.>. As outlined in the following description, we obtain the transferred fine-tuning accuracy (ground-truth) with an equivalent level of hyper-parameters search strategies.
Transferred accuracy ranking (ground-truth). Similarly, we adopt downstream supervised learning with optimizing by cross-entropy loss. We meticulously conduct a grid-search of hyper-parameters, such as optimizers, learning rates, and weight decays (2 optimizers as SGD or Adam, 6 learning rates from 5 × 10^-2 to 10^-4, and 3 weight decay values from 5 × 10^-4 to 10^-5, batch size of 128, and the maximum epoch of 100).
For the multi-domain dataset, like PACS <cit.>, we set the test set to the same domain as the training set to reveal the in-domain performance. For the rest, we use the official train-test splits. We build the model zoo with around 5K GPU hours (on NVIDIA V100 GPUs). Similarly, when dealing with the expanded model zoo, the utilization of rigorous training methodologies to acquire the requisite ground truth for training is eschewed.
Sampling details of training tasks. The sampling process for the multi-source heterogeneous model zoo is consistent with the single-source one mentioned above.
In this case, we use the following datasets as the auxiliary set, i.e., Caltech101, Cars, CIFAR10, CIFAR100, Dogs, EuroSAT, Flowers, Food, NABirds, PACS, Resisc45, SUN397, and SVHN.
We randomly sample 4352 tasks for training.
Discussion. The availability of a multi-source heterogeneous model zoo introduces a wider array of models with varying structures, effectively covering a broader scope of domain knowledge. Consequently, this heightened diversity presents an increased difficulty in accurately ranking PTMs. Particularly, when a substantial gap exists between the characteristics of downstream tasks and the major PTMs, the ranking accuracy of some baseline methods undergoes a precipitous decline.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ Ablation studies on simpler ψ and less training tasks
We conduct additional experiments under weakened conditions to verify the robustness of Model Spider. In <ref>, we first introduce an attenuated, simpler ψ, the additional encoder besides the PTMs in the model zoo. We import the tiny pre-trained Swin Transformer from EsViT (please refer to <ref> for more details). It has about half the number of parameters. The results show that although the attenuated ψ has only half of the parameters, it can still assist in expressing task tokens.
We then halve the number of training tasks to verify the significance of training-task diversity. We find that, except for a performance degradation on the DTD dataset, performance on the other datasets remains strong. Model Spider learns the characteristics of different PTM ability dimensions well despite the reduced number of training tasks.
The weighted τ_w of variants when the training objective is implemented by different loss functions. “Mean” denotes the averaged performance over 8 datasets.
Method                 CIFAR10   Mean
w/ MSE                 0.558     0.526
w/ ListMLE <cit.>      0.777     0.735
w/ ℓ_rank (Ours)       0.845     0.765
§.§ Ablation studies on the influence of training loss
As stated in the main text, the learning process of Model Spider incorporates a ranking loss. To assess the efficacy of this choice, alternative regression or ranking loss functions, such as mean square error (MSE) and ListMLE <cit.>, are employed as replacements. The outcomes, presented in <ref>, clearly demonstrate that the presented ranking loss function surpasses the other alternatives in terms of both effectiveness and robustness. Notably, when alternative loss functions are utilized, the overall performance of Model Spider experiences a substantial decline. These findings underscore the indispensable role of the ranking loss function within the Model Spider framework.
§.§ Ablation studies on the different shots of RankAgg and other baselines
We conduct an ablation analysis to compare RankAgg with several baseline methods on Aircraft and Caltech101 datasets with respect to the τ_w of the PTM ranking. We examined the variation of these metrics and their corresponding confidence intervals (in 95%) as the number of samples per class (shot) increased. The results, depicted in the provided <ref>, are based on the average values and confidence intervals obtained from 30 randomly sampled sets for each shot. Due to computational constraints, certain baseline methods were omitted from the analysis. Notably, our findings reveal that the rank aggregation strategy effectively consolidates diverse perspectives on PTM ranking and consistently surpasses the performance of baselines across almost all shots.
§.§ Ablation studies on the dynamically incremental model zoo
When encountering new PTMs during the model selection task, the previously trained model tokens in Model Spider can be dynamically extended and updated. We employ an incremental learning approach <cit.> to address this challenge. Specifically, we sample the 25% of target tasks where the PTM ranking is closest to the average of all tasks and insert the approximated accuracy of the new PTMs on them.
This newly constructed ranking ground-truths include the correlation between old and new model tokens, reducing the influence of imbalanced incremental data.
We performed ablation studies to investigate the behavior of Model Spider as the pre-trained model zoo dynamically expands. Our analysis focuses on how Model Spider can quickly adapt to newly added PTMs and integrate them into the ranking process. The results in <ref> demonstrate that as the size of the model zoo increased from 3 to 6 and then to 10, Model Spider was able to incrementally learn the recommended ranking for the new additions to the model zoo. The incrementally learned ranking for the entire PTM zoo exhibits slightly lower accuracy than the results of direct training on all PTMs. Nonetheless, Model Spider consistently maintains an excellent level of performance.
§.§ Confidence intervals for few-shot setting in Table 1 of the main text
We include the confidence intervals (in 95%) for the few-shot experiments in the respective section of Table 1 for the main text. These intervals were obtained through 30 repeated trials, providing a robust estimate of the performance variability in a few-shot manner.
§.§ Illustration of re-ranking with PTM-specific task token
In <ref>, we discuss the learnable model token, which captures the empirical performance of a PTM across various training tasks. This training scheme decouples the task token from the forward pass of each PTM. Compared to the task token guided solely by general features, the PTM-specific task token provides more informative clues. By constructing it with the forward pass of a PTM, we can incorporate the source PTM's adaptation information for downstream tasks. Our approach allows for the re-ranking of estimated PTM rankings using PTM-specific task tokens. Since more forward passes consume more resources, Model Spider further improves performance and provides a dynamic resource adaptation option with PTM-specific features.
Illustrated in <ref> is an example of model re-ranking in the context of a heterogeneous multi-source model zoo. Model Spider, after extracting PTM-specific task tokens, accomplishes a more precise PTM ranking. We re-construct the PTM-specific task token for the PTM pre-trained on the Dogs dataset. Our investigation focuses on the Aircraft downstream dataset, and intriguingly, we discover that PTMs trained on multi-scenario, multi-target datasets possess inherent advantages when applied to the aircraft domain. This advantage can be attributed to their generally strong recognition capabilities for diverse targets. Remarkably, even models pre-trained on the Food dataset demonstrate exceptional performance on the Aircraft dataset. Despite the notable dissimilarities between the Food and Aircraft datasets, we conjecture that the Food-pre-trained models not only exhibit proficiency in recognizing multiple targets, encompassing various food items, but also harbor latent potential for fine-grained recognition within the food domain. Consequently, these PTMs transfer their fine-grained recognition capacity to the aircraft domain. In contrast, the Dogs dataset, characterized by a narrow focus on a single biological species, impedes successful transfer to the Aircraft task.
The substantial disparities between the datasets pose a significant challenge for conventional baseline methods, which often fail to prioritize the Food-pre-trained model. However, Model Spider successfully learns to rank the Food-pre-trained model highly and, through a meticulous screening process followed by result re-ranking, identifies that the Caltech101-pre-trained model outperforms the Dogs-pre-trained one due to its superior multi-target recognition capabilities, thereby exhibiting enhanced transfer performance.
§ MORE DETAILS
§.§ Comparison of the time consumption and memory footprint (details in Figure 1(c))
Figure 1(c) shows the average efficiency-performance comparison over 5 baseline approaches and Model Spider. The settings k=0, k=3, k=6, k=36, and k=42 correspond to inference without PTM-specific features and with 3, 6, 36, and 42 of them, respectively.
Following <cit.>, we measure the wall-clock time (second) and memory footprint (MB) with code instrumentation.
§.§ Datasets Description
We provide descriptions of the datasets covered in this paper in <ref>, along with some examples in <ref>.
§ DISCUSSIONS
There are two promising directions for Model Spider. First, Model Spider exhibits the unique characteristic of not relying on forward passes over the model zoo, thereby enabling the evaluation of task compatibility even with classical machine learning models.
Second, Model Spider could be applied when criteria other than fine-tuning performance are used to measure the fitness between a model and a task.
http://arxiv.org/abs/2306.03389v1 | published 2023-06-06 04:06:20
Phase perturbation improves channel robustness for speech spoofing countermeasures
Yongyi Zang, You Zhang, Zhiyao Duan
Primary category: cs.SD | Categories: cs.SD, eess.AS
In this paper, we aim to address the problem of channel robustness in speech countermeasure (CM) systems, which are used to distinguish synthetic speech from human natural speech. On the basis of two hypotheses, we suggest an approach for perturbing phase information during the training of time-domain CM systems. Communication networks often employ lossy compression codecs that encode only magnitude information, therefore heavily altering phase information. Also, state-of-the-art CM systems rely on phase information to identify spoofed speech. Thus, we believe the information loss in the phase domain induced by lossy compression codecs degrades performance on unseen channels. We first establish the dependence of time-domain CM systems on phase information by perturbing phase during evaluation, showing strong degradation. Then, we demonstrate that perturbing phase during training leads to a significant performance improvement, whereas perturbing magnitude leads to further degradation.
Index Terms: speech recognition, human-computer interaction, computational paralinguistics
§ INTRODUCTION
Speech generative systems have been progressing rapidly in recent years <cit.>. The state-of-the-art speech generative model VALL-E <cit.> can even mimic a person's voice with only 3 seconds of speech. If misused by criminals, these deep generative algorithms could aid in spoofing attacks. Therefore, the research community has been developing speech countermeasure (CM) systems for distinguishing synthetic speech from human natural speech. For real-world applications, spoofing attacks could be generated by algorithms and transmitted through communication channels that are novel to the CM system. Therefore, CM systems need to generalize to unseen synthetic attacks and channel variations <cit.>.
Such generalization ability has been studied by the CM community, especially driven by the ASVspoof challenge series <cit.>. In ASVspoof2019, the spoofing attacks in the evaluation set are created with different generative algorithms than the training set, enforcing the evaluation on the generalization ability to unseen attacks. Raw-waveform-based CM systems demonstrated especially great performance compared to frequency-magnitude-based CM systems.
In ASVspoof2021 <cit.>, the issue of channel robustness was investigated by introducing channel variation on the evaluation set of ASVspoof2019. The results show strong performance degradation when evaluating on unseen channels. There have been some solutions provided by the participants of ASVspoof2021 <cit.> that mainly use empirical methods to mitigate the problem, such as data augmentation and feature engineering.
In this paper, we propose to alleviate the channel robustness issue of state-of-the-art CM systems through phase perturbation in training. This proposal is developed based on two observations and our hypotheses for explaining the observations: 1) State-of-the-art CM systems are time-domain systems; We hypothesize that they rely on phase information to detect synthetic spoofing attacks, and 2) Communication channels often employ lossy compression codecs that are designed to only encode magnitude information; We hypothesize that they alter phase information in speech, making phase-aware CM systems difficult to generalize to unseen channels. Based on these hypotheses, we propose to perturb the phase when training phase-aware CM systems. We believe that this will make such systems less reliant on phase information, hence more robust to channel variations.
We design a set of experiments to test our hypotheses. By training three state-of-the-art time-domain CM systems and perturbing phase during evaluation, we discover that the performance of all CM systems degrades as the amount of phase perturbation increases. We also perturb magnitude during evaluation for comparison and observe that better-performing CM systems degrade more strongly under phase perturbation than under magnitude perturbation. This suggests that the performance boost of time-domain CM systems likely comes from better utilization of phase information, yet this reliance turns into overfitting when phase is perturbed during evaluation.
Then, we employ the best performing CM system AASIST <cit.> and perturb phase during training, and evaluate on data from unseen transmission channels. We observe a significant performance improvement, where the best configuration demonstrates a 26.2% relative improvement on the equal error rate (EER). As a comparison, magnitude perturbation during training degrades performance in all settings. This suggests that the lossy compression in channel effects indeed retains magnitude information but corrupts phase information, and the phase perturbation in training is an effective strategy toward mitigating the channel robustness issues of time-domain CM systems. To our best knowledge, this is the first study that investigates the effects of phase and magnitude perturbation on CM systems.
All code, audio samples, and trained model
are open-sourced[<https://yongyi.dev/phase-antispoofing>].
§ CM SYSTEMS RELY ON PHASE INFORMATION
In this section, we study the dependence of raw-waveform-based CM systems on phase information. We select three state-of-the-art CM systems with top performance on the ASVSpoof2019LA dataset <cit.>: RawNet-2
<cit.>, RawGAT-ST <cit.>, and AASIST <cit.>.
§.§ Dataset
We employ ASVspoof2019LA <cit.>, which is the logical access (LA) subset of the ASVspoof2019 challenge. It contains bonafide speech from the VCTK corpus <cit.>, with a vast variety of attacks. We follow the same training, development, and evaluation split as the ASVspoof2019 protocol. For the spoofing attacks, the training and development sets contain the same 6 attacks, and the evaluation set contains 13 unseen attacks.
§.§ Experimental setup
Phase perturbation. During evaluation, we randomly perturb the phase at different amounts.
For an utterance with perturbation amount n, we extract its phase and magnitude spectrograms. The phase ϕ of every bin in the phase spectrogram is randomly reassigned a value in the range of [ϕ-n/2, ϕ+n/2], while the magnitude spectrogram is unchanged. The phase perturbed utterance is then synthesized using the original magnitude and the perturbed phase spectrograms. We select four perturbation settings, with phase perturbation n as 1/2π, π,3/2π and 2π, respectively.
Magnitude perturbation. As a comparison, we also conduct a set of experiments with magnitude perturbed. For an utterance with a perturbation amount m dB signal-to-noise ratio (SNR), we add white noise to the utterance to make its SNR equal to m dB, then extract its magnitude spectrogram.
The magnitude-perturbed utterance is then synthesized from the noisy magnitude spectrogram and the original utterance's phase spectrogram. If the synthesized audio waveform exceeds the maximum limit, we apply normalization to prevent clipping. We selected five perturbation settings for magnitude: 10 dB, 5 dB, 0 dB, -5 dB and -10 dB SNR. Figure <ref> visualizes both phase and magnitude perturbation processes, while Figure <ref> shows an example utterance with all perturbation settings. A setting without perturbation is also introduced as the baseline.
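To make the two perturbations concrete, the following NumPy/librosa sketch implements them for a single utterance; the STFT parameters and function names are our own illustrative choices and are not taken from the paper.

```python
import numpy as np
import librosa


def perturb_phase(y, amount, n_fft=512, hop=128, seed=0):
    """Redraw each STFT phase bin uniformly in [phi - amount/2, phi + amount/2],
    keep the magnitude, and resynthesize the waveform."""
    rng = np.random.default_rng(seed)
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    mag, phase = np.abs(S), np.angle(S)
    phase += rng.uniform(-amount / 2, amount / 2, size=phase.shape)
    return librosa.istft(mag * np.exp(1j * phase), hop_length=hop, length=len(y))


def perturb_magnitude(y, snr_db, n_fft=512, hop=128, seed=0):
    """Add white noise at the target SNR, take the noisy magnitude, and
    resynthesize with the clean phase; normalize if the result would clip."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(y))
    noise *= np.sqrt(np.mean(y**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    S_clean = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    S_noisy = librosa.stft(y + noise, n_fft=n_fft, hop_length=hop)
    out = librosa.istft(np.abs(S_noisy) * np.exp(1j * np.angle(S_clean)),
                        hop_length=hop, length=len(y))
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out
```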
Evaluation metric. We follow the ASVspoof challenge series and employ equal error rate (EER), defined as the point where the false acceptance rate equals the false rejection rate. Lower EER indicates better performance.
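For completeness, the EER can be computed from per-utterance CM scores with a simple threshold sweep, as in the sketch below (the score and label conventions are our assumption):

```python
import numpy as np


def equal_error_rate(scores, labels):
    """EER from CM scores (higher = more bonafide) and labels
    (1 = bonafide, 0 = spoof)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    far, frr = [], []
    for t in np.unique(scores):
        accept = scores >= t
        far.append(np.mean(accept[labels == 0]))   # spoof falsely accepted
        frr.append(np.mean(~accept[labels == 1]))  # bonafide falsely rejected
    far, frr = np.array(far), np.array(frr)
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2
```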
Training details. All time-domain CMs are trained with the ASVspoof2019LA training set, and validated on the development set. To reproduce the best performance for all CM systems, we use the hyper-parameter settings as reported in <cit.>, and select the checkpoint with the lowest validation EER within 100 epochs for evaluation. All models are trained with three distinct random seeds and reported with an averaged result to mitigate the impact of the random seed on model performance <cit.>.
§.§ Results and analyses
Table <ref> demonstrates the EER performance for all three CM systems evaluated with phase perturbation. As a sanity check, the performance of all CM systems on ASVspoof2019LA is similar to that reported in <cit.> and <cit.>. As the amount of phase perturbation increases, the performance decreases on all three CM systems, indicating that time-domain CM systems utilize phase information to discern spoofed speech. As a comparison, Table <ref> shows the results of all CM systems evaluated with different magnitude perturbation settings.
We use the pooled EER from all phase or magnitude perturbed settings to represent the performance under this perturbation setting and compare them against the baseline setting. As illustrated in Figure <ref>, CM systems with better performance in ASVspoof2019LA show more degradation in phase-perturbed settings and less degradation in magnitude-perturbed settings, indicating that better-performing time-domain CM systems also rely more on phase information.
§ PHASE PERTURBATION DURING TRAINING IMPROVES CHANNEL ROBUSTNESS
In Section 2, our experiments establish that time-domain CM systems overfit to phase information in the training data.
In this section, we explore perturbing phase during training to lessen the phase reliance of time-domain CM systems and thereby improve channel robustness.
Communication channels typically employ lossy compression codecs, many of which focus on encoding only frequency-magnitude information, since humans are more sensitive towards them <cit.>. After transmitting through such compression, much phase information is corrupted, making the performance of time-domain CM systems degrade as they have much reliance on phase information.
By perturbing the phase during training, we can reduce the CM systems' reliance on phase information and build more robust CM systems. However, we expect that this perturbation should not be so strong that it completely removes the reliance on phase, as some useful phase information may still remain for the time-domain CM systems to pick up, even after going through the communication channel.
We hypothesize that there is a midway setting between not perturbing and fully perturbing the phase that provides the best trade-off and hence the best performance.
§.§ Channel-shifted dataset
To evaluate the performance of CM systems in unseen communication channels, we use ASVspoof2021LA, the logical access (LA) sub-track of the ASVspoof2021 challenge. This subset transmitted the entire ASVspoof2019LA set, along with additional samples, through seven communication channels, denoted as C1 to C7. C1 is the same channel as ASVspoof2019LA, while C2 to C7 are unseen channels. Amongst the unseen channels, C2 and C5 use the time-domain compression algorithms a-law and μ-law, while C4, C6, and C7 employ magnitude-based compression codecs, G.722 <cit.>, GSM <cit.>, and OPUS <cit.>. C3 differs from C2 by transmitting over a public switched telephone network, therefore introducing uncontrollable and unknown artifacts, such as data corruption during transit.
§.§ Experimental setup
As described in Section 2, all three CM systems heavily utilize phase information. To save space without loss of generality, we select AASIST for in-depth study due to its superior performance on ASVspoof2019LA.
We perturb the phase of the training portion of ASVspoof2019LA with the same four perturbation amounts: 1/2π, π,3/2π and 2π, then evaluate on ASVspoof2021LA. For magnitude, we use the same five magnitude perturbation settings as in Section 2. A baseline setting with no perturbation is also provided. Each setting is trained with three distinct random seeds and results are averaged. To better utilize GPU resources, the batch size of the experiments in this part is slightly increased, while all other hyperparameters
remain unchanged.
Following the ASVspoof2021 challenge, when reporting performance on channel-variant data, we use pooled EER of all channels to represent the overall performance of CM systems.
§.§ Results and analysis
We begin by confirming our first hypothesis: by perturbing phase or magnitude during training, we can reduce the time-domain CM system's reliance on phase or magnitude. To do so, we train the CM system with different levels of phase or magnitude perturbation, and test it on phase or magnitude perturbed version of the ASVspoof2019LA evaluation set. For phase perturbed test data, we use the maximum phase perturbation (i.e., 2π), and for magnitude perturbed test data, we use -10 dB SNR for the perturbation.
Since the perturbations in the test data are so strong, we expect to see performance degradation when there is no perturbation during training, followed by performance improvement as more of the corresponding perturbation is applied during training.
Results are as shown in Figure <ref>. In Figure <ref>(a), the performance improves with more phase perturbation in training, indicating less dependence on phase information. Similarly, Figure <ref>(b) illustrates the results on magnitude perturbation. We observe performance improvement as magnitude perturbation increases during training, indicating that the CM system is less dependent on magnitude information.
With this established, we perform evaluation of all training settings on ASVspoof2021LA, and the results are shown in Table <ref>.
As expected, as magnitude is perturbed during training, performance on all unseen communication channels degrades, showing that removing reliance on magnitude information is harmful for CM systems' performance. This suggests that CM systems can benefit from the magnitude information preserved by lossy compression codecs.
At the same time, all settings with phase perturbed show some improvement in pooled results, achieving a more robust overall performance compared to non-perturbed settings.
This indicates that by being less sensitive to phase information, time-domain CM systems can better generalize to unseen communication channels. The best-performing setting shows a relative EER improvement of 26.2% compared to the no-perturbation baseline, without introducing any channel data during training.
We also notice that the best-performing phase perturbation setting appears at π, aligning with our hypothesis of a “midway” perturbation setting.
Amongst the individual unseen communication channels, C2 and C5 are codecs that encode the time-domain waveform directly, and we observe slight performance degradation as phase information is perturbed. C4, C6, and C7, on the other hand, use lossy codecs that encode the magnitude spectrogram, and
we see performance improvement with phase perturbation. Interestingly, we notice that on different channels, the best-performing phase perturbation setting appears at different perturbation amounts. This suggests that different amounts of useful phase information are retained by different compression codecs.
C3 is transmitted through real-world phone lines and has more unknown data corruption en route, and we observe that phase perturbation during training improves performance. At different phase perturbation settings, C3 also shows more fluctuation, indicating that transmission artifacts are more likely to introduce phase artifacts as well.
On the C1 condition, even though no unseen communication artifacts are present, we still see a slight performance improvement at 1/2π. We believe that this is likely because phase perturbation also brings better generalization ability to unseen attacks by masking part of model-specific artifacts. Speech generative systems typically take magnitude features as input and synthesize time-domain data; therefore, they have a heavier burden generating phase information. This makes speech generative algorithms more prone to creating phase artifacts that are model-specific. As we have established time-domain CM systems' reliance on phase information, we believe CM systems may have overfitted to specific artifacts in training data, and phase perturbation can also aid in mitigating this effect.
§ CONCLUSIONS
In this paper, we observed significant degradation of various state-of-the-art time-domain CM systems when evaluated on phase-perturbed speech utterances. This degradation may cause channel robustness issues, since many communication channels employ lossy codecs that only encode frequency-magnitude information while losing much phase information. We proposed to mitigate this issue by perturbing phase in training. Systematic evaluation on real-world channel-variant data verified that perturbing phase in training does significantly improve the channel robustness of a state-of-the-art time-domain CM system.
For future work, we plan to use this insight to design CM systems that strike the balance between modeling useful phase information and being less sensitive to channel phase alternation.
§ ACKNOWLEDGEMENT
This work is partially supported by a New York State Center of Excellence in Data Science award, and synergistic activities funded by the National Science Foundation (NSF) under grant DGE-1922591.
entry_id: http://arxiv.org/abs/2306.02057v1 | published: 20230603091435 | title: DataAI-6G: A System Parameters Configurable Channel Dataset for AI-6G Research | authors: [Zibing Shen, Jianhua Zhang, Li Yu, Yuxiang Zhang, Zhen Zhang, Xidong Hu] | primary_category: eess.SP | categories: [eess.SP]
DataAI-6G: A System Parameters Configurable Channel Dataset for AI-6G Research
Zibing Shen, Jianhua Zhang, Li Yu, Yuxiang Zhang, Zhen Zhang, Xidong Hu
State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications,
Beijing, China
Email: {szb, jhzhang, li.yu, zhangyx, zhenzhang, hxd}@bupt.edu.cn
July 31, 2023
===============================================================================================================================================================================================================================================================================
With the acceleration of the commercialization of fifth generation (5G) mobile communication technology and the research on 6G communication systems, communication systems are characterized by high frequencies, multiple bands, high-speed user movement, and large antenna arrays.
These characteristics make it difficult to obtain accurate channel state information (CSI), which greatly restricts the performance of traditional communication methods. Therefore, there has been a lot of interest in using artificial intelligence (AI) instead of traditional methods to improve performance. A common and accurate dataset is essential for research on AI-aided communication.
However, the common datasets nowadays still lack some important features, such as mobility and spatial non-stationarity.
To address these issues, we present a dataset for future 6G communication, in which these issues are handled with specific simulation methods and accompanying code processing.
AI, mobile features, spatial non-stationary features
§ INTRODUCTION
6G mobile networks are expected to support further
enhanced mobile broadband, ultramassive machine-type, enhanced ultrareliable and low-latency, long-distance, and high-mobility communications and other
emerging scenarios for the 2030 intelligent information society, which requires instantaneous, extremely high-speed wireless connectivity <cit.>.
These new scenarios and requirements make it necessary to consider an increasing number of features when modeling the channel. The channel model based on statistical characteristics becomes more and more complex as the number of characteristics considered increases, and an overly complex model is not conducive to future research. In order not to further complicate the model, researchers have come up with the idea of using AI techniques instead of, or in addition to, optimizing the traditional modeling approach. This idea is well supported in today's era of big data.
With the continuous exploration of researchers, machine learning (ML)-
based AI techniques have become the
key to develop the next-generation communication system <cit.>.
High-mobility communication makes the CSI become outdated within a short time; multi-antenna and multi-band operation makes acquiring CSI difficult and incurs significant overhead; and with the usage of ultramassive MIMO, the energy consumed by signal transmission and RF chains becomes considerable. These factors make it very difficult to obtain channel information in the space, time, and frequency domains. In order to get accurate CSI and reduce overhead, AI-based time-, frequency-, and space-domain channel extrapolation <cit.> and compressive sensing for massive MIMO CSI feedback <cit.> have been presented.
In the millimeter wave band, blocking has a significant impact on the quality of communication and the overhead of beam selection is huge, which are the challenges of future high frequency communication. In <cit.>, two AI-based methods for blockage prediction and beam prediction are proposed, and both of these methods effectively solve the above problems. In addition, the prediction of a particular channel characteristic, such as path loss <cit.>, can also be very useful to further improve the communication quality.
Adding AI at the base station and user side can improve the performance and reduce the overhead of communication. And to implement these AI applications, a large amount of channel data is necessary. The set of these data is the essential channel dataset in AI training, as shown in Fig. 1.
To meet the data requirements of the researcher, we have introduced the DataAI-6G dataset, which is designed for
machine learning research in wireless channel transmission and modeling. More specifically, using this dataset, researchers can easily construct the inputs and outputs of several machine learning applications. The DataAI-6G dataset provides angle of departure (AOD), angle of arrival (AOA), delay, phase, power of each path and the path loss between any pair of
transceiver antennas. These data are obtained from the
ray-tracing simulator, Wireless InSite, developed by Remcom <cit.>. Remcom Wireless InSite is widely used in mmWave and massive MIMO research in both industry and academia, and has been verified with real-world channel measurements <cit.>.
More importantly, our dataset comes with a dedicated set of codes that can synthesize the UL and DL channel matrices and supports user movement. More details will be discussed in the rest of this paper.
§ DESIGN OF DATASET
According to <cit.>, the use of ultra-large antenna arrays introduces near-field spatial non-stationary features, which is an essential feature in 6G communications.
In <cit.>, the existence of a UL-to-DL mapping has been proved, so many researchers are studying this mapping. In future wireless communication, high-speed movement is a key feature, but users in existing datasets are usually static.
So, in our dataset, we have considered the above three features. Firstly, in the data simulation stage, we simulate each antenna array element separately instead of using the plane-wave synthesis method. Then, in the code synthesis stage, the dataset can synthesize the CSI of the UL/DL channel using the angle, delay, power, and phase information of each path. More importantly, the dataset is able to introduce further Doppler phase shifts on this basis to obtain the CSI in the moving state. The specific synthesis principle is as follows.
In our dataset, multi-antenna technology has been considered.
In order to get the channel matrix, we first need to calculate the channel impulse response (CIR) of each antenna pair. Consider a MIMO system with multiple base stations and multiple user areas. For the k-th antenna
at the base station x and the g-th antenna at the u-th user point
in the y-th user area, there will be a large number of Multipath
components (MPC) between them. So the CIR can be written as
h_x_k,y_ug=∑_i=1^Mα_ie^jφ_iδ(τ-τ_i),
where α_i and φ_i represent the amplitude and the phase of the i-th path, respectively. M denotes the total number of paths between these two antennas and τ_i denotes the delay of the i-th path.
However, in real-world communication, the receiving antenna usually samples the received signal at a certain frequency, so the received signal will be divided into multiple time-delayed distinguishable paths. In the DataAI-6G dataset, we simulate this reception method to obtain the channel response that most closely resembles the actual situation.
Assuming that the receiver samples the received signal at a sampling interval of 1/BW (BW is the channel bandwidth),
the channel impulse response at the i-th sampling interval can be expressed as
h^i_x_k,y_ug=(∑_n=1^N_iα_ne^jφ_n)δ(τ-τ_i),
where α_n and φ_n represent the amplitude and the phase of the n-th path, respectively.
N_i denotes the total number of paths in the i-th sampling interval and τ_i denotes the
delay of the i-th sampling interval.
After that, the impulse responses of all sampling intervals are superimposed and
converted to the frequency domain to obtain the frequency domain channel response.
Then, the user-set UL/DL carrier frequency is brought into the formula of frequency domain channel response to obtain a complex value, and this complex value is stored as an approximate channel response in the generated dataset.
The UL/DL channel response can be written as
H^up_x_k,y_ug=∑_i=1^L(∑_n=1^N_iα_ne^jφ_n)e^-j2πf_upτ_i,
H^down_x_k,y_ug=∑_i=1^L(∑_n=1^N_iα_ne^jφ_n)e^-j2πf_downτ_i,
where L denotes the number of sampling intervals and f_up/f_down denotes the UL/DL carrier
frequency.
Due to the difference of UL and DL carrier frequencies, the UL and DL channel response will be different in magnitude and phase. And since the UL and DL channel responses are calculated using similar formulas, there is a strong correlation between them. The advantage of using this approach to obtain the UL and DL channel responses is that the researcher has the flexibility to set the UL and DL carrier frequencies. However, since only the simulation data of the DL channel are available, this synthesis can only approximate the UL channel, which is still lacking in terms of accuracy. To further improve the accuracy of the UL and DL channels, UL simulation data or actual measurement data can be included in future studies.
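To illustrate this synthesis step, the sketch below evaluates Eq. (3) for one TX/RX antenna pair; the variable names and the delay-binning convention are our own simplifications and not the released dataset code.

```python
import numpy as np


def channel_response(alpha, phi, tau, bw, f_c):
    """Approximate frequency-domain channel response of Eq. (3): paths are
    grouped into delay bins of width 1/BW, complex gains are summed per bin,
    and each bin is rotated at the UL or DL carrier frequency f_c."""
    alpha, phi, tau = map(np.asarray, (alpha, phi, tau))
    dt = 1.0 / bw                                 # delay resolution of the receiver
    bins = np.floor(tau / dt).astype(int)
    h = 0.0 + 0.0j
    for i in np.unique(bins):
        gain = np.sum(alpha[bins == i] * np.exp(1j * phi[bins == i]))
        h += gain * np.exp(-1j * 2 * np.pi * f_c * i * dt)
    return h
```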
On the basis of the UL/DL features, we will proceed to discuss the mobile features. To get the mobile features, we need to take Doppler phase shift into consider. The expression of the Doppler phase shift can be written as
Δφ=2πv·n/λ_cΔ t,
where v denotes the velocity vector in the direction of movement and n denotes the direction vector of AOA in DL channel and negative direction vector of AOD in UL channel. λ_c and Δ t represents the wavelength of the carrier wave and time interval, respectively.
After obtaining the Doppler phase shift, we add it to Eq. (3). For the k-th
antenna at the base station x and the g-th antenna at the u-th user point in the y-th user area, the frequency
domain channel response in the moving state can be written as
H^up_x_k,y_ug=∑_i=1^L(∑_n=1^N_iα_ne^j(φ_n+Δφ))e^-j2πf_upτ_i.
H^down_x_k,y_ug=∑_i=1^L(∑_n=1^N_iα_ne^j(φ_n+Δφ))e^-j2πf_downτ_i.
The channel response obtained in this way contains mobile features, so our dataset is well suited to researchers for the study of mobile features.
§ DATASET GENERATION
We build a large outdoor street scenario with multiple configurations in multiple bands
using Wireless Insite <cit.> and simulate it to obtain a set of channel parameters. We
provide a generic framework that allows researchers the flexibility to configure some
parameters in the code according to their needs. As shown in Fig. 2, the researchers can then bring the
raw channel parameters as input to the framework to output the customized dataset.
§.§ Outdoor street scenario
The outdoor street scenario is dedicated to providing researchers with diverse scene
features to meet the different requirements of machine learning tasks. The whole
scenario is 646 m long and 290 m wide, which is an extensive outdoor scene, as shown
in Fig. 3. Two horizontally oriented main streets run through the whole scenario,
and four vertically oriented secondary streets are connected to the horizontally oriented
ones. To provide multi-regionalized data, we set up at least one base station for each
street. In total, we build 8 BS and 12 user grids, which are scattered within 6
streets. The users on the streets are evenly distributed within the grid. In addition, the streets are flanked by buildings of different heights and vegetation of varying sizes.
For simplicity, the buildings are rectangular and solid, so that the rays from the base
station cannot penetrate the buildings.
In more detail, the locations of these 8 BSs are distributed on both sides of the street. Four of the base
stations are set up in two main streets in a horizontal direction, and four base stations
are respectively set up in four streets in a vertical direction. Each base station is
equipped with different types of antennas as well as different heights. TX2 and TX5
are equipped with a single element, which is the omnidirectional antenna, and the rest
of the base stations are equipped with multiple antennas. It is necessary to elaborate
that each array element constituting the MIMO antenna array is a half-wave dipole,
and the distance between them is half a wavelength. Users are evenly distributed among 12 user grids, and each starting point of the user grid is
located in the left corner. An example of the user points arrangement is shown in Fig. 4. The users in RX1, RX2 and RX3 are equipped with a 2×2 uniform planar array,
and the other users in the rest of the user grids are equipped with an omnidirectional antenna.
We use the X3D model, which is by far the most versatile, functional, and accurate
propagation model in Wireless Insite. Considering the meaningful received power, for
simplicity, only the first 4 reflections are considered. More importantly, the accuracy
of blocking and beam prediction can be further improved by exploiting the diffraction
properties <cit.>, <cit.>. But on the other hand, the received power decreases significantly
as the number of diffractions increases, so we turn on only one diffraction. After we configure the main parameters in Wireless Insite, it performs signal
propagation simulation and finally gives ray tracing results. The results of the simulation contain (i) the azimuth and elevation angles of departure of each path, (ii) the azimuth and elevation angles of arrival of each path, (iii) the path receive power, (iv) the path phase and (v) the propagation delay of each path.
Wireless Insite can also output the overall received power, the overall phase, and the path loss of a receive point for all valid paths.
§.§ Advantages of dataset
Compared with other datasets, such as DeepMIMO <cit.> and the Wireless AI Research Dataset <cit.>, our dataset has three major advantages (as shown in Table 1): (i) spatial non-stationary features are considered in the simulation; (ii) with a user moving function that considers Doppler, users can freely configure the moving route and moving speed; and (iii) any number of user points can be generated. In the rest of this part, we will explain in detail how the last two functions are implemented in the code.
In the profile of the code, the researcher can select the base station and the user area to be activated, and can also select the desired user points in the user area, while the number of antennas of users and base stations can be freely set according to the requirements. After setting the above parameters, researchers can choose the frequency of the channel(3.5 GHz, 28 GHz or 60 GHz), the antenna pattern of users and base stations, the bandwidth and carrier frequencies of UL/DL channel. Then the code will extract the AOA, AOD, power, delay and phase of each path, and the path loss of the channel will also be extracted.
After obtaining the angle, phase, delay, and power information of each path,
the dataset will synthesize the channel response using Eq. (3).
If researchers want the user to move in the user grid, they just need to set the parameter 'move' to 't'. In the DataAI-6G dataset, the user can move along four directions: up, down, left, and right. The researcher only needs to set the corresponding parameters in the configuration file to specify both the path and direction of movement. Then, the code will perform point sampling on the movement path
according to the user-set speed and sampling interval. In order to calculate the Doppler phase shift due to the movement, the distance difference between the virtual point obtained by point sampling and the user point in the user grid has to be calculated first.
Δ d=κ vΔ t -mΔ s,
where κ means the κ-th sample point, Δ s denotes the interval between real user points, and v and Δ t represent the speed of the user and the sampling interval. m in Eq.(6) means the m-th user point in the moving path, which is calculated by ⌈κ vΔ t/Δ s-1⌉. Then, the code will bring Δ d into Eq.(4) to calculate the Doppler phase shift, which can be written as
Δφ=2πΔdm·n/λ_c,
where m denotes the unit vector in the direction of movement and n denotes the direction vector of the AOA/AOD in the DL/UL channel. λ_c represents the wavelength of the carrier wave. After obtaining the Doppler phase shift, the dataset will synthesize the channel response in the moving state using Eq.(5).
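The following sketch illustrates how the code described above can compute the Doppler phase shift of Eqs. (6)-(7) for one virtual sample point; names and edge-case handling are our own simplifications.

```python
import numpy as np


def doppler_shift(kappa, speed, move_dir, dt, ds, aoa_dir, wavelength):
    """Doppler phase shift for the kappa-th virtual sample point on a moving
    path: the residual distance to the nearest real user point (Eq. (6)) is
    projected onto the arrival direction (Eq. (7))."""
    m = int(np.ceil(kappa * speed * dt / ds - 1))   # nearest real user point
    delta_d = kappa * speed * dt - m * ds           # Eq. (6)
    move_dir = np.asarray(move_dir, dtype=float)
    move_dir /= np.linalg.norm(move_dir)
    aoa_dir = np.asarray(aoa_dir, dtype=float)
    aoa_dir /= np.linalg.norm(aoa_dir)
    return 2 * np.pi * delta_d * np.dot(move_dir, aoa_dir) / wavelength  # Eq. (7)
```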
§ CASE OF BEAM PREDICTION
In this section, we will use a beam prediction algorithm to validate our dataset.
We consider a mobile cellular network including one base station (BS) and one moving user equipment (UE). The UE is communicating with the BS, and both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions exist during the movement. Since future networks are likely to operate in both the sub-6 GHz and mmWave bands, we assume that the BS is equipped with two antenna arrays. One works at the sub-6 GHz band and the other works at the mmWave band.
In our method, only the sub-6 GHz uplink (UL) channel is utilized for
beam prediction. During the UL signaling, UE sends pilot
signals to the BS in each scheduling time frame, and the BS
receives the UL signal. Denote y_up[k] as the received UL
signal at the k-th subcarrier, y_up[k] can be shown as
y_up[k]=h_up[k]s_p+n_up[k],
where h_up[k] denotes the UL sub-6 GHz channel and s_p
denotes the signal transmitted from UE. n_up[k] represents
the additive white Gaussian noise (AWGN).
Let h_down[k] denotes the DL channel. The received signal of the
UE at both sub-6 GHz and mmWave bands is given by
y_down[k]=h_down[k]fs_d+n_down[k],
where s_d represents the signal transmitted from the BS and n_down[k]
represents the AWGN. For the sub-6 GHz band, f denotes the
sub-6 GHz beamforming (BF) vector, which can be
obtained by matched filtering. f_sub6 can be written as
f_sub6=h^*_up[k]/| h_up[k] |.
In the millimeter wave band, a large number of antennas will be used, resulting in high overhead for the direct calculation method. Therefore, in order to reduce the overhead in high-band communications, we generally use a codebook for beam selection: f_mmW∈ F_mmW denotes a mmWave BF vector, where F_mmW is a set of pre-prepared beamforming vectors. Denote P/σ^2 as the DL transmit signal-to-noise ratio (SNR).
The DL data rate for both sub-6 GHz and mmWave channels
can be shown as
R(h_down[k],f)=Blog_2(1+P/σ^2| h_down[k]f|^2).
The optimal mmWave BF vector f^* is selected to maximize
the mmWave rates R. And the optimal BF vector f^* is
utilized to train the machine learning model. f^* can be given by
f^*=argmax_f_mmW∈ F_mmWR(h_down[k],f_mmW).
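The selection in Eqs. (11)-(12) amounts to an exhaustive rate computation over the codebook, as in the following sketch (array shapes and names are our assumption):

```python
import numpy as np


def optimal_beam(h_down, codebook, snr, bandwidth):
    """Index of the codebook beam maximizing the DL rate of Eq. (11), used as
    the training label of Eq. (12). `h_down` has shape (num_antennas,);
    `codebook` has shape (num_beams, num_antennas)."""
    rates = bandwidth * np.log2(1.0 + snr * np.abs(codebook @ h_down) ** 2)
    return int(np.argmax(rates))
```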
In this method, we will use the model in <cit.> and the DataAI-6G dataset to predict the DL optimal beam at 60 GHz at time slot t+1 using the UL channel response at 3.5 GHz from time slot t-24 to time slot t. To select the optimal beam of 60 GHz for
training and testing, an N-phase codebook C is utilized. Each
code in C can be utilized to generate a beam f_mmW, and all
beams form a beam set F_mmW. The method of selecting the optimal beam from F_mmW is shown in Eq.(12).
We choose the TX3 BS and the RX6 UE area, and set the user to move in this area at speeds of 72 km/h, 90 km/h, and 108 km/h. The user moves in the positive direction of the x-axis with a sampling frequency of 1 kHz. In order to be able to compare with the dataset in <cit.>, we take more than 220k points in total, making the volume of the data comparable to that in <cit.>.
In more detail, we choose the 3.5 GHz data to generate the UL channel response and set the number of base station antennas to 16. The 60 GHz data is used to generate the DL channel response, and the number of base station antennas is set to 64. The UE is equipped with an omnidirectional
antenna. These settings are consistent with those in <cit.>. The BW of UL and DL channel are both set to 100 MHz with different carrier frequencies, and antenna pattern of UE and BS are set to isotropic.
In the model training phase, we used the same LSTM model as in <cit.>.
In addition, to further improve the prediction accuracy, we combined the LSTM model with convolutional neural network, thus improving the feature extraction capability of the model. The structure and parameters of the Conv-LSTM model are shown in Fig. 5 and Table 2.
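Since Fig. 5 and Table 2 are not reproduced here, the following PyTorch sketch only illustrates the overall Conv + LSTM structure; all layer sizes (e.g., in_dim=32 for 16 complex antenna entries split into real and imaginary parts) are our own placeholders, not the paper's configuration.

```python
import torch.nn as nn


class ConvLSTMBeamPredictor(nn.Module):
    """Illustrative Conv-LSTM predictor: a 1-D convolution extracts features
    from each time slot of the sub-6 GHz channel, an LSTM models the sequence
    of past slots, and a linear head scores the mmWave beams."""

    def __init__(self, in_dim=32, conv_ch=64, hidden=128, num_beams=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_beams)

    def forward(self, x):                      # x: (batch, time, in_dim)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])           # beam logits for slot t+1
```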
The accuracy of correctly predicting the optimal beam is used as the evaluation criterion. The results for the different datasets are shown in Fig. 6. The first column of the results is obtained by training the LSTM model with Wireless InSite data, where the accuracy reached 88.20% in <cit.>. As users in Wireless InSite are not moving, we assume the accuracy remains 88.20% at all speeds. The second column of the results is obtained by training the LSTM model with the DataAI-6G dataset. The accuracy in this case reaches 91.65%, 91.03%, and 88.85% at speeds of 72 km/h, 90 km/h, and 108 km/h. Comparing these two cases, we find that the model achieves higher accuracy using the DataAI-6G dataset, which means that our dataset provides more realistic channel features and is well adapted to existing algorithms.
Then, using the same DataAI-6G dataset to compare the accuracy of the two models, the Conv-LSTM model performs better than the LSTM model. Observing how the accuracy changes with speed, as shown in Fig. 7, we find that (i) the accuracy decreases with increasing speed, which indicates that the difficulty of beam prediction increases with speed; this is consistent with reality and reflects the high degree of realism of the mobile features in our dataset; and (ii) the accuracy decreases at a more moderate rate when using the Conv-LSTM model, which indicates that the Conv-LSTM model adapts better to speed. Therefore, borrowing algorithms from computer vision for channel prediction is a direction that can be investigated in the future.
§ CONCLUSION
Combining AI with wireless communication is a promising development direction for future 6G mobile communication.
To meet future research needs, the DataAI-6G dataset takes Doppler properties into account and, based on this, we implement a fully user-defined move function and interpolation function for the first time. Our dataset also considers spatial non-stationary properties, which are not considered in most other datasets. In the future, our dataset will further consider communication involving RIS. And, in order to support more research, we will also add environment and material data in future iterations of the dataset.
00
b1H. Tataria et al., “6G Wireless Systems: Vision, Requirements, Challenges, Insights, and Opportunities," in Proceedings of the IEEE, vol. 109, no. 7, pp. 1166-1199, July 2021.
b2C. Huang et al., “Artificial Intelligence Enabled Radio Propagation for Communications—Part II: Scenario Identification and Channel Modeling," in IEEE Transactions on Antennas and Propagation, vol. 70, no. 6, pp. 3955-3969, June 2022.
b3Z. Zhang et al., “AI-Based Time-, Frequency-, and Space-Domain Channel Extrapolation for 6G: Opportunities and Challenges," in IEEE Vehicular Technology Magazine, vol. 18, no. 1, pp. 29-39, March 2023.
b4J. Guo et al., “Convolutional Neural Network-Based Multiple-Rate Compressive Sensing for Massive MIMO CSI Feedback: Design, Simulation, and Analysis," in IEEE Transactions on Wireless Communications, vol. 19, no. 4, pp. 2827-2840, April 2020.
b5X. Li et al., “Diffraction Characteristics Aided Blockage and Beam Prediction for mmWave Communications," 2022 IEEE 95th Vehicular Technology Conference: (VTC2022-Spring), Helsinki, Finland, 2022, pp. 1-5.
b6L. Yu et al., “Long-Range Blockage Prediction Based on Diffraction Fringe Characteristics for mmWave Communications," in IEEE Communications Letters, vol. 26, no. 7, pp. 1683-1687, July 2022.
b7Y. Sun et al., “Environment Features-Based Model for Path Loss Prediction," in IEEE Wireless Communications Letters, vol. 11, no. 9, pp. 2010-2014, Sept. 2022.
b8Remcom, “Wireless insite,” http://www.remcom.com/wireless-insite.
b9W. Khawaja et al., “Indoor Coverage Enhancement for mmWave Systems with Passive Reflectors: Measurements and Ray Tracing Simulations,” arXiv preprint arXiv:1808.06223, 2018.
b10Q. Li et al., “Validation of a Geometry-based Statistical mmWave Channel Model Using Ray-tracing Simulation," in 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), May 2015, pp. 1–5.
b11S. Wu et al., “Intra-cluster Characteristics of 28 GHz Wireless Channel in Urban Micro Street Canyon," in 2016 IEEE Global Communications Conference (GLOBECOM), Dec 2016, pp. 1–6.
b12Z. Yuan et al., “Spatial Non-Stationary Near-Field Channel Modeling and Validation for Massive MIMO Systems," in IEEE Transactions on Antennas and Propagation, vol. 71, no. 1, pp. 921-933, Jan. 2023.
b13Y. Yang et al., “Deep Learning-Based Downlink Channel Prediction for FDD Massive MIMO System," in IEEE Communications Letters, vol. 23, no. 11, pp. 1994-1998, Nov. 2019.
b14DeepMIMO Dataset. [Online]. Available: http://www.DeepMIMO.net
b15Wireless AI Research Dataset. [Online]. Available: https://www.mobileai-dataset.com/html/default/zhongwen/shujuji/1592719963402108929.html?index=1
entry_id: http://arxiv.org/abs/2306.11449v1 | published: 20230620110705 | title: Extrapolation of compactness on Banach function spaces | authors: [Emiel Lorist, Zoe Nieraeth] | primary_category: math.CA | categories: [math.CA, math.FA, Primary: 46E30, Secondary: 46B50, 42B25]
We prove an extrapolation of compactness theorem for operators on Banach function spaces satisfying a certain convexity and concavity condition. In particular, we show that the boundedness of an operator T in the weighted Lebesgue scale and the compactness of T in the unweighted Lebesgue scale yields compactness of T on a very general class of Banach function spaces.
As our main new tool, we prove various characterizations of the boundedness of the Hardy-Littlewood maximal operator on such spaces and their associate spaces, using a novel sparse self-improvement technique. We apply our main results to prove compactness of the commutators of singular integral operators and pointwise multiplication by functions of vanishing mean oscillation on, for example, weighted variable Lebesgue spaces.
[2020]Primary: 46E30; Secondary: 46B50, 42B25
Extrapolation of compactness on Banach function spaces
Emiel Lorist, Zoe Nieraeth
=======================================================
§ INTRODUCTION
The classical Rubio de Francia extrapolation theorem <cit.> is one of the most powerful tools in the theory of weighted norm inequalities. In its simplest form, it states that if an operator T is bounded on the weighted Lebesgue space L^p_w(^d) some p ∈ (1,∞) and all weights w ∈ A_p, then T is automatically bounded on L^p_w(^d) for all p ∈ (1,∞) and w ∈ A_p. Here we call a positive function w a weight, we let L^p_w(^d) denote the space of all functions f such that fw ∈ L^p(^d), and write w ∈ A_p if
[w]_p:= sup_Q (1/|Q|∫_Q w^p)^1/p (1/|Q|∫_Q w^-p')^1/p' <∞,
where the supremum is taken over all cubes Q⊆^d.
In recent years, Rubio de Francia's extrapolation theorem has been extended to compact operators. Again in its simplest form, Hytönen and Lappas <cit.> showed that if
* T is bounded on L^p_w(^d) for some p ∈ (1,∞) and all weights w ∈ A_p;
* T is compact on L^p_w(^d) for some p ∈ (1,∞) and some weight w ∈ A_p;
then T is compact on L^p_w(^d) for all p ∈ (1,∞) and w ∈ A_p. Note that in typical applications, one checks the compactness assumption for p=2 and w=1, i.e. on the Hilbert space L^2(^d).
In order to fix ideas, let us briefly sketch the proof of Hytönen and Lappas <cit.> in the simplest case stated above.
In essence, the argument has three main ingredients:
* The Rubio de Francia extrapolation theorem;
* Interpolation of compactness: For Banach function spaces X_0 and X_1 and an operator T which is compact on X_0 and bounded on X_1, T is compact on the product space
X_0^1-θ· X_1^θ for θ∈ (0,1);
* The self-improvement property of Muckenhoupt classes: For w ∈ A_p there is an 1<r<p such that w^r ∈ A_p/r.
Using these ingredients, one starts by observing that T is bounded on L^p_w(^d) for all p ∈ (1,∞) and w ∈ A_p by <ref> and hence, by <ref>, the compactness of the operator T on L^p_w(^d) can be used to deduce the compactness of T on
L^2(^d) = L^p_w(^d)^1/2· L^p'_w^-1(^d)^1/2.
Next, fix p∈(1,∞) and w∈ A_p. Using <ref> on both w∈ A_p and w^-1∈ A_p', one finds 1<r<p<s<∞ such that w^r∈ A_p/r and w^-s'∈ A_p'/s' and
1-1/r=1/s.
One readily checks that this is equivalent to the condition w_r,s∈ A_p_r,s, where
1/p_r,s:=(1/p-1/s)/(1/r-1/s), and w_r,s:=w^1/(1/r-1/s).
Note that the affine transformation that maps 1/p→1/p_r,s is the one that maps the interval (1/s,1/r) to the interval (0,1) through a translation by -1/s and a scaling by a factor of 1/r-1/s. Similarly, the space X_r,s:=L^p_r,s_w_r,s(^d) can be scaled and translated back to the space X:=L^p_w(^d) through the factorization
X=(X_r,s)^1/r-1/s· L^s(^d)= (X_r,s)^1-θ· L^2(^d)^θ, θ = 2/s.
Therefore, since T is bounded on X_r,s and compact on L^2(^d), this means it is also compact on L^p_w(^d) by <ref>, proving the result.
The extrapolation theorem of Rubio de Francia has recently been generalized to
a general class of Banach function spaces. In the work of Cao, Márin and Martell <cit.> weighted Banach function spaces under fairly restrictive conditions were considered. In <cit.> by the second author this was generalized to the class of saturated spaces, which is also the class of Banach function spaces we consider in this work, see Section <ref>. We refer the reader to <cit.> for a direct comparison of the assumptions with those of <cit.>. In particular, it was shown in <cit.> that if T is bounded on L^p_w(^d) for some p ∈ (1,∞) and all weights w ∈ A_p, then T is bounded on any Banach function space X for which
M:X→ X, M:X'→ X',
where M denotes the Hardy–Littlewood maximal operator and X' the associate space of X. This, of course, includes the weighted Lebesgue spaces X=L^p_w(^d) for w ∈ A_p, as it is well-known that M is bounded on X and X' = L^p'_w^-1(^d).
Thus, we observe that <ref> is available in the setting of Banach function spaces. Moreover, <ref> is already phrased in this setting. Therefore to extend the extrapolation of compactness theorem of Hytönen and Lappas <cit.> to the setting of Banach function spaces, one only needs to find a suitable replacement for <ref>, which will be the main contribution of this paper.
In the above proof sketch, <ref> was used to deduce the factorization in (<ref>). Thus, we are looking for a self-improvement property of the form: If X is a Banach function space such that
M:X→ X, M:X'→ X',
then there are 1<r<s<∞ with 1-1/r=1/s such that
M:X_r,s→ X_r,s, M:(X_r,s)'→ (X_r,s)'
for some suitable space X_r,s satisfying (<ref>).
In <cit.> it was shown that a space X_r,s such that (<ref>) holds exists if and only if
X is r-convex and s-concave, i.e.,
‖(|f|^r+|g|^r)^1/r‖_X ≤(‖f‖_X^r+‖g‖_X^r)^1/r, f,g∈ X,
(‖f‖_X^s+‖g‖_X^s)^1/s ≤‖(|f|^s+|g|^s)^1/s‖_X, f,g∈ X.
We note that in this case X_r,s is given by the formula
X_r,s:= (((X^r)')^(s/r)')'.
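For orientation, we add a short computation (not contained in the original argument, but easily verified) checking that this formula reproduces the rescaled weighted Lebesgue spaces: for X = L^p_w(^d) with 1<r<p<s<∞ one has
X^r = L^p/r_w^r(^d), (X^r)' = L^(p/r)'_w^-r(^d), ((X^r)')^(s/r)' = L^(p/r)'/(s/r)'_w^-r(s/r)'(^d),
and taking one more associate space gives
X_r,s = L^q_w^r(s/r)'(^d) with q = ((p/r)'/(s/r)')' = p(s-r)/r(s-p) = p_r,s and w^r(s/r)' = w^1/(1/r-1/s) = w_r,s,
so that X_r,s = L^p_r,s_w_r,s(^d), matching the definitions of p_r,s and w_r,s above.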
Combined with the boundedness of M on X_r,s for r small enough and s large enough, which we will discuss below, we have sketched the proof of our first main result.
Let
T:⋃_p∈(1,∞)
w∈ A_pL^p_w(^d)→ L^0(^d)
be a linear operator such that
* T is bounded on L^p_w(^d) for some p ∈ (1,∞) and all weights w ∈ A_p;
* T is compact on L^p_w(^d) for some p ∈ (1,∞) and some weight w ∈ A_p.
Let X be a Banach function space over ^d such that
M:X→ X, M:X'→ X',
and assume X is r-convex and s-concave for some 1<r<s<∞.
Then T:X→ X is compact.
We prove Theorem <ref> as Theorem <ref> below. We note that the r-convexity and s-concavity conditions in the case of the Lebesgue space X=L^p_w(^d) are satisfied with r=s=p. Theorem <ref> is also applicable to, for example, weighted variable Lebesgue spaces X=L_w^p(·)(^d). Here the function p: ^d → (0,∞) has to satisfy
1< p_- ≤ p_+ <∞,
where p_- and p_+ denote the essential infimum and supremum of p, so that X is r-convex and s-concave with r= p_-, s= p_+, and the weight and the exponent have to satisfy some additional condition ensuring the boundedness of M on X (see, e.g., <cit.> for the unweighted setting and <cit.> for the weighted setting).
As we shall see in Section <ref>, Theorem <ref> can be used to deduce the compactness of commutators of singular integral operators and multiplication by functions with vanishing mean oscillation. We refer to <cit.> for further examples of operators to which Theorem <ref> is applicable.
It is clear that the proof strategy of Theorem <ref> cannot work without the convexity and concavity assumptions, since, as mentioned before, the existence of the factorization (<ref>) implies that X is r-convex and s-concave. This means that Theorem <ref> is, in particular, not applicable to Morrey spaces, as Morrey spaces are not s-concave for any s<∞. In <cit.>, Lappas proved that extrapolation of compactness in the Morrey scale is possible if one replaces the compactness assumption on L^p_w(^d) by a compactness assumption on a Morrey space. Since there are factorization results in the spirit of (<ref>), but with L^s(^d) replaced by this Morrey space, this allows one to follow the same lines of reasoning as before. For a general, non-convex or non-concave Banach function space X, it is not clear what a suitable replacement for L^s(^d) would be.
Let us return to the analogue of the self-improvement property of the Muckenhoupt classes needed to prove Theorem <ref>, which was stated in (<ref>). For a Banach function space X, it was shown by Lerner and Ombrosi <cit.> that if M:X→ X, then there is an r>1 such that also M:X^r→ X^r and hence, if
M:X→ X, M:X'→ X'.
as in Theorem <ref>,
we can find 1<r<s<∞ so that
M:X^r→ X^r, M:(X')^s'→ (X')^s'.
Unfortunately, it is not clear whether this implies the bounds
M:X_r,s→ X_r,s, M:(X_r,s)'→ (X_r,s)'.
In fact, it was shown in <cit.> that (<ref>) implies (<ref>) and the converse is an open problem, see <cit.>.
Instead of applying the self-improvement result of <cit.> to the spaces X and X' separately, we will prove a simultaneous self-improvement result to show that if M is bounded on X and X', and the space X is r^*-convex and s^*-concave for some 1<r^*≤ s^*<∞, then (<ref>) holds for all 1<r≤ r^* small enough and s^* ≤ s< ∞ large enough. This is a direct consequence of our second main result.
Let r^*∈(1,∞) and let X be an r^*-convex Banach function space over ^d. Then the following are equivalent:
* We have M:X→ X and M:X'→ X';
* There is an r_0∈(1,r^*] so that for all r∈(1,r_0] we have
M:X^r→ X^r, M:(X^r)'→ (X^r)';
* There is an r∈(1,r^*] so that M:(X^r)'→ (X^r)'.
Theorem <ref> is proved as Theorem <ref> below. It relies on a sparse characterization of the boundedness of M on X and X', followed by a sparse self-improvement result based on a novel use of the classical reverse Hölder inequality of Muckenhoupt weights. Applying Theorem <ref> first to X and then to the resulting space (X^r)' yields (<ref>) for some 1<r<s<∞, see Corollary <ref>.
We note that Theorem <ref> is also of independent interest, as various works use <ref> as an assumption, often not realizing that it is equivalent to <ref>.
Rubio de Francia's extrapolation theorem for L^p_w(^d) has been generalized to the off-diagonal setting by Harboure, Macías and Segovia <cit.> and to the limited range setting by Auscher and Martell <cit.>. Both settings were extended to general (quasi)-Banach function spaces X in <cit.>. Using <cit.>, we also obtain a limited range, off-diagonal extrapolation of compactness theorem for Banach function spaces, which is our third and last main result. We refer to Section <ref> for the definition of A_p⃗,(r⃗,s⃗).
Let α∈ and let r_1,r_2∈[1,∞), s_1,s_2∈(1,∞] satisfy r_j<s_j for j∈{1,2} and
1/r_1-1/r_2=1/s_1-1/s_2=α.
Define
𝒫:={(p_1,p_2)∈(0,∞]^2: 1/p_j∈[1/s_j,1/r_j], j∈{1,2}, 1/p_1-1/p_2=α}
and let
T:⋃_(p_1,p_2)∈𝒫
w∈ A_p⃗,(r⃗,s⃗) L^p_1_w(^d)→ L^0(^d)
be a linear operator such that
* T is bounded from L^p_1_w(^d) to L^p_2_w(^d) for some (p_1,p_2)∈𝒫 and all w∈ A_p⃗,(r⃗,s⃗);
* T is compact from L^p_1_w(^d) to L^p_2_w(^d) for some (p_1,p_2)∈𝒫 and some w∈ A_p⃗,(r⃗,s⃗).
Let r_j<r_j^∗<s_j^∗<s_j and let X_j be an r_j^∗-convex and s_j^∗-concave Banach function space for j∈{1,2} satisfying
(X_1)_r_1,s_1=(X_2)_r_2,s_2
and
M:(X_1)_r_1,s_1→ (X_1)_r_1,s_1, M:((X_1)_r_1,s_1)'→ ((X_1)_r_1,s_1)'.
Then T:X_1→ X_2 is compact.
We prove Theorem <ref> as Theorem <ref> below. Note that Theorem <ref> recovers <cit.> for X_1 and X_2 weighted Lebesgue spaces in a unified result.
For weighted Lebesgue spaces, the result in <cit.> was further generalized to multilinear operators by Cao, Olivo and Yabuta <cit.> (see also <cit.> by Hytönen and Lappas). Currently, multilinear extrapolation in quasi-Banach function spaces has not yet been proven in its full expected generality (see <cit.>). Moreover, the compactness of multilinear operators on product spaces seems to be only available for bilinear operators (see <cit.>) and, although appearing naturally in the multilinear setting, quasi-Banach function spaces are not allowed in this result. Because of these limitations, we will not develop multilinear versions of our results in the current paper. We do point out that a bilinear compact extrapolation for products of weight classes extending <cit.> can easily be obtained using <cit.>, but since the proof is just an application of the linear case presented in this paper on each component, we do not consider this an important contribution to the literature, and therefore omit it.
Finally, we would like to note that for weighted Lebesgue spaces, an extrapolation of compactness result has also been obtained in the two-weight setting by Liu, Wu and Yang <cit.>.
The plan for this paper is as follows. We start in Section <ref> by defining Banach function spaces and all their relevant properties. Afterwards, in Section <ref>, we prove the self-improvement property of the maximal operator on Banach function spaces stated in Theorem <ref>. Sections <ref> and <ref> are devoted to proving the extrapolation of compactness results in the full range case (Theorem <ref>) and limited range, off-diagonal case (Theorem <ref>) respectively. Finally, in Section <ref> we apply Theorem <ref> to deduce the compactness of commutators of both Calderón–Zygmund and rough homogeneous singular integral operators with pointwise multiplication by a function with vanishing mean oscillation.
§ BANACH FUNCTION SPACES
Let (Ω,μ) be a σ-finite measure space. Let L^0(Ω) denote the space of measurable functions on (Ω,μ). A vector space X ⊆ L^0(Ω) equipped with a norm · _X is called a Banach function space over Ω if it satisfies the following properties:
* Ideal property: If f∈ X and g∈ L^0(Ω) with |g|≤|f|, then g∈ X with g_X≤f_X.
* Fatou property: If 0≤ f_n ↑ f for (f_n)_n≥ 1 in X and sup_n≥ 1f_n_X<∞, then f ∈ X and f_X=sup_n≥ 1f_n_X.
* Saturation property: For every measurable E⊆Ω of positive measure, there exists a measurable F⊆ E of positive measure with _F∈ X.
We note that the saturation property is equivalent to the assumption that there is an f ∈ X such that f>0 a.e. (see <cit.>). Moreover, the Fatou property ensures that X is complete (see <cit.>).
We define the associate space X' as the space of all g ∈ L^0(Ω) such that
‖g‖_X':= sup_‖f‖_X ≤ 1∫_Ω|fg| dμ<∞,
which is again a Banach function space, see <cit.>.
By the Lorentz–Luxembourg theorem (see <cit.>) we have X''=X with equal norms.
Throughout the literature, following the book by Bennett and Sharpley <cit.>, in the definition of a Banach function space X it is often additionally assumed that for all measurable E⊆Ω with μ(E)<∞ one has
_E∈ X and _E ∈ X'.
Note that this implies the saturation property. However, (<ref>) is too restrictive to study weighted norm inequalities in harmonic analysis. Indeed, there are examples of weighted Lebesgue spaces L^p_w(^d) for p ∈ (1,∞) and w ∈ A_p that do not satisfy (<ref>), see <cit.>.
§.§ Convexity properties
Let X be a Banach function space over Ω and 1 ≤ p ≤ q ≤∞. We call X p-convex if
‖(|f|^p+|g|^p)^1/p‖_X ≤(‖f‖_X^p+‖g‖_X^p)^1/p, f,g∈ X,
and we call X q-concave if
(‖f‖_X^q+‖g‖_X^q)^1/q ≤‖(|f|^q+|g|^q)^1/q‖_X, f,g∈ X.
Note that any Banach function space is 1-convex by the triangle inequality and ∞-concave by the ideal property. One often defines p-convexity and q-concavity using finite sums of elements from X and a constant in the defining inequalities, but, by <cit.>, one can always renorm X such that these constants are equal to one, yielding our definition.
We note that if X is p-convex and q-concave, then X is also p_0 convex and q_0-concave for all p_0 ∈ [1,p] and q_0 ∈ [q,∞] and X' is q'-convex and p'-concave (see, e.g., <cit.>).
For p∈ (0,∞) we define the p-concavification of X by
X^p:= {f∈ L^0(Ω): |f|^1/p∈ X},
i.e. for a positive f ∈ L^0(Ω) we have f ∈ X if and only if f^p ∈ X^p. We equip X^p with the quasi-norm
‖f‖_X^p:= ‖|f|^1/p‖_X^p, f ∈ X^p.
Note that X^p is a Banach function space if and only if X is (p∨ 1)-convex.
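As a simple illustration (added here), consider X=L^p(Ω) with p∈[1,∞) and ρ∈(0,∞): the quasi-norm of f in X^ρ is
‖|f|^1/ρ‖_L^p(Ω)^ρ = (∫_Ω |f|^p/ρ dμ)^ρ/p = ‖f‖_L^p/ρ(Ω),
so X^ρ = L^p/ρ(Ω), which is a Banach function space precisely when p/ρ≥ 1, i.e. precisely when L^p(Ω) is (ρ∨ 1)-convex.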
Let 1≤ r<s≤∞ and let X be an r-convex and s-concave Banach function space. We define the (r,s)-rescaled Banach function space of X by
X_r,s:= (((X^r)')^(s/r)')',
which is again a Banach function space.
§.§ Calderón–Lozanovskii products
Let X_0 and X_1 be Banach function spaces over Ω and θ∈ (0,1). We define the Calderón–Lozanovskii product (see <cit.>) X_θ:= X_0^1-θ· X_1^θ as the space of those h∈ L^0(Ω) for which there exist 0≤ f∈ X_0, 0≤ g∈ X_1 such that |h|≤ f^1-θg^θ. We equip this space with the norm
‖h‖_X_θ:=inf‖f‖_X_0^1-θ‖g‖_X_1^θ,
where the infimum is taken over all 0≤ f∈ X_0, 0≤ g∈ X_1 for which |h|≤ f^1-θg^θ. We note that X_θ is a Banach function space, see e.g. <cit.>.
The following proposition will play a key role in the proof of Theorem <ref> and Theorem <ref>. Note that one can interpret the appearing products as Calderón–Lozanovskii products since
L^s(Ω) = (L^{1+s/r'}(Ω))^{1-(1/r-1/s)}.
Let 1≤ r<s≤∞ and let X be a Banach function space over Ω.
* If X is r-convex and s-concave, then
X = (X_r,s)^1/r-1/s· L^s(Ω).
* If there is a Banach function space Y such that
X = Y^1/r-1/s· L^s(Ω),
then X is r-convex and s-concave and Y=X_r,s.
For <ref> we refer to <cit.> and <ref> is proven analogously to <cit.>, substituting X_r,s by Y.
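For orientation, consider the model case X = L^p(Ω) with 1 ≤ r < p < s ≤ ∞, which is r-convex and s-concave; the following identities are routine consequences of the definitions above. One computes
X^r = L^{p/r}(Ω), (X^r)' = L^{(p/r)'}(Ω), X_{r,s} = ([(X^r)']^{(s/r)'})' = L^q(Ω) with 1/q = (1/p-1/s)/(1/r-1/s),
and the factorization in Proposition <ref> reads L^p(Ω) = (L^q(Ω))^{1/r-1/s} · L^s(Ω), which can be verified from the Calderón product formula 1/p = (1/r-1/s)(1/q) + 1/s.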
§ SPARSE SELF-IMPROVEMENT FOR THE MAXIMAL OPERATOR
As we will need Theorem <ref> in the proof of Theorem <ref>, we start in this section by proving
Theorem <ref> and its consequences. First we introduce some notation. For r ∈ (0,∞), a cube Q ⊆^d and function f ∈ L^0(^d) we define the r-average of f by
⟨f⟩_{r,Q} := ( (1/|Q|) ∫_Q |f|^r )^{1/r},
and we define the Hardy–Littlewood maximal operator by
Mf := sup_Q ⟨f⟩_{1,Q} 1_Q,
where the supremum is taken over all cubes Q in ^d.
Let r^*∈(1,∞) and let X be an r^*-convex Banach function space over ^d. Then the following are equivalent:
* We have M:X→ X and M:X'→ X';
* There is an r_0∈(1,r^*] so that for all r∈(1,r_0] we have
M:X^r→ X^r, M:(X^r)'→ (X^r)';
* There is an r∈(1,r^*] so that M:(X^r)'→ (X^r)'.
Before we turn to the proof, we provide two useful corollaries. First of all, we obtain the following equivalent formulations of the bounds
M:X→ X, M:X'→ X',
assumed in Theorem <ref>.
Let X be a Banach function space for which there exist 1<r^*<s^*<∞ such that X is r^*-convex and s^*-concave. Then the following are equivalent:
* We have M:X→ X and M:X'→ X';
* There is an r∈(1,r^*] so that M:(X^r)'→ (X^r)';
* There is an s∈[s^*,∞) so that M:X_1,s→ X_1,s.
The equivalence of <ref> and <ref> is contained in Theorem <ref> whereas the equivalence of <ref> and <ref> follows from applying the first equivalence with X replaced by X', which is (s^*)'-convex, to find an s'∈(1,(s^*)'] for which M is bounded on [(X')^s']'=X_1,s.
Our second corollary is the self-improvement property in Banach function spaces discussed in the introduction, which will replace the self-improvement property of the Muckenhoupt classes in the proof of Theorem <ref> and Theorem <ref>:
Let X be a Banach function space for which there exist 1<r^*<s^*<∞ such that X is r^*-convex, s^*-concave, and
M:X→ X, M:X'→ X'.
Then there exist r_0∈(1,r^*], s_0∈[s^*,∞) such that for all r∈(1,r_0] and s∈[s_0,∞) we have
M:X_r,s→ X_r,s, M:(X_r,s)'→(X_r,s)'.
By Theorem <ref> there is a r_0∈(1,r^*] for which
M: X^r_0→ X^r_0, M:(X^r_0)'→ (X^r_0)'.
Note that (X^r_0)' is (s^*/r_0)'-convex. By applying Theorem <ref> to (X^r_0)', we find a t_0∈(1,(s^*/r_0)'] such that M is bounded on [(X^r_0)']^t_0 and ([(X^r_0)']^t_0)'.
Now define s_0:=r_0t_0'>s^*. Then we have
t_0=(s_0/r_0)',
so that
M:X_r_0,s_0→ X_r_0,s_0, M:(X_r_0,s_0)'→(X_r_0,s_0)'.
Letting r∈(1,r_0), s∈(s_0,∞) and noting that (X_r_0,s_0)'=(X')_s_0',r_0' by <cit.>, it follows from <cit.> that also
M:X_r,s→ X_r,s, M:(X_r,s)'→(X_r,s)'.
The assertion follows.
We now turn to the proof of Theorem <ref>. As a final preparation, we require a lemma on sparse operators. A collection of cubes 𝒮 in ^d is called sparse if for each Q∈𝒮 there is a measurable set E_Q⊆ Q for which |E_Q|≥1/2|Q| and, furthermore, the collection (E_Q)_Q∈𝒮 is pairwise disjoint. For a sparse collection of cubes 𝒮 and f∈ L^0(^d) we define
T_𝒮f:=∑_Q∈𝒮⟨ f⟩_1,Q_Q.
We have the following characterization of when M is bounded on X and X' in terms of T_𝒮.
Let X be a Banach function space over ^d. Then
M:X→ X, M:X'→ X'
if and only if we have T_𝒮:X→ X for all sparse collections of cubes 𝒮 with
sup_𝒮 is sparseT_𝒮_X→ X<∞.
For f,g∈ L^0(^d) we define
M_1,1(f,g):=sup_Q ⟨ f⟩_1,Q⟨ g⟩_1,Q_Q.
By <cit.> it suffices to show that M_1,1:X× X'→ L^1(^d) if and only if for all sparse collections 𝒮 we have T_𝒮:X→ X uniformly in 𝒮 with
sup_{𝒮 is sparse} ‖T_𝒮‖_{X→X} ≂_d ‖M_{1,1}‖_{X×X'→L^1(ℝ^d)}.
Assume first that M_1,1:X× X'→ L^1(^d). If 𝒮 is sparse, then for f ∈ X and g ∈ X'
‖(T_𝒮 f)g‖_{L^1(ℝ^d)} = ∑_{Q∈𝒮} ⟨f⟩_{1,Q} ⟨g⟩_{1,Q} |Q| ≤ 2 ∑_{Q∈𝒮} ∫_{E_Q} M_{1,1}(f,g) dx ≤ 2 ‖M_{1,1}(f,g)‖_{L^1(ℝ^d)}.
Hence,
sup_{𝒮 is sparse} ‖T_𝒮‖_{X→X} = sup_{𝒮 is sparse} sup_{‖f‖_X=1, ‖g‖_{X'}=1} ‖(T_𝒮 f)g‖_{L^1(ℝ^d)} ≤ 2 ‖M_{1,1}‖_{X×X'→L^1(ℝ^d)}.
Conversely, using <cit.>, for each f,g∈ L^0(^d), each dyadic lattice 𝒟 in ^d, and each finite collection ℱ⊆𝒟, there exists a sparse collection 𝒮⊆ℱ such that
M^ℱ_1,1(f,g)≤ 4∑_Q∈𝒮⟨ f⟩_1,Q⟨ g⟩_1,Q_Q,
where the superscript ℱ indicates that the defining supremum in the definition of M_1,1 is only taken over Q ∈ ℱ. Hence,
M^ℱ_1,1(f,g)_L^1(^d) ≤ 4∑_Q∈𝒮⟨ f⟩_1,Q⟨ g⟩_1,Q|Q|=4 (T_𝒮 f)g_L^1(^d)
≤ 4 sup_𝒮 is sparseT_𝒮_X→ Xf_Xg_X'.
The assertion now follows from the monotone convergence theorem and the 3^d lattice theorem.
For <ref>⇒<ref>, we first note that the sharp reverse Hölder inequality <cit.> states that there is a dimensional constant c_d≥ 1 so that for all weights w such that
[w]_{A_∞} := sup_Q (1/w(Q)) ∫_Q M(w 1_Q) < ∞
and all r∈(1,∞) satisfying r'≥ c_d[w]_A_∞ we have
⟨ w⟩_r,Q≤ 2 ⟨ w⟩_1,Q
for all cubes Q in ^d.
Fix an r_0∈(1,r^*] with r_0'≥ 2c_dM_X→ X, and let r∈(1,r_0]. For f∈ X^r we define
w := ∑_{k=0}^{∞} M^k(|f|^{1/r}) / (2^k ‖M‖_{X→X}^k),
where M^k denotes the k-th iterate of M. Then we have
c_d[w]_A_∞≤ 2c_dM_X→ X≤r',
and therefore ⟨ w⟩_r,Q≤ 2⟨ w⟩_1,Q for all cubes Q.
Let 𝒮 be a sparse collection. Since |f|^1/r≤ w and w_X≤ 2f_X^r^1/r, we obtain
T_𝒮f_X^r =(∑_Q∈𝒮⟨ |f|^1/r⟩_r,Q^r_Q)^1/r^r_X≤∑_Q∈𝒮⟨ w⟩_r,Q_Q^r_X
≤ 2^r T_𝒮w_X^r≤ 2^r T_𝒮_X→ X^rw^r_X≤ 4^rT_𝒮_X→ X^rf_X^r.
Thus, we have
sup_𝒮 is sparseT_S_X^r→ X^r≤ 4^r(sup_𝒮 is sparseT_𝒮_X→ X)^r.
The result now follows from Lemma <ref>.
The implication <ref>⇒<ref> is immediate.
To prove <ref>⇒<ref>, we apply <cit.> to the classical bound
‖M‖_{L^p_w(ℝ^d)→L^p_w(ℝ^d)} ≲_d p' [w]_p^{p'}
by Buckley <cit.> to conclude that
M_X→ X≲_d r'2^1/r-1M_(X^r)'→ (X^r)'^1/r-1, M_X'→ X'≲_d rM_(X^r)'→ (X^r)'.
This proves the result.
Theorem <ref> can easily be extended to Banach function spaces X over a space of homogeneous type (S,d,μ) in the sense of Coifman and Weiss <cit.>. Maximal operator bounds and sparse domination estimates in this setting are available through the use of e.g. Hytönen–Kairema cubes <cit.> (see also Christ <cit.>), and the sharp reverse Hölder inequality needs to be replaced by the weak sharp reverse Hölder inequality of Hytönen, Pérez and Rela <cit.>.
§ FULL RANGE COMPACT EXTRAPOLATION
This section is dedicated to the proof of the full range extrapolation of compactness theorem, Theorem <ref>:
Let
T:⋃_p∈(1,∞)
w∈ A_pL^p_w(^d)→ L^0(^d)
be a linear operator such that
* T is bounded on L^p_w(^d) for some p ∈ (1,∞) and all weights w ∈ A_p;
* T is compact on L^p_w(^d) for some p ∈ (1,∞) and some weight w ∈ A_p.
Let X be a Banach function space over ^d such that
M:X→ X, M:X'→ X',
and assume X is r^*-convex and s^*-concave for some 1<r^∗<s^∗<∞.
Then T:X→ X is compact.
As noted in the introduction, we will need three main ingredients to prove Theorem <ref>. We will need the Rubio de Francia extrapolation result from <cit.>, the self-improvement result from Section <ref> and a result on the compactness of operators on product spaces. The latter is a special case of a result of Cobos, Kühn and Schonbek <cit.> with the function parameter ρ(t)=t^θ, which we formulate next.
Let (Ω,μ) be a σ-finite measure space and let X_0, X_1, Y_0, Y_1 be Banach function spaces over Ω. Let T X_0+X_1→ Y_0+Y_1 be a linear operator such that
* T is bounded from X_0 to Y_0;
* T is compact from X_1 to Y_1.
Then
T:X_0^1-θ· X_1^θ→ Y_0^1-θ· Y_1^θ
is compact for all θ∈ (0,1).
Having all main ingredients at our disposal, the proof of Theorem <ref> is rather short.
Note that, by the classical Rubio de Francia extrapolation theorem, T is bounded on L^p_w(^d) for all p ∈ (1,∞) and w ∈ A_p. Moreover, since
L^2(^d) = L^p_w(^d)^1/2· L^p'_w^-1(^d)^1/2
and w ∈ A_p if and only if w^-1∈ A_p',
Proposition <ref> implies that T is compact on L^2(^d).
By Corollary <ref>, there are r_0∈(1,r^*] and s_0∈[s^*,∞) such that M is bounded on X_r,s and (X_r,s)' for all r∈(1,r_0] and s∈[s_0,∞). Hence, by Rubio de Francia extrapolation in Banach function spaces as in <cit.>, T is bounded on X_r,s for all r∈(1,r_0] and s∈[s_0,∞).
By Proposition <ref>, we have
X = (X_{r,s})^{1-θ} · L^p(ℝ^d)^θ
with θ = 1-(1/r-1/s) ∈ (0,1) and p = 1+s/r'. Choosing r ∈ (1,r_0] and s ∈ [s_0,∞) with 1/r' = 1/s small enough, we have p = 2 and T is bounded on X_{r,s}.
Since T is compact on L^2(^d), T is compact on X by Proposition <ref>. This proves the result.
§ LIMITED RANGE, OFF-DIAGONAL COMPACT EXTRAPOLATION
In this section, we prove Theorem <ref>. Essentially, the steps are the same as in the proof of Theorem <ref>, and the new difficulties lie mainly in unwinding the definitions while incorporating the additional parameters.
For 1 ≤ r <p<s≤∞, we say that a weight w belongs to the limited range Muckenhoupt class A_p,(r,s) if
[w]_{p,(r,s)} := sup_Q ⟨w⟩_{1/(1/p-1/s),Q} ⟨w^{-1}⟩_{1/(1/r-1/p),Q} < ∞.
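For instance, taking r = 1 and s = ∞ gives
[w]_{p,(1,∞)} = sup_Q ⟨w⟩_{p,Q} ⟨w^{-1}⟩_{p',Q},
which, in the normalization ‖f‖_{L^p_w} = ‖fw‖_{L^p(ℝ^d)} (the one compatible with the equivalence w ∈ A_p ⟺ w^{-1} ∈ A_{p'} used earlier), is the classical Muckenhoupt A_p condition; the limited range classes thus extend the classical ones.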
For α∈, r_1,r_2∈[1,∞), s_1,s_2∈(1,∞] for which 1/s_j<1/r_j for j∈{1,2} and
1/r_1 - 1/r_2 = 1/s_1 - 1/s_2 = α,
we note that we have A_p_1,(r_1,s_1)=A_p_2,(r_2,s_2). For a weight w in this class we will write w∈ A_p⃗,(r⃗,s⃗). Recall that our limited range, off-diagonal extrapolation of compactness theorem reads as follows:
Let α∈ and let r_1,r_2∈[1,∞), s_1,s_2∈(1,∞] satisfy 1/s_j<1/r_j for j∈{1,2} and
1/r_1 - 1/r_2 = 1/s_1 - 1/s_2 = α.
Define
𝒫 := {(p_1,p_2) ∈ (0,∞]^2 : 1/p_j ∈ [1/s_j, 1/r_j], j ∈ {1,2}, 1/p_1 - 1/p_2 = α}
and let
T:⋃_(p_1,p_2)∈𝒫
w∈ A_p⃗,(r⃗,s⃗) L^p_1_w(^d)→ L^0(^d)
be a linear operator such that
* T is bounded from L^p_1_w(^d) to L^p_2_w(^d) for some (p_1,p_2)∈𝒫 and all w∈ A_p⃗,(r⃗,s⃗);
* T is compact from L^p_1_w(^d) to L^p_2_w(^d) for some (p_1,p_2)∈𝒫 and some w∈ A_p⃗,(r⃗,s⃗).
Let r_j<r_j^∗<s_j^∗<s_j and let X_j be an r_j^∗-convex and s_j^∗-concave Banach function space for j∈{1,2} satisfying
(X_1)_r_1,s_1=(X_2)_r_2,s_2
and
M:(X_1)_r_1,s_1→ (X_1)_r_1,s_1, M:((X_1)_r_1,s_1)'→ ((X_1)_r_1,s_1)'.
Then T:X_1→ X_2 is compact.
The limited range, off-diagonal Rubio de Francia extrapolation theorem in <cit.> is phrased for quasi-Banach function spaces. However, one of our other main ingredients, the compactness result in Proposition <ref>, does not seem to be available in the quasi-setting. Therefore, we have stated Theorem <ref> in the Banach function space setting and leave its extension to the quasi-Banach function space setting with r_1,r_2 ∈ (0,∞) as an open problem. Note that quasi-Banach function spaces typically only show up in harmonic analysis in multilinear or endpoint settings.
As said, the main challenge in the proof of Theorem <ref> is to unpack all definitions and to keep track of the additional parameters. In order to do so, we prove a couple of technical lemmata. We start with a limited range version of the self-improvement property for the maximal operator.
Let 1≤ r<r^∗<s^∗<s≤∞ and let X be an r^∗-convex and s^∗-concave Banach function space. If
M: X_r,s→ X_r,s, M:(X_r,s)'→ (X_r,s)',
then there are r_0∈(r,r^∗], s_0∈[s^∗,s) so that for all r∈(r,r_0), s∈(s_0,s) we have
M:X_r,s→ X_r,s, M:(X_r,s)'→ (X_r,s)'.
By Theorem <ref> there is a q_0∈(1,r^∗] so that for all q∈(1,q_0] we have
M: X_r,s^q→ X_r,s^q, M:(X_r,s^q)'→ (X_r,s^q)'.
Defining
1r_0:=1q(1r-1s)+ 1s∈(1s,1r),
it follows from <cit.> that for q>1 chosen small enough so that r_0≤ r^∗, we have
X_r,s^q=X_r_0,s.
Note that (X_r_0,s)'=[(X^r_0)']^(s/r_0)' is t^*-convex, where
t^* : = 1/r_0-1/s/1/r_0-1/s^∗.
Thus, by Theorem <ref>, we find a t_0∈(1,t^*] so that for all t∈(1,t_0] we have that M is bounded on [(X^r_0)']^t(s/r_0)' and ([(X^r_0)']^t(s/r_0)')'.
Now define
1s_0:=1r_0-1t(1r_0-1s)=1q1t'(1r-1s)+1s∈(1s,1r_0)
which, if t>1 is chosen small enough, satisfies s_0≥ s^∗. Then we have
t(sr_0)'=(s_0r_0)'
so that
M:X_r_0,s_0→ X_r_0,s_0, M:(X_r_0,s_0)'→(X_r_0,s_0)'.
Now let r∈(r,r_0), s∈(s_0,s). Noting that (X_r_0,s_0)'=((X^r)')_(s_0/r)',(r_0/r)' by <cit.>, it follows from <cit.> that also
M:X_r,s→ X_r,s, M:(X_r,s)'→(X_r,s)'.
The assertion follows.
Next, we prove a rescaling lemma.
Let 1≤ r<r<s<s≤∞
and let X be an r-convex and s-concave Banach function space. Define
1/t:=1/r1/s-1/r1/s/1/r-1/s
Then X is t-concave, and [X_r,t]^1/r is r-convex and s-concave with
([X_r,t]^1/r)_r,s=X_r,s.
We have
1/t=1/r(1/s-1/s)+(1/r-1/r)1/s/1/r-1/s>0
and
1/s-1/t=1/s1/r-1/s/1/r-1/s≥ 0.
Thus, t≥s and, hence, X is t-concave. Therefore ([X_r,t]^1/r)^r=X_r,t is a Banach function space, and hence, [X_r,t]^1/r is r-convex. Moreover, since
(tr)'(sr)'=(sr)'
we have
([([X_r,t]^1/r)^r]')^(r/s)'=[(X^r)']^(r/s)'=(X_r,s)'
which is a Banach function space, since X is r-convex and s-concave. This proves that [X_r,t]^1/r is s-concave. Moreover, taking associate spaces in this equality, the final assertion follows.
We finish our preparation for the proof of Theorem <ref> with a factorization lemma in the limited range setting.
Let 1 ≤ r<r<s<s≤∞ and let X be an r-convex and s-concave Banach function space. Define
1/t:=1/r1/s-1/r1/s/1/r-1/s
and
θ:=1-1/r-1/s/1/r-1/s∈(0,1), p=1/s-1/s+1/r-1/r/1/r(1/s-1/s)+(1/r-1/r)1/s∈ (1,∞).
Then
X=([X_r,t]^1/r)^1-θ· L^p(^d)^θ.
Since
1r(1-θ)=1r-1t
and t=p/θ, we have by Proposition <ref> and Lemma <ref>
([X_r,t]^1/r)^1-θ· L^p(^d)^θ
=(X_r,t)^1/r-1/t· L^t(^d)=X,
as asserted.
Let (p_1,p_2)∈P and w∈ A_p⃗,(r⃗,s⃗) so that T:L^p_1_w(^d)→ L^p_2_w(^d) is compact. Note that, by limited range, off-diagonal Rubio de Francia extrapolation for weighted Lebesgue spaces as in <cit.> (which is also a special case of <cit.>), we actually obtain the boundedness assumption on T for all (q_1,q_2) ∈P and weights in A_q⃗,(r⃗,s⃗). In particular, this is the case for (q_1,q_2)∈P defined by
1/q_j := 1/s_j + 1/r_j - 1/p_j, j ∈ {1,2},
and the weight w^-1∈ A_q⃗,(r⃗,s⃗), which one can directly verify using the definition of the weight constant. Since
L^{1/(\frac12(1/r_j+1/s_j))}(ℝ^d) = (L^{p_j}_w(ℝ^d))^{1/2} · (L^{q_j}_{w^{-1}}(ℝ^d))^{1/2},
it follows Proposition <ref> that T:L^1/1/2(1/r_1+1/s_1)(^d)→ L^1/1/2(1/r_2+1/s_2)(^d) is compact. Thus, we have reduced the compactness assumption to the case 1/p_j=1/2(1/r_j+1/s_j) for j∈{1,2} and w=1.
Next, by Lemma <ref> we have
M:(X_1)_r_1,s_1→ (X_1)_r_1,s_1, M:((X_1)_r_1,s_1)'→ ((X_1)_r_1,s_1)'
for all r_1∈(r_1,r_1^∗], s_1∈[s_1^∗,s_1) with
1s_1-1s_1=1r_1-1r_1=ε
for ε>0 small enough.
Defining 1/r_2:=1/r_1-α and 1/s_2:=1/s_1-α, it follows from <cit.> that for
β:=1/r_1-1/s_1/1/r_1-1/s_1=1/r_2-1/s_2/1/r_2-1/s_2, γ:=1/r_1-1/r_1/1/r_1-1/r_1+1/s_1-1/s_1=1/r_2-1/r_2/1/r_2-1/r_2+1/s_2-1/s_2
we have
(X_1)_r_1,s_1 =(X_1)_r_1,s_1^β· L^1/1-γ(^d)^1-β
=(X_2)_r_2,s_2^β· L^1/1-γ(^d)^1-β=(X_2)_r_2,s_2.
Hence, by Lemma <ref> we conclude that
([(X_1)_r_1,t_1]^1/r_1)_r_1,s_1=(X_1)_r_1,s_1 = (X_2)_r_2,s_2=([(X_2)_r_2,t_2]^1/r_2)_r_2,s_2,
where
1/t_j=1/r_j1/s_j-1/r_j1/s_j/1/r_j-1/s_j, j∈{1,2}.
Thus, by <cit.> it follows that
T:[(X_1)_r_1,t_1]^1/r_1→[(X_2)_r_2,t_2]^1/r_2.
Moreover, by Lemma <ref> we have
X_1 =([(X_1)_r_1,t_1]^1/r_1)^1-θ· L^p_1(^d)^θ,
X_2 =([(X_2)_r_2,t_2]^1/r_2)^1-θ· L^p_2(^d)^θ
with
θ=1-1/r_1-1/s_1/1/r_1-1/s_1=1-1/r_2-1/s_2/1/r_2-1/s_2,
and
1/p_j = (1/2)(1/r_j + 1/s_j), j ∈ {1,2}.
Hence, it follows from Proposition <ref> that T:X_1→ X_2 is compact. This proves the assertion.
§ APPLICATIONS
In this final section, we will briefly outline applications of Theorem <ref> and Theorem <ref> to the compactness of commutators of singular integral operators and multiplication by functions with vanishing mean oscillation. We refer to <cit.> for further examples of operators to which Theorem <ref> or Theorem <ref> are applicable.
We start with an application to Calderón–Zygmund singular integral operators. Let T L^2(^d) → L^2(^d) be a bounded linear operator and suppose that, for any f ∈ C^∞_c(^d), T has the representation
Tf(x) = ∫_^dK(x,y)f(y) y, x ∈^d ∖(f),
where the kernel satisfies the estimates
K(x,y) - K(x,y) ≤ωx-x'x-y, 0<x-x' <12 x-y,
K(x,y) - K(x,y') ≤ωx-x'x-y, 0<y-y' <12 x-y,
for some increasing, subadditive ω: [0,1] → [0,∞). If ∫_0^1 ω(t) dt/t < ∞, we call T a Calderón–Zygmund operator with Dini-continuous kernel.
We will be concerned with the commutators
[b,T]f := bT(f) -T(bf)
for pointwise multipliers b ∈ L^1_{loc}(ℝ^d) belonging to the space
CMO := the closure of C_c^∞(ℝ^d) with respect to the norm ‖·‖_{BMO},
where BMO denotes the classical space of functions with bounded mean oscillation. Note that the space CMO is the space of all functions with vanishing mean oscillation and is therefore also denoted by VMO in the literature, see, e.g., <cit.>.
Let T be a Calderón–Zygmund operator with Dini-continuous kernel. Let 1<r<s<∞ and suppose X is an r-convex and s-concave Banach function space over ^d such that
M:X→ X, M:X'→ X'.
For b ∈ CMO, the commutator [b,T]: X → X is a compact operator.
In view of Theorem <ref>, it suffices to check that [b,T] is bounded on L^2_w(^d) for all w ∈ A_2 and that [b,T] is compact on L^2(^d). For the weighted boundedness it suffices to check that T is bounded on L^2_w(^d) for all w ∈ A_2 by <cit.> (see <cit.> for a modern approach). The boundedness of T on L^2_w(^d) for all w ∈ A_2 is classical (see <cit.> for a modern approach). Finally, the compactness of [b,T] on L^2(^d) was shown by Uchiyama <cit.>.
Theorem <ref> in the specific case where X is a weighted Lebesgue space was previously obtained by Clop and Cruz <cit.> and subsequently proven using an extrapolation of compactness argument by Hytönen and Lappas <cit.>.
Theorem <ref> yields many examples of Banach function spaces X for which the compactness of [b,T] was previously unknown.
For example, one can consider weighted variable Lebesgue spaces X=L_w^{p(·)}(ℝ^d). Here the exponent function p: ℝ^d → (0,∞) has to satisfy
1 < p_- ≤ p_+ < ∞,
where p_- and p_+ denote the essential infimum and supremum of p, so that we can take r = p_-, s = p_+ in Theorem <ref>, and some additional condition ensuring the boundedness of M on X (see, e.g., <cit.> for the unweighted setting and <cit.> for the weighted setting). As a matter of fact, it was originally shown by Diening in <cit.> that in the case of unweighted variable Lebesgue spaces satisfying (<ref>) we have M:X→ X if and only if M:X'→ X'. We also refer the reader to <cit.> for an explicit characterization of when M is bounded on X in terms of the exponent function p. Diening's duality result was extended to weighted variable Lebesgue spaces X=L^{p(·)}_w(ℝ^d) by Lerner in <cit.>. In conclusion, in the setting of weighted variable Lebesgue spaces, the boundedness condition on M in Theorem <ref> only needs to be checked on X, as the boundedness on X' is then guaranteed.
We refer to <cit.> for further examples of Banach function spaces satisfying the conditions in Theorem <ref>.
We note that for the case that X is a Morrey space, similar results were obtained by Arai and Nakai <cit.> and Lappas <cit.>, which lie beyond the scope of our general framework as explained in the introduction.
Next, let us consider rough singular integral operators
T_Ω f(x) := p.v. ∫_{ℝ^d} (Ω(x-y)/|x-y|^d) f(y) dy, x ∈ ℝ^d,
where Ω∈ L^r(𝕊^d-1) for some r ∈ [1,∞] is homogeneous of order zero and has mean value zero.
Let 1≤ r<r^*<s^* <∞ and let Ω∈ L^{r'}(𝕊^{d-1}). Suppose X is an r^*-convex and s^*-concave Banach function space over ℝ^d such that
M:X^r→ X^r, M:(X^r)'→ (X^r)'.
For b ∈ CMO, the commutator [b,T_Ω]: X → X is a compact operator.
In view of Theorem <ref>, it suffices to check that [b,T_Ω] is bounded on L^p_w(ℝ^d) for some p ∈ (r,∞) and all w ∈ A_{p,(r,∞)} and that [b,T_Ω] is compact on L^p(ℝ^d). As in the proof of Theorem <ref>, for the weighted boundedness it suffices to note that T_Ω is bounded on L^p_w(ℝ^d) for all w ∈ A_{p,(r,∞)}, which was shown independently by Watson <cit.> and Duoandikoetxea <cit.>. The compactness of [b,T_Ω] on L^p(ℝ^d) was shown by Chen and Hu <cit.>.
Theorem <ref> in the specific case where X is a weighted Lebesgue space was previously obtained by Guo and Hu <cit.> and subsequently proven using an extrapolation of compactness argument by Hytönen and Lappas <cit.>. In the case that X is a Morrey space, similar results were also obtained in <cit.> and <cit.>.
To conclude our applications section, we note that in the recent work by Tao, Yang, Yuan and Zhang <cit.>, compactness of [b,T_Ω] for b ∈ CMO was studied in the setting of Banach function spaces as well. Let us give a comparison between Theorem <ref> in the specific case r=1 and <cit.>:
* We work with Banach function spaces as in Section <ref>, whereas <cit.> uses ball Banach function spaces, the former being more general in the sense that any ball Banach function space is also a Banach function space in the sense of Section <ref>. However, since we assume M to be bounded on X and X', so in particular 1_B ∈ X and 1_B ∈ X' for all balls B ⊆ ℝ^d, we also have that any Banach function space satisfying the assumptions of Theorem <ref> is a ball Banach function space.
* In <cit.> it is assumed that there is an r^*>1 such that X^r^* is a Banach function space (i.e. X is r^*-convex) and M is bounded on (X^r^*)'. This is equivalent to M being bounded on X and X' by Theorem <ref>. Thus, the only difference in the assumptions on X between Theorem <ref> with r=1 and <cit.> is that Theorem <ref> in addition assumes X to be s-concave for some s<∞. This, as discussed before, is a necessary assumption in our general approach.
* In Theorem <ref> with r=1 it is assumed that Ω∈ L^∞(𝕊^d-1), whereas in <cit.> a much stronger assumption on Ω is imposed. Indeed, Ω is assumed to satisfy a Dini continuity condition, see <cit.>.
* The proof of Theorem <ref> uses “soft” techniques, whereas <cit.> takes a more hands-on and technical approach, developing and using a Fréchet–Kolmogorov compactness criterion in ball Banach function spaces (see also <cit.>).
|
http://arxiv.org/abs/2306.05789v1
|
20230609100548
|
Reflective Conditions for Radiative Transfer in Integral Form with H-Matrices
|
[
"Olivier Pironneau",
"Pierre-Henri Tournier"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"math-ph",
"math.MP",
"85A25, 37N30, 31A10, 35Q30, 68P30, 74S05"
] |
|
http://arxiv.org/abs/2306.06825v1
|
20230612022544
|
AnoFel: Supporting Anonymity for Privacy-Preserving Federated Learning
|
[
"Ghada Almashaqbeh",
"Zahra Ghodsi"
] |
cs.CR
|
[
"cs.CR",
"cs.LG"
] |
AnoFel: Supporting Anonymity for Privacy-Preserving Federated Learning
Ghada Almashaqbeh
University of Connecticut
[email protected]
Zahra Ghodsi
Purdue University
[email protected]
July 31, 2023
======================================================================================================================
Federated learning enables users to collaboratively train a machine learning model over their private datasets. Secure aggregation protocols are employed to mitigate information leakage about the local datasets. This setup, however, still leaks the participation of a user in a training iteration, which can also be sensitive. Protecting user anonymity is even more challenging in dynamic environments where users may (re)join or leave the training process at any point of time.
In this paper, we introduce , the first framework to support private and anonymous dynamic participation in federated learning. leverages several cryptographic primitives, the concept of anonymity sets, differential privacy, and a public bulletin board to support anonymous user registration, as well as unlinkable and confidential model updates submission. Additionally, our system allows dynamic participation, where users can join or leave at any time, without needing any recovery protocol or interaction. To assess security, we formalize a notion for privacy and anonymity in federated learning, and formally prove that satisfies this notion. To the best of our knowledge, our system is the first solution with provable anonymity guarantees. To assess efficiency, we provide a concrete implementation of , and conduct experiments showing its ability to support learning applications scaling to a large number of clients. For an MNIST classification task with 512 clients, the client setup takes less than 3 sec, and a training iteration can be finished in 3.2 sec.
We also compare our system with prior work and demonstrate its practicality for contemporary learning tasks.
§ INTRODUCTION
Privacy-preserving machine learning is a critical problem that has received huge interest from both academic and industrial communities. Many crucial applications involve training ML models over highly sensitive user data, such as medical screening <cit.>, credit risk assessment <cit.>, or autonomous vehicles <cit.>. Enabling such applications requires machine learning frameworks that preserve the privacy of users' datasets.
Federated learning (FL) aims to achieve this goal by offering a decentralized paradigm for model training. Participants, or clients, train the model locally over their datasets, and then share only the local gradients with the model owner, or the server. After aggregating updates from all clients, the server shares the updated model with these clients to start a new training iteration. This iterative process continues until the model converges.
However, individual model updates leak information about clients' private datasets <cit.>, and therefore aggregation should be done in a secure way: a server only sees the aggregated value rather than individual contributions from each client. A large body of work emerged to build cryptographic protocols for secure aggregation to support private federated learning, e.g., <cit.>. Even with a provably secure aggregation protocol, the aggregated model updates still impose a leakage; it was shown that membership inference attacks can determine whether a data sample has been used in the training of a given ML model <cit.>. Several defense techniques have been proposed that rely on, e.g., regularization techniques to reduce overfitting <cit.>, knowledge distillation <cit.>, and differential privacy <cit.>.
Anonymous client participation. A related question to protecting data privacy in federated learning is protecting client identity and breaking linkability with any information that could be deduced from training. Anonymity is critical for training models over sensitive data related to, e.g., rare diseases or sexual abuse incidents. The mere knowledge that a user has participated implies being ill or a victim. It may also allow collecting sensitive information, e.g., location in autonomous vehicles related applications, or financial standing in trading or loans related training tasks. Such leakage invades privacy, and may discourage participation.
Unfortunately, existing frameworks for private federated learning either don't support client anonymity, or suffer from security issues. Secure aggregation protocols <cit.> require full identification of clients through a public key infrastructure (PKI) or a trusted client registration process to prevent Sybil attacks and impersonation. Even frameworks that deal with honest-but-curious adversaries <cit.> assign clients logical identities, where the mapping between the logical and real identities is known to the server or a trusted third party. On the other hand, techniques that anonymize datasets <cit.> do not support participation anonymity, but rather hide sensitive information in the dataset before it is used in training. At the same time, existing solutions for anonymous client participation have several limitations <cit.>: they either rely on fixed pseudonyms that are susceptible to traffic analysis attacks <cit.>, assume a trusted party to mediate communication <cit.>, or are vulnerable to man-in-the-middle attacks <cit.>.
Dynamic settings. Allowing clients to (re)join or leave at any time is invaluable for training tasks targeting dynamic environments. A decentralized activity such as federated learning may deal with heterogeneous settings involving weak clients who may use low-power devices or have unstable network connectivity. It could even be the case that clients simply change their minds and abort the training protocol after it starts. The ability to support this dynamicity at low overhead promotes participation, but it is a more challenging setup for client anonymity.
Most privacy-preserving federated learning solutions do not support dynamic participation; usually clients must join at the onset of the training process during the setup phase. Several solutions support client dropouts (at a relatively high overhead by employing highly interactive recovery protocols that reveal dropout identities) <cit.> but not addition, or support both but at the expense of a constrained setup that places a cap on the number of clients who can participate <cit.>. To the best of our knowledge, supporting anonymity in a dynamic environment has been absent from the current state-of-the-art.
An open question. Therefore, we ask the following question: can we achieve private federated learning that supports users' anonymity in both static and dynamic settings?
§.§ Our Contributions
In this paper, we answer this question in the affirmative and propose a system called AnoFel that fulfills the requirements above. In particular, we make the following contributions.
System design. AnoFel utilizes various cryptographic primitives and privacy techniques to achieve its goals. To address anonymity, our system combines a public bulletin board, cryptographic commitments, non-interactive zero-knowledge proofs, and differential privacy such that users can participate in training without revealing their identities. This involves (1) an anonymous registration process guaranteeing that only legitimate clients with honestly-generated datasets can participate, and (2) an unlinkable model update submission that cannot be traced back to the client. We rely on anonymity sets and zero-knowledge proofs to achieve this, where a client proves owning a legitimate dataset (during registration), and being one of the registered clients (during model update submission), without revealing anything about their identity or registration information. Moreover, to address membership inference and model inversion attacks, which could also compromise participation anonymity, we employ differential privacy. A client, after training the model locally, will sample a noise value and add it to the model updates before encrypting them. The value of this noise is adaptive; it decreases as the number of active clients (those who have submitted updates so far in the iteration) increases. This reduces the impact on training accuracy without violating the privacy guarantees obtained by differential privacy, as elaborated later.
To support dynamic user participation and secure aggregation of model updates, our system employs threshold homomorphic encryption. It splits the roles of a model owner from the aggregators (where we use a committee of aggregators to distribute trust). Aggregators receive encrypted model parameter updates (gradients) from users, and at the end of a training iteration, they operate on these ciphertexts by (homomorphically) adding them to produce a ciphertext of the aggregation. Afterwards, the aggregators decrypt the result and send the aggregated plaintext updates to the model owner so that a new training iteration can be started. Due to this configuration, the clients are not involved in the aggregation or decryption processes and can (re)join and leave training at any time without interrupting the system operation. Furthermore, the bulletin board provides a persistent log accessible to all parties, and facilitates indirect communication between them to reduce interaction.
Formal security notions and analysis. We define a notion for private and anonymous federated learning that encompasses three properties: correctness, anonymity, and dataset privacy. Then, we formally prove the security of based on this notion. To the best of our knowledge, we are the first to provide such formal definition covering anonymity, and the first to build a provably secure client anonymity solution for private federated learning. Our notion could be of independent interest as it provides a rigorous foundation for other anonymity solutions to prove their security guarantees.
Implementation and evaluation. To show practicality, we implement AnoFel and empirically evaluate its performance covering different federated learning tasks. We demonstrate the scalability of our system by testing scenarios that involve large numbers of clients and contemporary models; the benchmarked architectures are the largest studied in the privacy-preserving federated learning literature.
We also show that the augmented components to support privacy and anonymity add reasonable overhead to client runtime.
For example, in a network of 512 clients, the client setup needed to join the training task takes less than 3 sec, and each training iteration for MNIST classification takes the client a total of 3.2 sec.
We also compare our system to prior work on privacy-preserving federated learning. For our largest benchmark on SqueezeNet architecture trained over TinyImageNet dataset with a network of 512 clients, is only 1.3× slower to finish a training round (in 17.5 sec) compared to Truex et al. <cit.> and 2.8× slower than Bonawitz et al. <cit.> (the former is a non-interactive private scheme while the latter is interactive, and neither support anonymity). Additionally, we evaluate the accuracy of models trained with and show that compared to a non-DP baseline, we obtain models with <0.5% accuracy loss in both independent and identically distributed (IID) datasets between clients and non-IID settings.
§ A SECURITY NOTION FOR PRIVATE AND ANONYMOUS FEDERATED LEARNING
In this section, we define a formal security notion for a private and anonymous federated learning scheme (PAFL). This notion, and its correctness and security properties, are inspired by <cit.>.
Notation. We use λ to denote the security parameter, α to denote the correctness parameter, γ to denote the privacy advantage of the adversary (α and γ are the parameters of the technique used to address membership attacks, if any), negl(·) to denote negligible functions, and boldface letters to represent vectors. The state variable represents the system state, including the data recorded on the bulletin board (those posted by all parties, and the public parameters of the cryptographic building blocks). The notation ^𝒪 means that an entity, in this case the adversary 𝒜, has oracle access to 𝒪. Lastly, ← denotes drawing at random, and PPT is a shorthand for probabilistic polynomial time.
Let Π be a protocol between a server S, set of aggregators , and a set of clients such that each client _i ∈ holds a dataset D_i. Let 𝐌 be the initial model that S wants to train, 𝐌_actual be the model produced by training 𝐌 over D_i (in the clear) for i = 1, …, ||, and 𝐌_Π be the trained model produced by the protocol Π. Π is a private and anonymous federated learning (PAFL) scheme, parameterized by bounds α and γ, if it satisfies the following properties for every 𝐌:
* α-Correctness: The model trained by Π achieves an error bound α with high probability compared to the actual model. That is, for α≥ 0, and an error function , with high probability, we have (𝐌_actual, 𝐌_Π) ≤α.
* Anonymity: Any adversary has a negligible advantage in winning the anonymity game. Formally, for a security parameter λ, there exists a negligible function negl such that 𝒜 wins with probability at most 1/2 + negl(λ), where the probability is taken over all the randomness used by 𝒜 and Π.
* γ-Dataset Privacy: Any adversary has a negligible additional advantage over γ in winning the dataset indistinguishability game. Formally, for a security parameter λ and γ≥ 0, there exists a negligible function negl such that 𝒜 wins with probability at most 1/2 + γ + negl(λ), where the probability is taken over all randomness used by 𝒜 and Π.
Intuitively, a PAFL scheme is one that is correct and provides anonymity and dataset privacy for clients. Ideally, correctness guarantees that the outcome of a PAFL scheme (i.e., the final trained model) is identical to what will be produced by a training scheme that gets full access to the clients' datasets. Anonymity means that no one can tell whether a client has registered or participated in any training iteration. In other words, a submitted model updates, or any other information a client uses for registration, cannot be traced back to this client. Dataset privacy means that no additional information will be leaked about the private datasets of honest clients beyond any prior knowledge the adversary has.
To make our definition more general, we account for the use of non-cryptographic privacy techniques, such as DP, that may result in accuracy and privacy loss. We do that by parameterizing our definition with α and γ standing for correctness (or accuracy loss) and indistinguishability (or privacy loss) parameters, respectively. Having α = γ = 0 reduces to the ideal case in which 𝐌_actual = 𝐌_Π, and an adversary has negligible advantage in breaking anonymity and data set privacy. The bounds for these parameters are derived based on the non-cryptographic privacy technique employed in the system.
We define two security games to capture anonymity and dataset privacy (the anonymity game and the dataset indistinguishability game, respectively), as well as the interfaces offered by a PAFL scheme. All parties receive the security parameter λ, and are given oracle access to the system. The oracle maintains the state of the system, including the set of registered clients and aggregators, and any additional information recorded on the board. Since the goal is to protect clients from the model owner S, we assume 𝒜 controls S and any subset of clients and aggregators. That is, 𝒜 can register any client or aggregator committee in the system, and can corrupt any of the registered clients and aggregators. The oracle supports the following query types:
* (, 1^λ): takes the security parameter as input and sets up the system accordingly—creating the bulletin board, the public parameters needed by all parties/cryptographic building blocks, and the bounds/parameters needed by any additional non-cryptographic privacy technique employed in the system. This command can be invoked only once.
* (, p, ): registers party p that could be a client or an aggregators committee. The field specifies the party type and its input: if p is a client, then will include its certified dataset and the certification information, while if p is an aggregator committee, will include the committee's public key. This command can be invoked anytime and as many times as desired.
* (, , ): instructs a (registered) client to train the model using its dataset and submit the model updates. The field defines the dataset that belongs to (the exact information is based on Π). This command can be invoked anytime and as many times as desired.
* (): returns the updated model (after aggregating all submitted individual model updates received in an iteration). For any iteration, this command can be invoked only once and only at the end of that iteration.
* (, p, ): This allows to corrupt party p, which could be a client or an aggregator. If p is a client, then will be the registration information of this client (e.g., in , it is the dataset commitment as we will see later), while if p is an aggregator, will be the public key of that party. This command can be invoked at anytime and as many times as wishes.
Note that this information only indicates that the party's type is a client; it does not contain the client's real identity.
Accordingly, the anonymity game proceeds as follows:
* b {0, 1}
* ← (, 1^λ)
* (_0, _0, _1, _1) ←^(1^λ, )
* (, _b, _b)
* continues to have access to
* At the end, 𝒜 outputs b'; if b' = b and:
* both _0 and _1 are honest,
* and at least two honest clients participated in every training iteration, and these clients remained honest until the end of the game,
then return 1 (meaning that 𝒜 won the anonymity game); otherwise, return 0.
Note that 𝒜 has access to the current state of the system at any time, and can see the updated bulletin board after the execution of any command. Also, 𝒜 can see all messages sent in the system, and can access the updated model at the end of any iteration, before and after submitting the challenge. For the chosen clients, _i represents their registration information (which does not include the client identity or its actual dataset D_i).[If the adversary knows the dataset of a client, then it knows that this client is part of the population, i.e., this client suffers from illness, for example; there is no point in hiding whether that client participated in training or not.] Since 𝒜 can choose any two clients for the challenge, the game also captures anonymity of registration. That is, if the registration information can be linked to a model update submission, then 𝒜 can always win the game.
The anonymity game includes several conditions to rule out trivial attacks. The two clients that 𝒜 selects must be honest; otherwise, if either is corrupt, it will be trivial to tell which client was chosen. Furthermore, since the aggregated model update is simply the summation of the individual updates, if only _b participates in the iteration during which the challenge command is executed, it might be trivial for 𝒜 to win (same for any other iteration if only one honest client participates). This is because 𝒜 can access the updated model at the end of that iteration and can extract _b's individual updates. Thus, we add the condition that at least two honest clients must have participated in every iteration, and these have to remain honest until the end of the game. (Also, any aggregator committee can have at most n-t corrupt parties, which is implicit in 𝒜's capabilities.)
The dataset indistinguishability game proceeds as follows:
* b {0, 1}
* ← (, 1^λ)
* (D_0, _0, D_1, _1) ←^(1^λ, )
* (, , (D_b, )), (, , )
* continues to have access to
* 𝒜 outputs b'; if b' = b, and:
* the challenge client is honest,
* and at least two honest clients participated in every training iteration, and these clients remained honest until the end of the game,
then return 1 (meaning that 𝒜 won the dataset indistinguishability game); otherwise return 0.
This game follows the outline of the anonymity game, but with a different construction of the challenge command to reflect dataset privacy. In particular, 𝒜 chooses two valid datasets (the accompanying field contains all information required to verify validity). The challenger chooses one of these datasets at random (based on the bit b), queries the oracle to register a client using this dataset, and then instructs this client to train the model using the dataset D_b. Note that the registration information in line 4 is constructed for this client based on the Π scheme. 𝒜 continues to interact with the system, and can access the updated model at the end of any iteration. As before, conditions are added to rule out trivial attacks. 𝒜 wins the game if it guesses correctly which of the datasets was chosen in the challenge. Being unable to guess after seeing the outcome of the challenge command, and even after accessing the aggregated model updates, means that a PAFL scheme does not reveal anything about the underlying datasets.
Our PAFL notion can be further generalized to have only the model owner S, i.e., no aggregators, so this party is the one who aggregates the individual model updates as well. Moreover, if preserving the privacy of the model is required, then our notion can be extended with a model privacy property to reflect that. Since model privacy is outside the scope of this work, we did not include this property to keep the definition simple.
It should be noted that our definition of anonymity (and so our scheme that satisfies this notion) does not leak negative information (i.e., a client has not participated in training). Both participation and the absence of participation are protected, i.e., identities of those who participate or do not participate are not revealed.
§ BUILDING BLOCKS
In this section, we provide a brief background on the building blocks employed in AnoFel, covering all cryptographic primitives that we use and the technique of differential privacy.
Commitments. A cryptographic commitment scheme allows hiding some value that can be opened later. It consists of three algorithms: 𝖲𝖾𝗍𝗎𝗉, 𝖢𝗈𝗆𝗆𝗂𝗍, and 𝖮𝗉𝖾𝗇. On input the security parameter λ, 𝖲𝖾𝗍𝗎𝗉 generates a set of public parameters 𝗉𝗉. To commit to a value x, the committer invokes 𝖢𝗈𝗆𝗆𝗂𝗍 with inputs 𝗉𝗉, x, and randomness r to obtain a commitment c. 𝖮𝗉𝖾𝗇(𝗉𝗉,c) opens a commitment by simply revealing x and r. Anyone can verify correctness of opening by computing c' = 𝖢𝗈𝗆𝗆𝗂𝗍(𝗉𝗉, x, r) and check if c = c'.
A secure commitment scheme must satisfy: hiding, meaning that commitment c does not reveal any information about x beyond any pre-knowledge the adversary has, and binding, so a commitment c to x cannot be opened to another value x' ≠ x (formal definitions can be found in <cit.>). These security properties enable a party to commit to their inputs (i.e., private datasets in our case), and publish the commitment publicly without exposing the private data.
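To fix ideas, the following is a minimal sketch of a hash-based commitment in the random oracle model; it is our illustration only (the helper names commit and verify_open are ours), not necessarily the exact construction used by the system.

import hashlib
import secrets

def commit(value: bytes):
    # c = H(value || r) with fresh 32-byte randomness r: hiding and binding in the ROM.
    r = secrets.token_bytes(32)
    c = hashlib.sha256(value + r).digest()
    return c, r

def verify_open(c: bytes, value: bytes, r: bytes) -> bool:
    # Opening reveals (value, r); anyone recomputes the hash and compares.
    return hashlib.sha256(value + r).digest() == c

c, r = commit(b"dataset-bytes")
assert verify_open(c, b"dataset-bytes", r)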
Threshold homomorphic encryption. Homomorphic encryption allows computing over encrypted inputs and producing encrypted outputs. Such operations include homomorphic addition and multiplication. That is, let ct_1 be a ciphertext of x_1, and ct_2 be a ciphertext of x_2, then ct_1 + ct_2 produces a ciphertext of x_1 + x_2, and ct_1 · ct_2 produces a ciphertext of x_1 · x_2 (the exact implementation of the homomorphic '+' and '·' vary based on the encryption scheme). Some encryption schemes support only one of these operations, e.g., Paillier scheme <cit.> is only additively homomorphic. Supporting both addition and multiplication leads to fully homomorphic encryption <cit.>. Since we focus on secure aggregation of model updates, we require only additive homomorphism.
A homomorphic encryption scheme is composed of three algorithms: 𝖪𝖾𝗒𝖦𝖾𝗇 that generates encryption/decryption keys (and any other public parameters), 𝖤𝗇𝖼𝗋𝗒𝗉𝗍 which encrypts an input x to produce a ciphertext ct, and 𝖣𝖾𝖼𝗋𝗒𝗉𝗍 which decrypts a ciphertext ct to get the plaintext input x. Correctness states that 𝖣𝖾𝖼𝗋𝗒𝗉𝗍 produces the original plaintext for any valid ciphertext produced by 𝖤𝗇𝖼𝗋𝗒𝗉𝗍, and in case of homomorphic operations, the correct result (add and/or multiply) is produced. Security is based on the regular security notion for encryption (i.e., semantic security or indistinguishability-based). In this work, we require indistinguishability against CPA (chosen-plaintext attacker).
The threshold capability is related to who can decrypt the ciphertext. To distribute trust, instead of having the decryption key known to a single party, it is shared among n parties. Thus, each of these parties can produce a partially decrypted ciphertext upon calling 𝖣𝖾𝖼𝗋𝗒𝗉𝗍, and constructing the plaintext requires at least t parties to decrypt. In a threshold homomorphic encryption scheme, 𝖪𝖾𝗒𝖦𝖾𝗇 will be a distributed protocol run by the n parties to produce one public key and n shares of the secret key, such that each party will obtain only her share (and will not see any of the others' shares). In AnoFel, we use the threshold Paillier encryption scheme <cit.>.
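As a small illustration of the additive homomorphism, the sketch below uses the python-paillier (phe) package. Note this is only a sketch: phe implements plain, non-threshold Paillier, so the distributed 𝖪𝖾𝗒𝖦𝖾𝗇 and t-out-of-n 𝖣𝖾𝖼𝗋𝗒𝗉𝗍 described above are not shown, and the example values are ours.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (already noised) local update entries.
client_updates = [[0.12, -0.05], [0.30, 0.07], [-0.11, 0.02]]
encrypted = [[public_key.encrypt(x) for x in update] for update in client_updates]

# Aggregators add ciphertexts coordinate-wise; only the sum is ever decrypted.
aggregate = []
for coords in zip(*encrypted):
    total = coords[0]
    for c in coords[1:]:
        total = total + c          # homomorphic addition of ciphertexts
    aggregate.append(total)

print([private_key.decrypt(c) for c in aggregate])   # approximately [0.31, 0.04]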
Zero-knowledge proofs. A (non-interactive) zero-knowledge proof (ZKP) system allows a prover, who owns a private witness ω for a statement x in language ℒ, to convince a verifier that x is true without revealing anything about ω. A ZKP is composed of three algorithms: 𝖲𝖾𝗍𝗎𝗉, 𝖯𝗋𝗈𝗏𝖾, and 𝖵𝖾𝗋𝗂𝖿𝗒. On input a security parameter λ and a description of ℒ, 𝖲𝖾𝗍𝗎𝗉 generates public parameters 𝗉𝗉. To prove that x ∈ℒ, the prover invokes 𝖯𝗋𝗈𝗏𝖾 over 𝗉𝗉, x, and a witness ω for x to obtain a proof π. To verify this proof, the verifier invokes 𝖵𝖾𝗋𝗂𝖿𝗒 over 𝗉𝗉, x, and π, and accepts only if 𝖵𝖾𝗋𝗂𝖿𝗒 outputs 1. In general, all conditions needed to satisfy the NP relation of ℒ are represented as a circuit. A valid proof will be generated upon providing valid inputs that satisfy this circuit. Some of these inputs could be public, which the verifier will use in the verification process, while others are private, which constitute the witness ω that only the prover knows.
A secure ZKP system must satisfy completeness, soundness, and zero-knowledge. Completeness states that any proof generated in an honest way will be accepted by the verifier. Soundness ensures that if a verifier accepts a proof for a statement x then the prover knows a witness ω for x. In other words, the prover cannot convince the verifier with false statements. Finally, zero-knowledge ensures that a proof π does not reveal anything about the witness ω. Many ZKP systems add a succinctness property, so that the proof size is constant and verification time is linear in the input size and independent of the NP relation circuit size. These are called zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). Formal definitions of ZKP systems can be found in <cit.>. In AnoFel, we use the proof system proposed in <cit.>.
Differential privacy. Differential privacy (DP) is a technique usually used to address attacks such as membership and inference attacks. That is, knowing a data point and the trained model, an attacker can tell if this datapoint was used in training the model. DP provides the guarantee that inclusion of a single instance in the training datasets causes a statistically insignificant change to the training algorithm output, i.e., the trained model. Thus, it limits the success probability of the attacker in membership attacks. Formally, DP is defined as follows <cit.> (where ϵ > 0 and 0 < δ < 1):
A randomized mechanism 𝒦 provides (ϵ, δ)-differential privacy if, for any two datasets D_0 and D_1 that differ in only a single entry, and for all R ⊆ Range(𝒦): Pr[𝒦(D_0) ∈ R] ≤ e^ϵ Pr[𝒦(D_1) ∈ R] + δ
We adopt the Gaussian DP mechanism as described in <cit.>, and apply the optimization for the secure aggregation setting in <cit.>. The clients sample noise from a Gaussian distribution 𝒩(0, σ^2) and add the noise to their model updates before encrypting and submitting them.
Each client i clips their model gradients g_i to ensure ∥g_i∥ < C where C is a clipping threshold for bounding the norm of gradients. Client i then sets sensitivity S_f = 2C/|D_i| where D_i is the i-th client's dataset, assuming gradients are shared after one epoch of local training. The noise scale is set as σ≥ cTS_f/ϵ where c ≥√(2ln(1.25/δ)) and T indicates exposures of local parameters (number of epochs or iterations), and satisfies (ϵ, δ)-DP <cit.> for ϵ < 1.
We apply the optimization in <cit.> by dividing the noise scale by the number of participating clients. Since only the aggregation of the updates is revealed, the individual noise value added by a client can be reduced while guaranteeing that the aggregate value maintains the desired level of privacy, leading to better accuracy.
Using these parameters, the advantage γ of the attacker in membership attacks can be computed as <cit.>:
γ≤ (1 - e^-ϵ + 2δ)(e^ϵ + 1)^-1
As for the error bound α, the Gaussian mechanism satisfies an absolute error bound α≤ O(R√(log k)), where k is the number of queries and R := S_f √(k log 1/δ)/ϵ <cit.> (a query here indicates one training round, so k is the number of training iterations, and S_f is the query sensitivity, which is the sensitivity of the training function). We use these bounds in our security proofs in Appendix <ref>.
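For concreteness, the bound on γ can be evaluated numerically; the parameter values below are illustrative only and are not taken from the paper's evaluation.

import math

def membership_advantage(epsilon, delta):
    # gamma <= (1 - e^{-eps} + 2*delta) * (e^{eps} + 1)^{-1}
    return (1 - math.exp(-epsilon) + 2 * delta) / (math.exp(epsilon) + 1)

print(round(membership_advantage(0.5, 1e-5), 3))   # 0.149
print(round(membership_advantage(1.0, 1e-5), 3))   # 0.17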
In describing our design, we use DP as a blackbox. A client invokes a function called 𝖣𝖯.𝗇𝗈𝗂𝗌𝖾 to sample the appropriate noise value. This makes our design modular, as any secure DP mechanism other than the one we employ can be used.
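A minimal sketch of what such a 𝖣𝖯.𝗇𝗈𝗂𝗌𝖾-style step could look like, following the clipping and noise-scale choices described above, is given below; the function name, the flat-gradient representation, and the parameter names are our assumptions.

import numpy as np

def dp_noisy_update(grads, clip_C, dataset_size, epsilon, delta, T, num_clients):
    # Clip the local gradient vector to norm at most clip_C.
    grads = np.asarray(grads, dtype=float)
    norm = np.linalg.norm(grads)
    clipped = grads * min(1.0, clip_C / max(norm, 1e-12))
    # Sensitivity and Gaussian noise scale as described above, with the
    # aggregation-aware reduction by the number of participating clients.
    S_f = 2.0 * clip_C / dataset_size
    c = np.sqrt(2.0 * np.log(1.25 / delta))
    sigma = (c * T * S_f / epsilon) / num_clients
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)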
§ DESIGN
AnoFel relies on four core ideas to achieve its goals: certification of clients' datasets to prevent impersonation, anonymity sets to support anonymous registration and model training, a designated aggregator committee to prevent accessing the individual model updates submitted by the clients, and a public bulletin board to facilitate indirect communication and logging. In this section, we present the design of AnoFel, showing the concrete techniques behind each of these ideas and how they interact with each other.
§.§ System Model
As shown in Figure <ref>, for any learning task there is a model owner S, a model aggregator set (or committee) of size n (we use a committee instead of a single aggregator to distribute trust), and a set of clients who want to participate in this learning task. During a training iteration, the clients will train the initial model locally over their private datasets and publish encrypted updates. S has to wait until the iteration is finished, after which the aggregators will aggregate the updates and grant S access to the aggregated value. AnoFel can support any aggregation method based on summation or averaging (e.g., FedSGD <cit.> and FedAVG <cit.>) given that the number of participants is public. All parties have access to a public bulletin board, allowing them to post and retrieve information about the training process. There could be several learning tasks going on in the system, each with its own server, aggregator committee, and clients, all using the same bulletin board.[In fact, it could be the case that the same parties are involved in several learning tasks, but they track each of these separately.] In the rest of this section, we focus on one learning task to explain our protocol; several learning tasks will separately run the same protocol between the involved parties.
§.§ Threat Model
We assume a secure and immutable public bulletin board available to all parties, which accepts only authenticated information that complies with predefined correctness rules.[This can be instantiated in a decentralized way using a blockchain with miners, or validators, verifying correctness.]
We adopt the following adversary model (we deal with PPT adversaries). For clients, we assume them to be malicious during registration (a malicious party may behave arbitrarily), while we assume these clients to be semi-honest during training (a semi-honest party follows the protocol but may try to collect any additional information). Thus, during registration, a client with an invalid (or poisoned) dataset may try to register, while during training, registered clients will use their valid (registered) datasets in training and submit valid updates. We assume the server S to be always malicious, so it may try to impersonate clients in the registration phase, collude with aggregators during the training phase, or manipulate the model posted at the beginning of each iteration. For the aggregators, we assume that at most n - t parties can be malicious, where t is the threshold required for valid decryption.
Since our goal is supporting anonymity, we do not assume any authenticated communication channels. Lastly, we work in the random oracle model where hash functions are modeled as random oracles.
§.§ System Workflow
AnoFel achieves anonymity and privacy by combining a set of cryptographic primitives, such as threshold homomorphic encryption and zero-knowledge proofs, differential privacy, and a public bulletin board. The latter is used to facilitate indirect communication between the parties and to create anonymity sets to disguise the participants. Our techniques of combining zero-knowledge proofs and anonymity sets are inspired by recent advances in private and anonymous cryptocurrencies <cit.>.
Each client must register before participation (step 1.b in Figure <ref>) by publishing on the board a commitment to the master public key and dataset this client owns (the commitments are never revealed and don't leak any information). Similarly, the aggregators must register by posting their public key on the board (step 1.c in Figure <ref>), which will be used by the clients to encrypt their model updates. As we use DP to protect against membership and inference attacks, the client samples a noise and adds it to their updates before encrypting them. The encrypted updates will be accompanied with a zero-knowledge proof (ZKP) attesting that a client is a legitimate and registered data owner, but without revealing the public key or the dataset commitment of this client. Thus, anonymity is preserved against everyone (the server, aggregators, other clients, or any other party).
As shown in the figure (steps 1.d and 3), a server publishes the initial model updates on the bulletin board, which are retrieved by the clients to be used in the local training. The use of this immutable public bulletin board avoids direct communication between the server and clients, which would compromise anonymity. Moreover, it addresses privacy attacks resulting from distributing different initial model parameters to clients (this type of attack has been recently demonstrated in <cit.>). In fact, AnoFel is the only private federated learning system that enjoys this advantage.
To prevent Sybil and data poisoning attacks, each dataset must be certified by its source, e.g., a hospital. Since exposing the certifier reveals the dataset type and impacts privacy, AnoFel creates an anonymity set for the certifiers by having all their public keys posted on the board (step 1.a in Figure <ref>). Thus, during registration, a client will provide a ZKP that its hidden (committed) dataset is signed by one of the certifiers, without specifying which one.
Furthermore, to hide which training activity a client is participating in, which if revealed would expose the dataset type, creates an anonymity set for the aggregators. That is, the system will have several ongoing training activities, each of which has its own committee. When submitting a model update, the client will choose a subset of these aggregators including the target who is managing the training activity the client is interested in. The client then encrypts the updates under the public keys of this subset—encrypting the actual updates for the target while encrypting 0 for the rest of the aggregators. Consequently, even if it is revealed that a client has submitted a model update, this will not expose which training activity this client is part of.
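To make this step concrete, the following client-side sketch uses the python-paillier (phe) package as a non-threshold stand-in for the threshold encryption scheme described above; the committee names, keys, and the update value are illustrative assumptions.

```python
# Sketch: encrypt the real update for the target committee and an encryption of 0
# for every other committee in the chosen anonymity set AG ("phe" is a
# non-threshold Paillier stand-in; committee names and values are hypothetical).
import random
from phe import paillier

committee_keys = {name: paillier.generate_paillier_keypair()[0]
                  for name in ["AG1", "AG2", "AG3"]}
target = "AG2"          # committee running the training activity the client joined
update = 0.137          # one (clipped, noised) model-update coordinate

AG = list(committee_keys)
random.shuffle(AG)      # shuffle to avoid ordering attacks on the target position

# c[j] encrypts the real update only under the target's key, and 0 otherwise.
c = [committee_keys[name].encrypt(update if name == target else 0.0) for name in AG]
```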
Accordingly, proceeds in three phases: setup, model training, and model access, which we discuss below.
§.§.§ Setup
The initial system setup includes creating the bulletin board, generating all public parameters needed by the cryptographic primitives used, and configuring the parameters for noise distribution and privacy/accuracy loss bounds of DP that all parties will use. Then, the certifiers, aggregators, and each client must run the setup process.[A PKI is needed to ensure the real identities of the certifiers, aggregators, and model owner. So, when any of these entities posts its public key on the board, this must be accompanied with a certificate (from a certificate authority) to prove that indeed the party owns this key.]
Certifiers. Each certifier posts its public key on the board. Let 𝒫𝒦_C be the set of all certifiers' public keys.
Aggregators. The aggregators run the setup of a threshold homomorphic encryption scheme to generate a public key and shares of the secret key. They post the public key on the board, while each party keeps its secret key share. To authenticate the public key, at least t of these aggregators must post it on the board (each posting is signed using the aggregator's own signing key).[The individual public keys of are managed by a PKI.]
Clients. The setup process for clients is more involved compared to the previous entities. This is a natural result of supporting anonymity. As shown in Figure <ref>, which is the detailed version of step 1.b in Figure <ref>, each client _i ∈, with a dataset D_i and a master keypair (_i, _i),[Clients need a PKI for their master public keys, so a certifier can check that a client owns the presented master public key. However, to preserve anonymity, these are hidden in the training process, and a certifier cannot link an encrypted model update to the master public keys.] obtains a certificate σ_i from its certifier—σ_i could be simply the certifier's signature over D_i _i. Then, _i commits to its dataset D_i (without revealing anything about it) as follows:
-0.3em
* Compute a commitment to D_i and _i as _i = H(D_i _i ), where H is a collision resistant hash function and is a fresh random string in ^λ.
* Generate a fresh digital signature keypair (_sig, _sig). Compute a = H(_sig) and = __i(a), where is a pseudorandom function, and serves as an authentication tag over the fresh key to bind it to the master keypair of the client. Similar to <cit.>, we instantiate the PRF as __i(a) = H(_i a).
* Generate a ZKP π to prove that the dataset is legitimate and owned by _i. In particular, this ZKP attests to the following statement (again without revealing anything about any of the private data that the client knows): given a commitment _i, a signature verification key _sig, a tag , and a bulletin board state index , client _i knows a dataset D_i, randomness , master keypair (_i, _i), a certifier's key _c, a certificate σ_i, and a signing key _sig, such that:
-0.3em
* D_i, _i, and are a valid opening for the commitment _i, i.e., _i = H(D_i _i ).[Note that the opening of the commitment is a private input the client uses to generate the proof (and same applies to the training phase as we will see shortly). This opening is never revealed to the public—The board only has the commitment, and a ZKP on its well-formedness that does not leak any information about any private data used locally to generate the proof.]
* σ_i has been generated using _c over D_i _i, and that _c ∈_C with respect to the set _C registered on the board at state with index .
* The client owns _i, i.e., knows _i that corresponds to _i.
* The tag over the fresh _sig is valid, i.e., compute a = H(_sig) and check that = __i(a).
* Sign the proof and the commitment: set m =(_i,, _sig, π, ), use _sig to sign m and obtain a signature σ__i.
* Post (m, σ__i) on the bulletin board.
As for verifying the third condition, i.e., the client knows _i, it is simply done by computing the public key based on the input _i and checking if it is equal to _i. Although _i is not recorded explicitly on the bulletin board, it is bound to the client since it is part of the dataset commitment _i certified by σ_i.
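For concreteness, a minimal sketch of the commitment and tag computation from the first two steps above is given below; SHA-256 stands in for the collision-resistant hash (the actual system uses a circuit-friendly Pedersen hash inside the ZKP), the proof generation itself is omitted, and all function and variable names are ours.

```python
# Client-side commitment cm_i = H(D_i || mpk_i || rho) and tag t = PRF_msk(H(vk_sig)),
# with the PRF instantiated as PRF_k(a) = H(k || a); ZKP generation is omitted.
import hashlib
import os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def registration_material(dataset: bytes, mpk: bytes, msk: bytes, vk_sig: bytes):
    rho = os.urandom(32)          # fresh commitment randomness (kept private)
    cm = H(dataset, mpk, rho)     # hiding/binding commitment to D_i and mpk_i
    a = H(vk_sig)
    tag = H(msk, a)               # binds the fresh signing key to the master keypair
    return cm, rho, tag           # (cm, tag) are posted on the board inside m
```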
To allow efficient proof generation with respect to the anonymity set _C, a Merkle tree is used to aggregate the key set _C as shown in Figure <ref>. Proving that a key _c ∈_C is done by showing a proof of inclusion (PoI) of that key in the tree. In other words, the circuit underlying the ZKP generation takes a membership path of the key and the tree root and verifies the correctness of that path. Thus, the cost will be logarithmic in the set size. The tree can be computed by the entities maintaining the board, with the root published on the board to allow anyone to use it when verifying the ZKP.
Note that a ZKP is generated with respect to a specific state of the anonymity set _C. This state is the root of the Merkle tree of this set, which changes when a new certifier joins the system. Such change will invalidate all pending ZKPs, and thus, invalidate all pending client registrations tied to the older state. To mitigate this, a client should specify the state index based on which the ZKP (and hence, the Merkle tree) was generated. So if the board is a series of blocks as in Figure <ref>, is the block index containing the root used in the proof.
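A minimal sketch of the Merkle-tree aggregation and the proof-of-inclusion check is shown below; the hash function and the odd-level padding convention are assumptions, and in the actual system the membership path is verified inside the ZKP circuit rather than in the clear.

```python
# Merkle tree over the certifier key set and a proof of inclusion (PoI); in
# the system the PoI is checked inside the ZKP circuit, not publicly as here.
import hashlib

def H2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def root_and_path(leaves, index):
    level, path = list(leaves), []
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], sibling > index))
        level = [H2(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], path

def verify_path(leaf, path, root):
    node = leaf
    for sibling, sibling_is_right in path:
        node = H2(node, sibling) if sibling_is_right else H2(sibling, node)
    return node == root

keys = [hashlib.sha256(f"certifier_pk_{i}".encode()).digest() for i in range(8)]
root, path = root_and_path(keys, index=5)
assert verify_path(keys[5], path, root)        # logarithmic-size membership proof
```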
All these conditions are modeled as an arithmetic circuit. The client has to present valid inputs that satisfy this circuit in order to generate a valid ZKP. Only registration with valid ZKPs will be accepted, where we let _CL be the set of all valid clients' commitments. Registration integrity is preserved by the security of the digital signature σ__i: if an adversary tampers with any of the information that a client submits—_i, , _sig, π, or —this will invalidate σ_ and will lead to rejecting the registration.
Note that clients can perform the setup at any time, and once their registration information is posted on the board, they can participate in the model training immediately. Thus, allows clients to join at any time during the training process, and each client can perform the setup phase on its own.
§.§.§ Model Training
At the beginning of each training iteration, S posts the initial model parameters on the bulletin board (step 1.d in Figure <ref>). Each client _i retrieves them and trains the model locally over its dataset (step 3 and 4 in Figure <ref>, respectively). Then _i shares the model updates privately and anonymously without revealing (1) which training activity it is participating in nor (2) the dataset commitment it owns (i.e., without revealing its identity). Client _i does that as follows (see Figure <ref>, which is the detailed version of step 5 in Figure <ref>):
-0.3em
* Choose a set of aggregators AG = {_1, …, _u} including the target , with AG_ denoting the public keys of these aggregators. Shuffle AG to avoid any ordering attacks (e.g., if the target is always placed first, this reveals the target training activity).
* Invoke to sample a noise value to be added to the model updates. As mentioned earlier, we apply the optimization in <cit.> by dividing the noise scale by the number of participating clients since the model updates will be decrypted after being aggregated.
* Encrypt the model updates under the target public key, while encrypt 0 under the public keys of the rest. This will produce 𝐜; a vector of u ciphertexts.[A client will have a fixed AG selected at the beginning. Changing AG between iterations must be done carefully; for a new AG', if AG ∩ AG' =, it would be trivial to tell which training activity a client is part of.]
* Generate a fresh digital signature keypair (_sig, _sig). Compute a = H(_sig) and = __i(a).
* Produce a ZKP π (with respect to the current state of the board at index ) attesting that: _i is a legitimate owner of a dataset, and that the fresh digital signature key was generated correctly. Thus, this ZKP proves the following statement: given a signature key _sig, and a tag , _i knows the opening of some commitment ∈_CL (this proves legitimacy), and that was computed correctly over _sig as before. Since we adopt the semi-honest adversary model during the training phase, there is no need to prove that 𝐜 has only one non-zero update.[Note that although we assume semi-honest clients during training, we still need the ZKP above to preserve integrity and make sure only registered clients participate. That is, a malicious adversary (who could be the server) may impersonate a client during training (without doing any registration), and it may alter the submitted updates (thus we need to prove that the signature is honestly generated by a registered client).]
* To preserve integrity, sign the proof, the ciphertext, and the auxiliary information. That is, set m = (𝐜, AG_, , _sig, , π) and sign m using _sig to produce a signature σ__i.
* Post (m, σ__i) on the bulletin board.
We use the Merkle tree technique to aggregate the commitment anonymity set _CL. A client provides a proof of inclusion of its commitment in the Merkle tree computed over _CL with respect to a specific state indexed by . The latter is needed since we allow clients to join anytime, and thus, _CL, and its Merkle tree, will change over time.
Based on the above, naturally supports dynamic client participation. As mentioned before, a client who wants to join can do that immediately after finishing the setup, while (registered) clients who do not wish to participate in a training iteration simply do not send any updates. does not need a recovery protocol to handle additions/dropouts since the setup of a client does not impact the setup of the system, , or other clients. Also, the model updates submitted by any client do not impact the updates submitted by others. Not to mention that any information needed to perform the setup is already on the bulletin board, so no interaction between the parties is needed. Furthermore, submitting model updates is done in one shot; a client posts (m, σ__i) on the board. Since we use non-interactive ZKPs, , and any other party, can verify the proof on their own.
§.§.§ Model Access
At the end of each training iteration, members retrieve all client updates—those that are encrypted under the committee's public key—from the board, and aggregate them using the additive homomorphism property of the encryption scheme (steps 6 and 7 in Figure <ref>, respectively). Then, each member decrypts the ciphertext using its secret key share, producing a partial decryption that is sent to S (step 8 in Figure <ref>). Once S receives at least t responses, it will be able to reconstruct the plaintext of the aggregated model updates and start a new iteration.
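A minimal sketch of this aggregation step is shown below, again using python-paillier as a non-threshold stand-in, so a single decryption replaces the t partial decryptions; the client updates are illustrative.

```python
# Aggregators add ciphertexts coordinate-wise via the additive homomorphism; the
# server only ever learns the sum of the (noised) updates, never individual ones.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

client_updates = [[0.12, -0.40], [0.05, 0.33], [-0.21, 0.08]]
ciphertexts = [[public_key.encrypt(u) for u in upd] for upd in client_updates]

aggregated = ciphertexts[0]
for ct in ciphertexts[1:]:
    aggregated = [a + b for a, b in zip(aggregated, ct)]   # homomorphic addition

print([round(private_key.decrypt(c), 6) for c in aggregated])   # ≈ [-0.04, 0.01]
```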
Signaling the end of a training iteration relies on the bulletin board. Adding a future block with a specific index will signal the end. Thus, the system setup will determine the block index of when training starts, and the duration of each iteration (in terms of number of blocks). Since all parties have access to the board, they will be able to know when each iteration is over. Another approach is to simply have S post a message on the board to signal the end of each iteration.
Although is a system for federated learning that involves several parties, it is not considered an interactive protocol. These parties do not communicate directly with each other—the bulletin board mediates this communication. When sending any message, the sender will post it on the bulletin board, and the intended recipient(s) will retrieve the message content from the board.
The concrete instantiation of the bulletin board impacts runtime. The board mediates communication between parties and must verify the validity of all information postings before accepting them. To speed up this process, the board can be formed from a sequence of blocks of information maintained by a committee of registered validators (similar to a blockchain but without the full overhead a regular blockchain introduces). Alternatively, a regular blockchain that takes advantage of recent optimized designs and scalability techniques
<cit.> can be adopted. We note that the concrete instantiation of the board is outside the scope of this work.
§.§ Extensions
Addressing a stronger adversary model.
assumes semi-honest clients in the training phase. Therefore, mitigating threats of (1) using a legitimate (registered and certified) dataset in a training activity of a totally different type—e.g., using medical data to train a model concerned with vehicles—and (2) encrypting non-zero values to other aggregators (other than the target ) in AG to corrupt their training activities, is out of scope. Nevertheless, we can make our adversary model stronger by considering a semi-malicious client who may attempt these attacks.
To mitigate the first attack, we require the certifier to add a dataset type to the dataset certificate, and hence, we require the client to prove that it has used a dataset with the correct type in training. That is, each training activity in the system will have a designated type , and a certifier will check that a dataset D_i is indeed of type (so it can be used to train any model of type ) before signing D_i _i. Also, the ZKP circuit a client uses in training must check that the target (that will receive a ciphertext of non-zero value) is managing a training activity with an identical . Otherwise, a valid proof cannot be generated. To mitigate the second attack, we can add another condition to be satisfied in the ZKP circuit (again the one a client uses during training) to validate the ciphertext 𝐜. That is, verify that only one ciphertext in 𝐜 is for a non-zero value while the rest are zeros. This requires providing the ZKP circuit with all plaintexts and the randomness used in encryption, so it can recompute the ciphertexts based on these, denoted as 𝐜', and check if 𝐜' = 𝐜.
Addressing malicious clients during training, i.e., these who may deviate arbitrarily from the protocol, can be done in a generic way using ZKPs as in <cit.>. That is, a model update will be accompanied with ZKP proofs on well-formedness, meaning that the registered dataset and the actual initial model parameters were used and training was done correctly. Extending to support that while preserving its efficiency level is part of our future work.
Reducing storage costs of the bulletin board. ML models may involve thousands of parameters. The server needs to post the initial values of these parameters on the board for each training iteration. Also, a client posts the updated version of all these parameters on the board. This is a large storage cost that may create a scalability problem, e.g., if the board is a blockchain this cost could be infeasible. To address this issue, we can employ any of the solutions currently used by the blockchain community, e.g., store the (ciphertext of) model parameters on a decentralized storage network, and post only the hash of them on the board (with a pointer to where the actual data is stored). Furthermore, once the data is used, i.e., a training iteration concluded, initial model parameters and all clients updates can be discarded, which reduces the storage cost significantly. Note that the initial model parameters are posted by the server, who is not anonymous, while the encrypted updates are posted by clients. Thus, an anonymous off-chain storage must be used (like an anonymous sidechain) to avoid compromising their anonymity.
§.§ Security of
realizes a correct and secure PAFL scheme based on the notion defined in Section <ref>. In Appendix <ref>, we formally prove the following theorem:
The construction of as described in Section <ref> is a correct and secure PAFL scheme (cf. Definition <ref>).
§ PERFORMANCE EVALUATION
In this section, we provide details on the implementation and performance evaluation of , and benchmarks to measure its overhead compared to prior work.
§.§ Implementation
For hash functions, we use the Pedersen hash function <cit.>,
with an alternative implementation using
Baby-Jubjub elliptic curve <cit.> and 4-bit windows <cit.>, which requires fewer constraints per bit than the original implementation.
For threshold additive-homomorphic encryption, we use the threshold version of Paillier encryption scheme <cit.> based on <cit.>. For digital signatures, we use EdDSA <cit.> over Baby-Jubjub elliptic curve based on <cit.>.
For zero-knowledge proofs, we use Groth16 zk-SNARKS <cit.> implemented in libsnark <cit.>.
We use PyTorch <cit.> to incorporate differential privacy, and implement the FedSGD <cit.> algorithm for federated learning.
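A bare-bones sketch of one FedSGD round (without the cryptographic layer) might look as follows; the model, loss function, learning rate, and client batches are placeholders.

```python
# One plain FedSGD round: every client computes a gradient on one local batch,
# the server averages the gradients and applies a single SGD step.
import torch

def fedsgd_round(model, client_batches, loss_fn, lr=0.1):
    grads = []
    for x, y in client_batches:                        # one (x, y) batch per client
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads.append([p.grad.detach().clone() for p in model.parameters()])
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            p -= lr * torch.stack([g[i] for g in grads]).mean(dim=0)
    return model
```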
Dataset and Models. We evaluate the performance of using three federated learning tasks. Our first benchmark is LeNet5 <cit.> architecture with 61.7K parameters trained on the MNIST <cit.> dataset.
Our second benchmark is ResNet20 <cit.> architecture with 273K parameters trained over the CIFAR10 <cit.> dataset.
Our third benchmark is SqueezeNet <cit.> with 832K parameters trained over TinyImageNet <cit.> dataset. This benchmark is the largest studied in private federated learning literature <cit.>.
Since batch normalization is not compatible with DP <cit.>, we replace all batch normalization layers with group normalization <cit.> in ResNet20 and SqueezeNet with negligible effect on accuracy.
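A short PyTorch sketch of this substitution is given below; the number of groups is an illustrative choice, falling back to a single group when it does not divide the channel count.

```python
# Recursively replace every BatchNorm2d layer by a GroupNorm layer with the same
# number of channels; num_groups = 8 is only an illustrative default.
import torch.nn as nn

def replace_bn_with_gn(module: nn.Module, num_groups: int = 8) -> None:
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            groups = num_groups if child.num_features % num_groups == 0 else 1
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
```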
Configuration.
For our runtime experiments, we consider a network of N={16,32,64,128,256,512} clients. We also assume one committee consisting of 3 aggregators. Runtimes are benchmarked on an Intel i9-10900X CPU running at 3.70GHz with 64GB of memory assuming 8 threads, and the mean of 5 runs is reported for each experiment. We present micro-benchmarks of components as well as end-to-end benchmarks for a training iteration. We also evaluate accuracy under IID and non-IID dataset settings. The DP parameters are set as ϵ=0.9, δ=10^-5, and norm clipping threshold C=1 for MNIST and C=2 for CIFAR10 and TinyImageNet benchmarks.
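The per-client clipping and noising step can be sketched as follows; the noise multiplier sigma implied by the (ϵ, δ) budget is treated as given (its derivation via a privacy accountant is omitted), and, as described earlier, the noise scale is divided by the number of participating clients.

```python
# Clip the update to L2 norm C and add Gaussian noise scaled down by the number
# of clients (only the aggregate is ever decrypted); sigma is assumed given.
import torch

def clip_and_noise(update: torch.Tensor, clip_norm: float, sigma: float,
                   num_clients: int) -> torch.Tensor:
    scale = torch.clamp(clip_norm / (update.norm() + 1e-12), max=1.0)
    clipped = update * scale
    noise = torch.randn_like(clipped) * (sigma * clip_norm / num_clients)
    return clipped + noise

# Example with the CIFAR10/TinyImageNet clipping threshold C = 2 and N = 16 clients.
noisy_update = clip_and_noise(torch.randn(273_000), clip_norm=2.0, sigma=1.1,
                              num_clients=16)
```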
§.§ Results
Runtime Overhead.
Table <ref> shows the performance of the zero-knowledge proof implementation of the clients' setup and training circuits. The prove and verify runtimes are reported for different numbers of clients participating in the learning task. The proof size is constant at 1019 bits. The setup prove overhead is a one-time cost for clients, while the training prove overhead is incurred in each training iteration. As the results suggest, the prover runtime for both setup and training increases sub-linearly with the number of clients, remaining under 2.5 sec for all experiments. The verifier runtime remains constant under 5 msec. Later we will show that the ZKP cost is only a fraction of the total cost and adds little overhead to the overall runtime.
Next, we show results for an end-to-end training iteration. Aggregators' runtime during training is found in Figure <ref> for different numbers of clients. Aggregators receive the ciphertexts from clients, perform aggregation using homomorphic addition over ciphertexts followed by a partial decryption after which the result is sent to the model owner.
The decryption cost only depends on the size of the model, while the cost of aggregating ciphertexts increases with the number of clients.
For our largest network of clients with 512 participants, the aggregator runtime is 13.3, 56.8, and 174.6 sec for MNIST, CIFAR10, and TinyImageNet benchmarks respectively.
Figure <ref> presents the client runtime with detailed breakdown for a training iteration with 512 clients for our benchmarks. The breakdown includes the cost of client local model training, model update encryption for the target aggregator committee, and ZKP generation. The cost of noising updates, generating keypairs, signatures, and hash computations are negligible, thus omitted from the plot.
Local model training cost is measured for training over a batch of 1024 images on a GeForce RTX 2080 Ti GPU, taking 0.616, 0.791, and 1.276 sec for MNIST, CIFAR10, and TinyImageNet benchmarks respectively.
The remaining protocol costs (encryption and ZKP) are measured using the CPU described in configuration.
For MNIST, the overhead of generating the ZKP is 39%, and for the larger CIFAR10 and TinyImageNet benchmarks the ZKP overhead constitutes 17% and 7% of the total runtime. For all benchmarks, the runtime cost is dominated by the model update encryption step.
Communication Overhead. The communication overhead of the parties during setup and training phases are as follows.
Clients: Client i's setup involves the certifier's signature (96 B), and posting m = (_i, , _sig, π, ) and its signature (360 B). Each training iteration involves obtaining model parameters (121 KB, 527 KB, and 1.6 MB of 16-bit updates for LeNet5, ResNet20, and SqueezeNet respectively) and posting m = (𝐜, AG_, , _sig, , π) and its signature (9.2 MB, 41 MB, and 124 MB for LeNet5, ResNet20, and SqueezeNet respectively).
Aggregators: During setup, aggregators post their (signed) public key (160 B). In each training iteration, aggregators receive encrypted updates from clients and send partial decryptions to model owner (9.2 MB, 41 MB, and 124 MB for LeNet5, ResNet20, and SqueezeNet respectively).
Model Owner: During each training iteration, model owner posts the model updates (124 KB and 546 KB for Lenet5 and ResNet20, respectively), and obtains partial ciphertexts from aggregators (9.2 MB, 41 MB, and 124 MB for LeNet5, ResNet20, and SqueezeNet respectively).
Comparison to Baseline. To better understand the performance of , we provide a comparison to prior work on privacy-preserving federated learning. We are not aware of any other framework that provides anonymity guarantees similar to , and therefore we chose two recent systems for privacy-preserving federated learning, namely, Truex et al. <cit.> and Bonawitz et al. <cit.>. Truex et al. present a non-interactive protocol that deploys threshold homomorphic encryption for secure aggregation, and Bonawitz et al. develop an interactive protocol based on masking to protect the client updates during aggregation.
We benchmarked the client runtime for a training iteration in the Bonawitz et al. protocol based on the implementation found at <cit.> with fixes to allow more than 40 clients, and the Truex et al. protocol using the implementation found at <cit.> for different numbers of clients. The results are shown in Figure <ref>.
When compared to the Truex et al. protocol, is at most 2×, 1.4×, and 1.3× slower on MNIST, CIFAR10, and TinyImageNet respectively for different numbers of participants.
Compared to Bonawitz et al. protocol, is at most 3.7×, 7.7×, and 12.1× slower on MNIST, CIFAR10, and TinyImageNet respectively. With larger number of clients, the runtime gap between Bonawitz et al. and reduces. For 512 participating clients is only slower by 1.1× on MNIST, 2.1× on CIFAR10, and 2.8× on TinyImageNet. Our results demonstrate the scalability of our framework; the cost of its additional anonymity guarantees, that are not supported by prior work, is relatively low especially in large scale scenarios.
Model Accuracy. We evaluate the model test accuracy when incorporating DP and compare to a non-DP baseline in Figure <ref>. For these experiments we assume the number of clients N=16. Figure <ref>(a-c) presents the convergence characteristics of our benchmarks assuming IID datasets among clients. achieves 99.27%, 90.60%, and 40.30% test accuracy on MNIST, CIFAR10, and TinyImageNet datasets respectively.
We also present accuracy results for non-IID setting, following the distribution described in <cit.> on CIFAR10. As depicted in Figure <ref>(d), achieves an accuracy of 76.61%. Across all benchmarks, obtains models with less than 0.5% accuracy loss compared to non-DP models.
§ RELATED WORK
Private federated learning. Bonawitz et al. <cit.> is among the earliest works on private federated learning. It handles client dropouts (but not additions) using an interactive protocol. The proposed scheme does not support anonymity; each client is known by a logical identity, which in the malicious setting must be tied to a public key (through a PKI) to prevent impersonation. A follow-up work <cit.> optimized the overhead of <cit.> and supported the semi-malicious model—the server is only trusted to handle client registration. This also violates anonymity since the server has full knowledge of the clients. The works <cit.> also targeted the efficiency of <cit.>, and all inherit its lack of anonymity and its interactivity issues.
Truex et al. <cit.> use homomorphic encryption and differential privacy to achieve secure aggregation. This makes it easy to handle dropouts—since a user's update is independent of others'—but not additions, since all users must be known at the setup phase to get shares of the decryption key. The proposed scheme relies on clients to decrypt the aggregated updates—which introduces excessive delays, and requires the server to know all clients and communicate with them directly—thus violating anonymity. Xu et al. <cit.> avoid the distributed decryption process and allow for user additions (up to a maximum cap per iteration). However, this comes at the expense of introducing a trusted party to run the system setup and help in decrypting the aggregated model after knowing who participated in each iteration. Ryffel et al. <cit.> employ function secret sharing and assume fixed client participation, with a (semi-honest) server that knows all clients and communicates with them directly. Thus, client anonymity is not supported. Mo et al. <cit.> use a trusted execution environment to achieve privacy. Besides not supporting anonymity, trusting hardware is problematic due to the possibility of side-channel and physical attacks.
addresses the limitations of prior work: it supports client anonymity and does not involve clients in the aggregation process.
Accordingly, not only reduces overhead, but supports
dynamic participation without any additional recovery protocol or any trusted party. This is in addition to addressing recent attacks resulting from disseminating different initial models to clients <cit.> as explained previously.
Anonymity and Federated Learning.
Several techniques were proposed to anonymize the dataset itself before using it in training, such as differential privacy <cit.>, k-anonymity <cit.>, l-diversity <cit.>, and t-closeness <cit.> (a survey can be found in <cit.>). These techniques are considered complementary to : they allow anonymizing a dataset, and our system guarantees client's anonymity in the sense that no one will know if this client participated in training or which updates they have submitted.
The works <cit.> target the same anonymity notion as in .
Domingo et al. <cit.> utilize probabilistic multi-hop routes for model update submission, with the clients known by fixed pseudonyms instead of their real identities. However, such pseudonyms provide only pseudoanonymity; several studies showed how network and traffic analysis can link these pseudonyms back to their real identities <cit.>. Also, their anonymity guarantee is based on assuming that clients do not collude with the model owner, a strong assumption that avoids. Li et al. <cit.> use interactive zero-knowledge proofs to achieve client anonymity when submitting model updates. Their approach, however, suffers from several security and technical issues: First, any party can generate a secret key and pass the proof challenge, not necessarily the intended client. Second, in this protocol, some parameters must be made public, but no details are given on how to do this in an anonymous way. Third, there is no discussion of how to preserve message integrity, making the protocol vulnerable to man-in-the-middle attacks.
The scheme proposed in <cit.> works at the physical layer; it randomly samples a subset of clients' updates and aggregates their signals before submitting them to the model owner. The proposed protocol assumes clients are trusted and does not discuss how to preserve the integrity of the communicated updates. Zhao et al. <cit.> introduce a trust assumption to achieve anonymity; a trusted proxy server mediates communication between clients and the model owner. Lastly, Chen et al. <cit.> use a modified version of Tor to preserve anonymity; users authenticate each other and then negotiate symmetric keys to use for encryption. However, the negotiation and authentication processes are interactive, and the model owner records all clients' (chosen) identities, putting anonymity at risk due to the use of fixed identities. Thus, none of these systems supports anonymity in a provably secure way as does.
§ CONCLUSION
In this paper, we presented , the first framework for private and anonymous user participation in federated learning. utilizes a public bulletin board, various cryptographic building blocks, and differential privacy to support dynamic and anonymous user participation, and secure aggregation. We also introduced the first formal security notion for private federated learning covering client anonymity. We demonstrated the efficiency and viability of through a concrete implementation and extensive benchmarking covering large scale scenarios and comparisons to prior work.
§ ACKNOWLEDGMENT
The work of G.A. is supported by UConn's OVPR Research Excellence Program Award.
plain
§ PROOF OF THEOREM 1
To prove Theorem <ref>, we need to prove that does not impact training correctness, and that no adversary can win the security games defined in Section <ref> for anonymity and dataset privacy with non-negligible probability.
Intuitively, satisfies these properties by relying on the correctness and security of the underlying cryptographic primitives, and the bounds on accuracy and privacy loss offered by DP as employed in our system. The use of a secure zero-knowledge proof guarantees: completeness (a valid honest proof generated by a client will be accepted by the bulletin board validators and the aggregators ), soundness (a client that does not own a certified dataset cannot register, and a client that does not belong to the registered set cannot forge valid proofs during training), and zero-knowledge (so the proof does not reveal anything about the master public key of the client or its dataset).
Furthermore, the use of a semantically secure threshold homomorphic encryption scheme guarantees that the ciphertexts of the model updates do not reveal anything about the underlying (plaintext) updates, and adding them will produce a valid result of the sum of these updates. The use of a secure commitment scheme guarantees that a commitment posted by a client _i hides the dataset D_i and binds this client to D_i. The security of the digital signature scheme and the PKI guarantees that a malicious adversary cannot forge a certificate for a corrupted dataset, and that a man-in-the-middle attacker cannot manipulate any of the messages that a client, aggregator, or a server send. Also, under the assumption that at least t members of are honest, this guarantees that S will not have access to the individual updates submitted by clients.
Moreover, the use of a (ϵ,δ)-differential privacy technique leads to small advantage for the attacker in membership attacks as given by equation <ref>, and a small error (or loss in accuracy) bound as detailed in Section <ref>. We use these parameters in our proofs below.
Formally, the proof of Theorem <ref> requires proving three lemmas showing that is correct, anonymous, and supports dataset privacy. For correctness, we remark that does not impact training correctness and accuracy. So if a defense mechanism against inference attacks is employed, and this mechanism provides a trade-off with respect to accuracy (as in differential privacy), will not impact that level.
satisfies the correctness property as defined in Definition <ref>.
Correctness follows by the correctness of the homomorphic encryption scheme and the security of the digital signature, as well as the accuracy level provided by DP. A semi-honest client in the training phase will perform training as required and encrypt the updates and post them on the board. Since uses an existential unforgeable digital signature scheme, a malicious attacker cannot modify the ciphertext of the updates without invalidating the signature, and cannot forge a valid signature over a modified ciphertext. Thus, it is guaranteed that all accepted model updates ciphertexts are the ones produced by the client. Also, since uses a correct (and secure) homomorphic encryption scheme, the homomorphic addition of the ciphertexts will produce a ciphertext of the sum of the actual updates (with their added noise level by DP). By the correctness of the decryption algorithm, after decrypting the sum ciphertext, the server will obtain the correct value of the aggregated updates in each training iteration. This trained model differs from the actual one by the error bounds α obtained from DP, thus satisfying α-correctness. This completes the proof.
For anonymity, as we mentioned previously, inference attacks will have no impact on anonymity unless the leaked datapoint contains sensitive data (like the identity of the client)—so this attack assumes that the adversary got hold of the dataset or part of it. As an extra step, a pre-processing technique can be used to remove sensitive attributes from the dataset; satisfies that and renders membership attacks ineffective in compromising anonymity (note that if the attacker gets hold of a client's dataset and knows the client's identity, then the privacy of that client is already compromised). The definition of our anonymity property does not assume the adversary knows the dataset or identity of the clients involved in the challenge.[Even if we allow that, we can define a γ-anonymity property where the attacker wins with probability bounded by γ inherited from DP.]
satisfies the anonymity property as defined in Definition <ref>.
Under the assumption that at least one honest client (other than ) has submitted updates during the challenge training iteration (as described in the game definition earlier), accessing the aggregated model updates at the end of any iteration will not provide with any non-negligible advantage in winning the game. Thus, the proof is reduced to showing that all actions introduced by preserve anonymity. We prove that using a similar proof technique to the one in <cit.>, where we show a series of hybrids starting with an with b = 0 (_0), and finishing with an game with b = 1 (_7). By showing that all these hybrids are indistinguishable, this proves that cannot tell which client was chosen for the challenge command in . Now, we proceed with a sequence of hybrid games as follows:
_0: The game with b = 0.
_1: Same as _0, but we replace the zero-knowledge proofs with simulated ones, i.e., we invoke the zero-knowledge property simulator for each of the and queries, and we replace the actual proofs in the output of these queries with simulated ones. The hybrids _0 and _1 are indistinguishable by the zero-knowledge property of the ZKP system that uses. That is, if can distinguish them, then we can build another adversary ' that can break the zero-knowledge property, which is a contradiction.
_2: Same as _1, but we replace (_0, _0, _1, _1) with fresh output (_0', _0', _1', _1'). That is, we choose fresh datasets and register two fresh clients using them. So if created a state with n clients, any query for any of the n clients other than _0 and _1 will proceed as in _1. However, if it is for _0 or _1, we replace them with _0' or _1' and proceed.
The hybrids _1 and _2 are indistinguishable by the zero-knowledge property of the ZKP system and the hiding property of the commitment scheme that uses (which implies that client registration is indistinguishable). That is, if can distinguish them, then we can build two adversaries: ' that can break the zero-knowledge property of the ZKP system, and ” that can break the hiding property of the commitment scheme, which is a contradiction.
_3: Same as _2, but we replace the output of training any of (_0, _0, _1, _1) with fresh output produced by training (_0', _0', _1', _1'). As above, if created a state with n client registrations, any query for any of the n clients other than _0 and _1 will proceed as in _2. However, if the train query is for _0 or _1, we replace them with training output based on the fresh datasets owned by _0' or _1' and proceed.
_2 and _3 are indistinguishable by the zero knowledge property of the ZKP system and the semantic security of the homomorphic encryption scheme used in (which implies that training is indistinguishable). If can distinguish them, then we can build two adversaries: ' that can break the zero-knowledge property of the ZKP system, and ” that can break the semantic security of the encryption scheme, which is a contradiction.
_4: Same as _3, but with b = 1. The hybrids _3 and _4 are indistinguishable by the indistinguishability of model training as described above.
_5: Same as _4, but with (_0, _0, _1, _1) used in training as in the original game. So this is _3 with b = 1. The hybrids _4 and _5 are indistinguishable by the indistinguishability argument of _3 and _2.
_6: Same as _5, but with (_0, _0, _1, _1) used in registration as in the original game. So this is _2 with b = 1. The hybrids _5 and _6 are indistinguishable by the indistinguishability argument of _2 and _1.
_7: Same as _6, but with real ZKPs instead of the simulated ones. So this is the original with b = 1. The hybrids _6 and _7 are indistinguishable by the indistinguishability argument of _1 and _0.
This shows that with b = 0 is indistinguishable from with b = 1, which completes the proof.
As for dataset privacy, membership attacks will allow to win the dataset privacy game with advantage γ as he selects the datasets involved in the challenge. Thus, our proof proceeds in two stages: first, we show that the cryptographic primitives used in do not provide with any non-negligible advantage, and second, by the security guarantees of DP, this attacker has an advantage bounded by γ due to membership attacks.
satisfies the dataset privacy property as defined in Definition <ref>.
As defined in the , the adversary chooses two datasets D_0 and D_1. Then, the challenger picks one of these datasets at random, registers a client with that dataset, and invokes the command for that client. gets to see the output of the registration and training commands, which are the messages and signatures that a client sends in the setup and training phases of as described before.
In order to win the , can attack the registration or the training process. That is, for the former tries to reveal which dataset is hidden in the posted commitment or obtain information about the witness underlying the submitted proof, which contains the dataset D_b in this case. While for the latter, may try to infer any information about the plaintext of the model updates ciphertext (recall that the model parameters in a training iteration are public, and thus, can produce the model updates resulted from the use of D_0 and D_1). Note that attacking the ZKP to reveal any information about the commitment that was used, and then attacking that commitment, reduces to the same case of attacking the registration process.
Attacking registration means attempting to break the hiding property of the commitment scheme and the zero-knowledge property of the ZKP system. Since uses a secure commitment scheme, the former will succeed with negligible probability _1(λ). Also, since uses a secure ZKP system that satisfies the zero-knowledge property, the latter will succeed with negligible probability _2(λ). Attacking the training process means attempting to break the semantic security of the encryption scheme. Since uses a semantically secure encryption to encrypt the model updates, such an attack will succeed with negligible probability _3(λ).
Accordingly, 's advantage due to the cryptographic primitives that we use is _1(λ) + _2(λ) + _3(λ) = (λ).
Now, can query the oracle to access the updated model and perform a membership attack. That is, knows both datasets and queries the model over various datapoints to see which dataset was used in training. The success of this strategy is bounded by the privacy loss γ given by equation <ref>.
Thus, the probability that wins in the is 1/2+ (λ) + γ, which completes the proof.
Proof of Theorem <ref>. Follows by Lemmas <ref>, <ref>, and <ref>.
|
http://arxiv.org/abs/2306.09050v2
|
20230615111439
|
A CLT for the difference of eigenvalue statistics of sample covariance matrices
|
[
"Nina Dörnemann",
"Holger Dette"
] |
math.ST
|
[
"math.ST",
"math.PR",
"stat.TH"
] |
A CLT for the difference of eigenvalue statistics of sample covariance matrices
Nina Dörnemann, Holger Dette
July 31, 2023
===============================================================================
In the case where the dimension of the data grows at the same rate as the sample size, we prove a central limit theorem for the difference of a linear spectral statistic of the sample covariance matrix and a linear spectral statistic of the matrix that is obtained from the sample covariance matrix by deleting a column and the corresponding row. Unlike previous works, we require neither that the population covariance matrix is diagonal nor that moments of all orders exist. Our proof methodology incorporates subtle enhancements to existing strategies,
which meet the challenges introduced by determining the mean and covariance structure for the difference of two such eigenvalue statistics. Moreover, we also establish the asymptotic independence of the difference-type spectral statistic and the usual linear spectral statistic of sample covariance matrices.
AMS subject classification:
15A18, 60F05
Keywords and phrases: central limit theorem, linear spectral statistic, sample covariance matrix
§ INTRODUCTION
Let _n be a p× p Hermitian nonnegative definite matrix and _n=(x_ij)_1≤ i ≤ p, 1 ≤ j ≤ n a p× n random matrix with independent centered and standardized entries. The sample covariance matrix is defined by
_n = 1/n_n^1/2_n _n^⋆_n^1/2,
and numerous authors have worked on the probabilistic properties of the spectrum of _n in the high-dimensional regime, where the dimension p=p_n is increasing with the sample size n. In a seminal paper, <cit.> proved the weak convergence of the spectral distribution of the empirical covariance matrix to the Marčenko-Pastur distribution in the case p/n→ y ∈ (0,∞), and <cit.> showed the convergence to the semicircle law if p/n→ 0. These results have been extended by many authors for various models, see <cit.> and <cit.> as examples for early references. Moreover, <cit.> dropped the independence structure in the columns of the sample covariance matrix, <cit.> and <cit.> considered separable sample covariance matrices in the case p/n→ 0,
and <cit.> and <cit.>
discussed the limit of the spectral distribution
of sample autocovariance matrices
for linear times series. We also mention the recent work of <cit.>, who allow for different distributions in the columns of the data matrix. The extreme eigenvalues of
_n have been investigated by
<cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> among many others.
A further line of research focuses on the asymptotic properties of linear spectral statistics of the matrix _n, which are defined as integrals of an appropriate function with respect to the spectral distribution. In a by-now classical paper in this field, <cit.> proved a CLT for linear eigenvalue statistics of sample covariance matrices for a class of analytic functions under a Gaussian-type 4th moment condition. By imposing additional structural assumptions on the eigenvectors of _n, <cit.> generalized this result, allowing for distributions with a general 4th moment. Moreover, <cit.> showed that the Lévy-Prohorov distance between the distribution of a
linear spectral statistic with three times differentiable functions and a Gaussian distribution converges to zero, where mean and covariance of this random variable may diverge. Other extensions, among many noteworthy contributions, include <cit.> on a substitution principle for the non-centered case, <cit.>, <cit.> on the ultra-high dimensional case p/n→∞, <cit.> on a sequential model, and <cit.>, <cit.> on the asymptotic independence of spiked eigenvalues and linear spectral statistics.
In this work, we contribute to this discussion from a different perspective
and provide a central limit theorem for the difference of eigenvalue statistics of the matrix _n and its submatrix ^(-q)_n if
lim_p,n→∞p / n =y >0, where
the matrix ^(-q)_n is obtained from _n by deleting
the qth row and qth column (1≤ q ≤ p).
In contrast to the problems discussed in the previous paragraph, the literature on this topic is much scarcer.
<cit.> investigated this problem for a Wigner matrix.
To our best knowledge, we are only aware of two references considering the sample covariance matrix, which concentrate on the null case (=).
<cit.> showed that the difference of two linear spectral statistics satisfies a central limit theorem if the underlying data are i.i.d. governed by a distribution with existing moments of all orders. <cit.> concentrated on the difference of two logarithmic linear spectral statistics.
As the arguments in these references heavily depend on the assumption _n =, they do not provide an immediate pathway to show weak convergence results in a more general context.
In this paper, we go beyond the existing literature by dropping the assumption _n = and provide a CLT for the difference of two linear spectral statistics of the matrices
_n and ^(-q)_n. We also establish the joint convergence of the difference of eigenvalue statistics of _n and ^(-q)_n for q∈{q_1, q_2}.
Moreover, we investigate the joint asymptotic distribution of eigenvalues statistics of _n and the difference of such statistics corresponding to _n and _n^(-q). Subsequently, we show that the diagonal entries of the sample precision matrix _n and the eigenvalue statistics of _n are asymptotically independent.
From a technical point of view, our results hold for independent random variables x_ij with existing moments of order 5. Thus, besides the consideration of a general population covariance matrix, we require
much weaker assumptions on the data compared to <cit.> who considered i.i.d. random variables with existing moments of all order.
For the proofs, we use the common approach of <cit.>
passing to the
corresponding Stieltjes transforms. However,
it is crucial to note that the
consideration of the difference of two linear spectral statistics requires subtle refinements
in the analysis of the process of the difference of the Stieltjes transforms.
Indeed, there are inherent challenges
when studying the difference of two linear spectral statistics compared to a single statistic due to an upscaling effect.
It is well-known that assumptions based solely on the spectrum of the population covariance matrix are insufficient to guarantee the convergence of the expected value and variance
of linear spectral statistics
_n unless we have a fourth moment of Gaussian-type <cit.>. In contrast, when considering differences of linear spectral statistics, we are able to control the bias without any further structural assumptions on _n, and thus an assumption that guarantees the convergence of the covariance suffices for our analysis.
§ DIFFERENCE OF DEPENDENT LINEAR SPECTRAL STATISTICS
For the statement of our main result, we require some
notation.
Let
F^𝐀 = 1/p∑_j=1^p δ_λ_j (𝐀),
be the empirical spectral distribution of a p× p Hermitian matrix 𝐀,
where λ_1 (𝐀) ≥…≥λ_p (𝐀) are the ordered eigenvalues of 𝐀
and δ_a denotes the Dirac measure at a point a ∈ℝ,
and define _1 ∘_2 as the Hadamard product of the matrices _1, _2 ∈^p.
Moreover, for a p× p matrix 𝐀 and some 1≤ q ≤ p, the (p-1) × (p-1) matrix 𝐀^(-q) denotes the submatrix of 𝐀 where the qth row and qth column are deleted.
Finally, if 𝐁 is a (p-1)× (p-1) matrix, then 𝐁̃^(-q) denotes the p× p matrix which is generated from 𝐁 by inserting an additional column and row at position q filled with zeros. If 𝐁 is nonsingular, then we define
𝐁̃^-:= 𝐁^(-q)
as the matrix which is obtained from 𝐁^-1 by inserting
an additional column and row with zeros at position q.
A useful tool in random matrix theory is the Stieltjes transform
s_F(z) = ∫1/λ - z dF (λ)
of a distribution function F on the real line, which is here considered
on the upper complex plane, that is
for
z∈ℂ^+ = { z ∈ℂ : (z) > 0 }. If F= F^𝐀 is an empirical spectral distribution, then its Stieltjes transform can be represented as
s_F^𝐀 (z) = 1/p tr{ ( 𝐀 - z 𝐈 )^-1}, z ∈ℂ^+.
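For instance, the empirical Stieltjes transform can be evaluated directly from the resolvent; the following numpy sketch (dimensions and the test matrix are purely illustrative) is only meant to fix ideas.

```python
# Evaluate s_{F^A}(z) = p^{-1} tr (A - zI)^{-1} for a Hermitian test matrix A,
# here a sample covariance matrix with Sigma = I; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400
X = rng.standard_normal((p, n))
A = X @ X.T / n

def stieltjes(A: np.ndarray, z: complex) -> complex:
    return np.trace(np.linalg.inv(A - z * np.eye(A.shape[0]))) / A.shape[0]

print(stieltjes(A, 0.5 + 1.0j))
```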
Standard results on
the spectral properties of the sample covariance matrix
<cit.> show that
under certain conditions, with probability 1,
the empirical spectral distribution F^_n
converges weakly. The limit, say
F^y,H, is the so-called generalized Marčenko-Pastur distribution defined by its Stieltjes transform s = s_F^y,H, which is the unique solution of the equation
s (z) = ∫ [ λ( 1 - y - y z s (z) ) - z ]^-1 dH(λ)
on the set { s ∈ℂ^+ : 1-y/z + y s ∈ℂ^+ }.
Here, H denotes the limit of the spectral distribution
H_n = F^_n of the population covariance matrix _n, which will be assumed to exist
throughout this paper, and y ∈ (0,∞) is the limit of the dimension-to-sample-size ratio y_n=p/n. For the following discussion,
define for _n the (n × n)-dimensional companion matrix
Σ̂ _n= 1/n𝐗_n^⋆_n 𝐗_n
and denote the limit (if it exists) of its spectral distribution
F^Σ̂_n
and its corresponding Stieltjes transform by
F̲^y,H and s̲(z)=s_F̲^y,H (z),
respectively. A straightforward calculation (using (<ref>)) shows that this
Stieltjes transform satisfies the equation
z = - 1/s̲(z) + y ∫λ/(1 + λs̲(z)) dH(λ).
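Indeed, since the companion matrix shares the non-zero eigenvalues of _n, the empirical spectral distributions satisfy n F^Σ̂_n = (n-p) δ_0 + p F^_n, which translates into the identity
s̲(z) = - (1-y)/z + y s(z), z ∈ℂ^+,
for the limiting Stieltjes transforms. Combining this identity with the defining equation of s yields the equation displayed above.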
Note that both _n and _n^(-q) share the same limiting spectral distribution H <cit.>. This implies that also the sample versions _n and _n^(-q) share the same limiting spectral distribution F^y,H, characterized by its Stieltjes transform s through the equation (<ref>).
These observations indicate that an upscaling (compared to the usual linear eigenvalue statistics) is necessary to obtain
non-degenerate limit distributions
for the difference of such two statistics, that is, for integrals of the form ∫ f(x) d G_n,q (x),
where f is a given function,
the random (signed) measure G_n,q on ℝ
is defined by
G_n,q(x) = p ( F^_n(x) - F^y_n, H_n (x) )
- (p-1) ( F^_n^(-q)(x) - F^(p-1)/n, H_nq (x) ), 1≤ q ≤ p,
and F_y_n,H_n, F_(p-1)/n, H_nq are finite-sample versions of the generalized Marčenko-Pastur distribution defined by (<ref>). Here H_nq = F^_n^(-q) denotes the spectral distribution of the matrix _n^(-q).
In fact, our main result shows that the sequence
(X_n(f,q_1), X_n(f,q_2))_n∈, 1 ≤ q_1, q_2 ≤ p,
converges weakly with a non-degenerate limit, where
X_n(f,q) = √(n)∫ f(x) d G_n,q (x), 1 ≤ q ≤ p .
For the proof of this and other statements we require several assumptions. In the following let κ =1 for the complex case and κ = 2 for the real case and q_1, q_2 ∈{ 1 , … , p }.
* For each n, the random variables x_ij=x_ij^(n) are independent with 𝔼 x_ij = 0, 𝔼 |x_ij|^2=1, 𝔼 x_ij^2=κ - 1, ν_4 = 𝔼 |x_ij|^4 < ∞ does not depend on i,j, and max_i,j,n𝔼 |x_ij|^5 < ∞.
* (_n)_n∈ℕ is a sequence of p× p Hermitian non-negative definite matrices with bounded spectral norm,
and the sequence of spectral distributions (F^_n)_n∈ℕ converges to a proper c.d.f. H.
* For z_1, z_2∈ℂ^+,
we assume the existence of the limits
g_ℓ_1, ℓ_2(z_1, z_2)
= lim_n→∞{ + (z_1) _n _n _ℓ_1ℓ_2
- (z_1) + s(z_1) _n _n
^(-ℓ_2) + s(z_2) _n^(-ℓ_2)^-_n
_ℓ_1ℓ_2}
for ( ℓ_1 , ℓ_2) = (q_1,q_2), (q_1,q_1) and (q_2,q_2).
*
For any fixed η >0, it holds
lim_n→∞1/np∑_i,j𝔼[ |x_ij|^5 I( |x_ij | ≥√(n)η ) ] = 0.
*
For z_1, z_2∈ℂ^+,
we assume the existence of the limits
h_ℓ_1, ℓ_2 (z_1, z_2) = lim_n→∞ {_n + s(z_1) _n - ^(-ℓ_1) + s(z_1) _n^(-ℓ_1)^-
∘_n + s(z_2) _n - ^(-ℓ_2) + s(z_2) _n^(-ℓ_2)^-},
for ( ℓ_1 , ℓ_2) = (q_1,q_2), (q_1,q_1) and (q_2,q_2).
We are now in the position to formulate our main result.
Let f_1, f_2 be functions, which are analytic on an open region containing the interval
[ lim inf_n→∞λ_p(_n) I_(0,1) (y) (1-√(y))^2 ,
lim sup_n→∞λ_1(_n) ( 1+ √(y ) )^2 ].
Then, under assumptions <ref>-<ref>, the random vector
( X_n(f_1,q_1), X_n(f_2, q_2) )
converges weakly to a centered Gaussian random vector (X(f_1, q_1), X(f_2, q_2))
with covariance
(X(f_1, q_1), X(f_2, q_2))
= κ/4 π^2 ∫_𝒞_1∫_𝒞_2 f_1(z_1) f_2(z_2)σ^2 (z_1,z_2, q_1, q_2)
dz_2 dz_1
+ ν_4 - κ - 1/4 π^2 ∫_𝒞_1∫_𝒞_2 f_1(z_1) f_2(z_2)τ^2(z_1, z_2, q_1, q_2)
dz_2 dz_1
,
where 𝒞, 𝒞_1, 𝒞_2 are arbitrary closed,
positively oriented contours in the complex plane
enclosing the interval in (<ref>), 𝒞_1, 𝒞_2
are non-overlapping, and the functions σ^2(z_1,z_2, q_1, q_2) and τ^2(z_1,z_2, q_1, q_2) are defined in (<ref>) and (<ref>), respectively.
We would like to comment on our assumptions and compare our result to previous works.
* In the meanwhile classical work <cit.>, a CLT for the eigenvalue statistics of _n was proven and attracted many researchers to work on related problems.
It was also pointed out by these authors that the mean and variance of such statistics do not only depend on the eigenvalues of the population covariance matrix captured by assumption <ref>, but, under a non-Gaussian-type 4th moment, also on the eigenvectors of _n, which cannot be controlled by such a condition. While <cit.> rely on a Gaussian-type 4th moment condition ν_4 = κ +1 in order to circumvent this problem, many researchers relaxed their assumptions in several directions. For example, <cit.> imposed a condition on _n which ensures the convergence of the additional terms for mean and variance arising in the case ν_4 ≠κ +1, while <cit.> verified that the Lévy-Prohorov distance between the linear statistics' distribution and a normal distribution, whose mean and variance may diverge, vanishes asymptotically.
In this work, we consider a different type of statistic, namely a difference of linear spectral statistics of two highly dependent sample covariance matrices. The conditions <ref> and <ref> ensure the convergence of the variance in our setting. The latter condition is inspired by formula (1.17) in <cit.>. While one needs to impose a further assumption such as condition (1.18) in <cit.> for the convergence of the mean when investigating the standard linear spectral statistics of _n, an additional assumption for proving that the bias is negligible in our setting is in fact not necessary.
Although our contribution, like the aforementioned works, utilizes the tools provided by <cit.>, it is important to highlight that the weak convergence of the statistic examined in our study cannot be inferred from prior findings. In particular, the computation of the mean and covariance presents a non-trivial challenge, since the difference of the two linear spectral statistics fluctuates on a significantly smaller scale than each individual statistic.
* We emphasize that the condition <ref> is not necessary if the data admits a fourth moment of Gaussian type, that is, ν_4 = κ + 1.
Furthermore, if _n is a diagonal matrix with diagonal entries Σ_ii = Σ_ii^(n), 1 ≤ i ≤ p, then we have h_q_1,q_2(z_1, z_2)=0 for 1≤ q_1 ≠ q_2 ≤ p and
h_q, q (z_1, z_2)
= lim_n→∞Σ_qq^2/( (1 + s̲(z_1) Σ_qq ) (1 + s̲(z_2) Σ_qq ) ) .
In this case, the limits in condition <ref> satisfy g_q_1,q_2(z_1, z_2)=0 for 1 ≤ q_1 ≠ q_2 ≤ p, and
g_q,q(z_1,z_2) = lim_n→∞Σ_qq/(1 + s̲(z_1) Σ_qq ) .
(for a proof see the discussion surrounding equation (<ref>)
in Section <ref>). Summarizing, in the diagonal case, the conditions <ref> and <ref> can be replaced by assuming that the limits lim_n→∞Σ_qq for q∈{q_1,q_2} exist.
* For our proof, we assume that moments up to order 5 exist, which is needed for sharper concentration inequalities of certain quadratic forms involving the random variables x_ij, 1≤ i ≤ p, 1 ≤ j ≤ n.
This assumption might be improved to the optimal 4th moment condition, but we do not pursue this direction. Indeed, the condition on the 5th moment is a substantial improvement compared to previous results. In particular, the work by <cit.> provides a special case of Theorem <ref> for the null case _n =, and the authors assumed the existence of moments of all orders (and that the random variables x_ij are i.i.d.). Moreover, our result provides the joint convergence of the difference of linear spectral statistics corresponding to functions f_1, f_2, while the work <cit.> covers the case of a single difference corresponding to one function f_1 in the case _n = and q_1=q_2.
On the other hand,
they allow for a less regular class of functions used in the definition of the eigenvalue statistics.
Using the Helffer–Sjöstrand formula <cit.>,
we expect that one can obtain similar results as presented in this paper under weaker smoothness assumptions on the function f.
* The Lindeberg-type condition <ref> ensures a proper truncation of the random variables x_ij. Note that this assumption is somewhat stronger compared to
(9.7.2) in <cit.> due to the different scaling needed when considering the difference of eigenvalue statistics.
We conclude this section by studying the joint limiting distribution of linear spectral statistics
X_n(f) =
∫ f(x) d G_n (x),
and their differences X_n(f,q),
where the random measure G_n is defined by
G_n(x) = p ( F^_n(x) - F^y_n, H_n (x) ),
and f is some appropriate function, as in Theorem <ref>.
The Gaussian limiting distribution X(f) of (X_n(f))_n∈
is characterized in Theorem 1.4 of <cit.>, who require weaker moment conditions than the original work of <cit.> <cit.>.
The following result provides the joint limiting distribution of the usual linear spectral statistics X_n(f) and the difference-type statistics X_n(f,q) considered in this work.
The proof can be found in Section <ref>.
Under the assumptions of Theorem <ref> and Theorem 1.4 of <cit.>, the sequences (X_n(f_1,q)) and (X_n(f_2)) are asymptotically independent. Thus, the joint limiting distribution of (X_n(f_1,q), X_n(f_2))^⊤ is (X(f_1,q), X(f_2))^⊤
, where X(f_1,q) is defined in Theorem <ref>,
and X(f_2) is the Gaussian limiting distribution of (X_n(f_2))_n∈
characterized in Theorem 1.4 of <cit.> (independent of X(f_1,q)).
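As a quick numerical illustration of this asymptotic independence (not part of the formal argument), the following simulation sketch assumes real standard Gaussian data with Σ_n = 𝐈, takes f_1 = log for the difference-type statistic and f_2(x) = x for the classical linear spectral statistic, and reports their empirical correlation; the dimensions and the number of replications are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, reps = 400, 200, 0, 2000
t_diff = np.empty(reps)   # difference-type statistic, f_1 = log
t_full = np.empty(reps)   # classical linear spectral statistic, f_2(x) = x
for r in range(reps):
    x = rng.standard_normal((p, n))
    s = x @ x.T / n                                        # sample covariance matrix
    s_q = np.delete(np.delete(s, q, axis=0), q, axis=1)    # q-th variable removed
    t_diff[r] = np.sqrt(n) * (np.linalg.slogdet(s)[1] - np.linalg.slogdet(s_q)[1])
    t_full[r] = np.trace(s)
# deterministic centerings do not affect the correlation
print("empirical correlation:", np.corrcoef(t_diff, t_full)[0, 1])
```

For large n the printed correlation should be close to zero, in line with the asymptotic independence stated above.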
§ SOME SPECIAL CASES
In the null case Σ_n = 𝐈, the contour integrals describing the covariance structure of the limiting Gaussian random vector in Theorem <ref> can be expressed via integrals over the unit circle, which allows an explicit calculation for given functions f_1, f_2.
The proof of the following result is postponed to Section <ref>.
Let h = √(y)∈ (0,∞), _n = 𝐈, q_1, q_2 ∈ and let f_1 and f_2 be functions which are analytic on an open region containing the interval in (<ref>). For the random vector ( X(f_1,q_1), X(f_2, q_2)) given in Theorem <ref>, we have the following covariance structure
(X(f_1,q_1) , X(f_2,q_1) )
= - κ/2 π^2lim_r_2 > r_1,
r_1, r_2 ↘ 1∮_|ξ_1|=1∮_|ξ_2|=1
f_1 ( 1 + h r_1 ξ_1 + h r_1ξ_1 + h^2 )
f_2 ( 1 + h r_2 ξ_2 + h r_2ξ_2 + h^2 )
× r_1 r_2 (r_1 r_2 ξ_1 + ξ_2) /h^2 (r_1 r_2 ξ_1 - ξ_2)^3 d ξ_2 d ξ_1
- ν_4 - κ - 1/2 π^2lim_r_2 > r_1,
r_1, r_2 ↘ 1∮_|ξ_1|=1∮_|ξ_2|=1
f_1 ( 1 + h r_1 ξ_1 + h r_1ξ_1 + h^2 )
×f_2 ( 1 + h r_2 ξ_2 + h r_2ξ_2 + h^2 ) 1/h^2 r_1 r_2 ξ_1^2 dξ_2 d ξ_1,
(X(f_1,q_1) , X(f_2,q_2) ) = 0, q_1 ≠ q_2.
If the functions f_1, f_2 are explicitly specified, the integrals in Proposition <ref> can be calculated. In the following corollary we will demonstrate this for some examples. Its proof is deferred to Section <ref>.
Let q_1, q_2 ∈, p/n → y ∈ (0,∞) and _n = 𝐈. We assume that conditions <ref> and <ref> hold true. Then, we have
√(n)( _n - _n^(-ℓ) - 1
) ^⊤_ℓ=q_1,q_2 𝒩_2 ( 0, ( 2 κ + (ν_4 - κ - 1) ) _2 ),
√(n)( ( _n^2 ) - ( _n^(-ℓ ) )^2 - ( 1 + 2p/n )
) ^⊤_ℓ=q_1,q_2 𝒩_2 ( 0, d _2 ) ,
where d= ( 8 κ ( 1 +3y +y^2) + 4 (ν_4 - κ - 1) (1+y)^2 ). If y∈(0,1), we also have
√(n)( log| _n | - log| _n^(-ℓ)| - log( n - p + 1/n)
)^⊤_ℓ=q_1,q_2 𝒩_2 ( 0, ( κ / ( 1 - y) + (ν_4 - κ - 1) )
_2
) .
The choice of the logarithm reveals an interesting connection to another type of random matrix, namely the sample precision matrix _n.
In particular, in the case f(x) = log (x), the difference of linear spectral statistics corresponding to _n and _n^(-q) is basically the logarithmic diagonal entry of _n. More precisely, Theorem <ref> provides a multivariate central limit theorem for ( (_n)_q_1,q_1, ( _n)_q_2,q_2).
Combining this with the delta method, we can extend the result in <cit.>, where the authors imposed a diagonal assumption on the population covariance matrix _n and worked in the i.i.d. setting.
In the case where _n is a diagonal matrix and y∈ (0,1), we can confirm the result in <cit.> on the diagonal entries of _n using Corollary <ref> and the delta method.
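As a sanity check of the last display of the corollary (illustrative only, not used in any proof), the following Monte Carlo sketch simulates real standard Gaussian data, for which κ = 2 and ν_4 - κ - 1 = 0, so that the limiting variance of the centered log-determinant difference reduces to κ/(1 - y); the dimensions and the number of replications below are ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, reps = 500, 250, 0, 2000
y = p / n
stat = np.empty(reps)
for r in range(reps):
    x = rng.standard_normal((p, n))
    s = x @ x.T / n                                        # sample covariance matrix
    s_q = np.delete(np.delete(s, q, axis=0), q, axis=1)    # q-th variable removed
    stat[r] = np.sqrt(n) * (np.linalg.slogdet(s)[1]
                            - np.linalg.slogdet(s_q)[1]
                            - np.log((n - p + 1) / n))
print("sample mean     :", stat.mean())    # should be close to 0
print("sample variance :", stat.var())     # should be close to kappa / (1 - y)
print("predicted value :", 2 / (1 - y))
```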
§ PROOFS
§.§ Main steps in the proof of Theorem <ref>
We begin with the usual truncation argument and may assume without loss of generality that the entries of _n additionally satisfy |x_ij| ≤η_n √(n). Using Assumption <ref>, this step can be formally justified by similar arguments as given in Section 9.7.1 of <cit.>.
A frequently used powerful tool in random matrix theory is the Stieltjes transform. This
is partially explained by the formula
∫ f(x) dG(x) = 1/2π i∫∫_𝒞f(z)/z-x dz dG(x)
= - 1/2 π i∫_𝒞 f(z) s_G(z) dz,
where G is an arbitrary cumulative distribution function (c.d.f.) with a compact support, f is an arbitrary analytic function on an open set, say O, containing the support of G, 𝒞 is a positively oriented contour in O enclosing the support of G and s_G
denotes the Stieltjes transform of G. Note that (<ref>) follows from
Cauchy’s integral formula <cit.> and Fubini’s theorem. Thus invoking the continuous mapping theorem, it may suffice to prove weak convergence for the sequence (M_n,q)_n∈, where
M_n,q(z) = √(n){ p s_F^ (z) - s_F^y_n, H_n (z)
- (p - 1) s_F^^(-𝐪) (z) - s_F^(p - 1)/n, H_nq (z) }, z ∈𝒞.
Here, s_F^y_n, H_n denotes the Stieltjes transform of the generalized Marčenko–Pastur distribution F^y_n, H_n characterized through the equation
s_F^y_n, H_n(z) = ∫1/λ 1 - y_n - y_n z s_F^y_n, H_n(z) - z dH_n (λ).
A similar formula to (<ref>) holds true for s_F^(p - 1)/n, H_nq.
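For concreteness, the defining equation of s_F^y_n, H_n can be solved numerically. The following sketch (an illustration only; it assumes a discrete H_n supported on given population eigenvalues, and the damping factor, tolerance and dimensions are ad hoc choices) computes the transform by a damped fixed-point iteration and cross-checks it against the Stieltjes transform of a simulated empirical spectral distribution.

```python
import numpy as np

def mp_stieltjes(z, pop_eigs, y_n, max_iter=5000, tol=1e-12, damping=0.5):
    """Solve s = mean( 1 / (lam * (1 - y_n - y_n * z * s) - z) ) for Im(z) > 0."""
    lam = np.asarray(pop_eigs, dtype=complex)
    s = -1.0 / z                                    # crude starting value
    for _ in range(max_iter):
        s_new = np.mean(1.0 / (lam * (1.0 - y_n - y_n * z * s) - z))
        if abs(s_new - s) < tol:
            return s_new
        s = damping * s + (1.0 - damping) * s_new   # damping stabilises the iteration
    return s

# cross-check against a simulated empirical spectral distribution (here Sigma_n = I)
rng = np.random.default_rng(2)
n, p = 2000, 1000
x = rng.standard_normal((p, n))
eigs = np.linalg.eigvalsh(x @ x.T / n)
z = 1.5 + 0.3j
print("fixed point :", mp_stieltjes(z, np.ones(p), p / n))
print("empirical   :", np.mean(1.0 / (eigs - z)))
```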
The contour 𝒞 in (<ref>) has to be constructed in such a way that it encloses the support of F^y_n, H_n, F^(p-1)/n, H_nq as well as F^ and F^^(-q) with probability 1 for sufficiently large n∈. Note that F^ and F^^(-q) have the same limiting spectral distribution <cit.>.
In order to prove the weak convergence of (<ref>),
we define a contour 𝒞 as follows. Let x_r be any number greater than the right endpoint of the interval (<ref>) and v_0 >0 be arbitrary. Let x_l be any negative number if the left endpoint of the interval (<ref>) is zero. Otherwise, choose
x_l ∈ 0, lim inf_n→∞λ_p() I_(0,1) (y) (1-√(y))^2 .
Let
𝒞_u = { x + i v_0 : x∈ [x_l , x_r] } ,
𝒞^+ = { x_l + i v : v ∈ [0,v_0] } ∪ 𝒞_u ∪ { x_r + i v : v ∈ [0,v_0] },
and define 𝒞 = 𝒞^+ ∪𝒞̄^+, where 𝒞̄^+ = { z̄ | z ∈𝒞^+ }.
Next,
consider a sequence (ε_n)_n ∈ converging to zero such that for some α∈ (0,1)
ε_n ≥ n ^-α,
define
𝒞_l =
{ x_l + iv : v ∈ [n^-3/2ε_n, v_0] }
𝒞_r = { x_r + i v : v ∈ [ n^-3/2ε_n , v_0 ] },
and consider the set 𝒞_n = 𝒞_l ∪𝒞_u ∪𝒞_r.
We define an approximation
M̂_n,q of the random variable M_n,q for z=x + iv∈𝒞^+ by
M̂_n,q (z) =
M_n,q(z) if z ∈𝒞_n,
M_n,q(x_r + i n^-3/2ε_n ) if x=x_r, v∈ [0,n^-3/2ε_n ],
M_n,q (x_l + i n^-3/2ε_n ) if x=x_l, v∈ [0,n^-3/2ε_n ].
In Lemma <ref> below, it is shown that (M̂_n,q)_n∈ approximates (M_n,q)_n∈ appropriately in the sense that the corresponding linear spectral statistics
- 1/2 π i∫_𝒞 f(z) M_n,q(z) dz and
- 1/2 π i∫_𝒞 f(z) M̂_n,q(z) dz
in (<ref>)
coincide asymptotically. As a consequence, the weak convergence of the process (<ref>) follows from that of M̂_n,q, which is established in the following theorem. The proof is given in Section <ref>.
Note that we use a different definition of M̂_n,q in (<ref>) in contrast
to formula (9.8.2) in <cit.> and formula (6.4) in <cit.>, which is essential for Lemma <ref> to be correct. Indeed, we replaced n^-1 by n^-3/2 in the definition of M̂_n,q, 𝒞_l and 𝒞_r. Although this change is crucial for our theory, it does not affect the results of <cit.> and <cit.> significantly, in the sense that most of the auxiliary results in these papers remain valid also under the new definition of M̂_n,q.
Under the assumptions of Theorem <ref>, the sequence ((M̂_n,q_i (z))_z∈𝒞^+, i∈{1,2})_n∈ defined in (<ref>)
converges weakly to a centered Gaussian process (M_q_i(z))_z∈𝒞^+, i∈{1,2} in the space 𝒞(𝒞^+ ) ^2 . The covariance kernel of the limiting process
is given by
(M_q_1 (z_1 ), M_q_2 (z_2 ))
= [ M_q_1 (z_1) - [ M_q_1 (z_1)] M_q_2 (z_2) - [ M_q_2 (z_2) ] ]
= κσ^2(z_1,z_2, q_1, q_2) + ( ν_4 - κ - 1) τ^2(z_1,z_2, q_1, q_2) ,
z_1, z_2 ∈𝒞^+, q_1, q_2 ∈,
where σ^2(z_1,z_2, q_1, q_2) and τ^2(z_1,z_2, q_1, q_2) are defined in (<ref>) and (<ref>), respectively.
Such a reduction of the linear spectral statistics to the process of Stieltjes transforms has become standard in the literature on random matrices and thus, we will omit more details on the proof of Theorem <ref> using Theorem <ref>. Indeed, the arguments in the proof of Theorem <ref> on the basis of Theorem <ref> are almost identical to those given in Section 6.2 of
<cit.>, and therefore omitted. The novelty of our techniques lies in the proof of Theorem <ref>, on which we will concentrate in the following sections.
§.§ Proof of Theorem <ref>
To begin with, we decompose the process M_n,q(z) = M_n,q^(1) (z) + M_n,q^(2) (z) into a random and a deterministic part, where
M_n,q^(1) (z)
= √(n) p s_F^(z) - (p-1) s_F^^(-q)(z)
- [ p s_F^(z) - (p-1) s_F^^(-q)(z) ] ,
M_n,q^(2) (z) = √(n)[ p s_F^(z) - (p-1) s_F^^(-q)(z) ]
- p s_F^y_n, H_nq (z) + (p - 1) s_F^(p - 1)/n, H_nq (z)
.
The assertion of Theorem <ref> follows from the following results, whose proofs are carried out in the subsequent sections. Our first result provides the convergence of the finite-dimensional distributions of (M_n,q^(1))_n∈. Its proof relies on a central limit theorem for martingale difference schemes and is given in Section <ref>.
It holds
for all k∈, z_1,…,z_k ∈ℂ with Im(z_i) ≠ 0
( M_n,q_1^(1)(z_1), … , M_n,q_1^(1)(z_k), M_n,q_2^(1)(z_1), … , M_n,q_2^(1)(z_k) )^⊤
( M_q_1(z_1), … , M_q_1(z_k), M_q_2(z_1), … , M_q_2(z_k) )^⊤ ,
where (M_q_1(z), M_q_2(z))_z ∈𝒞^+ is a centered Gaussian process with covariance structure given in Theorem <ref>.
Next, we define the process
M̂_n,q^(1) in the same way as M̂_n,q in (<ref>)
replacing M_n,q by M_n,q^(1)
and show in Section <ref>
the following tightness
result.
Under the assumptions of Theorem <ref>, the sequence (M̂_n,q^(1))_n∈ is tight in the space 𝒞 (𝒞^+).
The third step is an investigation of
the deterministic part. In particular, we show in Section <ref> that the bias (M_n,q^(2) )_n∈ converges uniformly to zero.
Under the assumptions of Theorem <ref>, it holds
lim_n→∞sup_z∈𝒞_n | M_n,q^(2)(z)
|
= 0.
The assertion of Theorem <ref> follows from Theorems <ref>, <ref> and <ref>.
§.§ Preliminaries for the proofs
For a p× p matrix 𝐀, we define 𝐀^(-q,·) as the (p-1)× p submatrix of 𝐀, where the qth row is deleted. Similarly, we set 𝐀^(·, -q) as the p× (p-1) submatrix of 𝐀, where the qth column is deleted.
Furthermore, 𝐀^(q,q) contains only the qth row and qth column of 𝐀 and is elsewhere filled with zeros. Similarly, 𝐀^(q,·) (or 𝐀^(·,q)) contains only the qth row (or column) of 𝐀, and is elsewhere filled with zeros.
Moreover, recall that if 𝐁 is a (p-1)× (p-1) matrix, then 𝐁̃^(-q) denotes the p× p matrix which is generated from 𝐁 by inserting an additional column and row at position q filled with zeros. Whenever it is clear from the context, the dependency on q is omitted in the notation. For example, we write _j(q) instead of _j(q)^(-q), where the matrix _j(q) is defined below. If the (p-1) × (p-1) matrix 𝐁 is nonsingular, then we define
𝐁̃^-:= 𝐁^(-q)
as the matrix which is obtained from 𝐁^-1 by inserting
an additional column and row with zeros at position q.
For j=1 … , n,
let _j denote the conditional expectation
with respect to the filtration ℱ_nj=σ( {𝐱_1,...,𝐱_j} )
(by _0 we denote the common expectation).
Moreover, for the sake of simple notation, we write and ^(-q) for the matrices _n and ^(-q)_n in the proofs.
Furthermore, we define for 1 ≤ j ≤ n, 1 ≤ q ≤ p the following quantities
_j =
1/√(n)^1/2_j , _jq = 1/√(n)^1/2^(-q, ·)_j,
= ∑_j=1^n _j _j^⋆, ^(-q) = ∑_j=1^n _jq_jq^⋆,
(z) = - z_p, _(q)(z) = ^(-q) - z_p-1,
_j(z) = - z_p - _j_j^⋆, _j(q)(z) = ^(-q) - z_p-1 - _jq_jq^⋆,
α_j (z) = 𝐫_j^⋆𝐃_j^-2 (z) 𝐫_j
- n^-1tr ( 𝐃_j^-2 (z) ), α_j(q) (z) = 𝐫_jq^⋆𝐃_j(q)^-2 (z) 𝐫_jq
- n^-1tr ( 𝐃_j(q)^-2 (z) ^(-q) ),
β_j(z) = 1/1 + _j^⋆_j(z) _j , β_j(q) (z) = 1/1 + _jq^⋆_j(q)(z) _jq,
β_j (z) = 1/1+ n^-1tr(𝐃_j (z) ) , β_j(q) (z) = 1/1+ n^-1tr(^(-q)𝐃_j(q) (z) ) ,
b_j(z) = 1/1+ n^-1[ tr(𝐃_j (z) ) ] ,
b_j(q) (z) = 1/1+ n^-1[ tr(^(-q)𝐃_j(q) (z) ) ] ,
_qj(z) = ^1/2_j^-2(z) ^1/2
- ^1/2^(·,-q)_j(q)^-2(z) ^1/2^(-q,·),
_qj(z) = ^1/2_j(z) ^1/2 - ^1/2^(·,-q)_j(q)^-1(z) ^1/2^(-q,·)
= ^1/2_j^-1(z_1) - _j(q)^-(z_1) ^1/2,
γ̂_j (z) = _j^⋆_j(z) _j - n_j(z), γ̂_j(q) (z) = _jq^⋆_j(q)(z) _jq - n^(-q)_j(q)(z).
Similarly to the arguments given on page 81 in
<cit.>, we have for any α≥ 2 and any matrix ∈ℂ^p× p
| 1/n_j^⋆_j - |^α≲^⋆^α/2
n^- (2.5 ∧α) η_n^( 2α - 5) ∨ 0
|| ||^α n^- (1.5 ∧α /2) η_n^( 2α - 5) ∨ 0.
§.§ Proof of Theorem <ref> (finite-dimensional distributions of M_n,q^(1))
The proof is divided into several parts. For the sake of simplicity, we will first concentrate on the case q_1 = q_2 = q.
*Step 1: CLT for martingale difference schemes
We aim to show that
∑_i=1^k α_i M_n,q^(1)(z_i) ∑_i=1^k α_i M_q(z_i)
for all α_1 , … , α_k∈ℂ, k∈,
where M_q is the Gaussian process defined in Theorem <ref>.
Using
(z) = _j(z) - β_j(z) _j(z) _j _j^⋆_j(z)
and an analogous formula for _(q)(z),
we note that
M_n,q^(1)(z)
= √(n)∑_j=1^n ( _j - _j- 1 ) [ (z) - _(q)(z) ]
= - √(n)∑_j=1^n ( _j - _j- 1 ) [
β_j(z) _j^⋆_j^-2(z) _j
- β_j(q)(z) _jq^⋆_j(q)^-2(z) _jq].
The identity
β_j(z) = β_j(z)
- β_j^2(z) γ̂_j(z)
+β_j^2(z) β_j(z) γ̂_j^2(z),
and an analog identity for β_j(q)(z) yield the decompositions
(_j - _j - 1 ) β_j(z) 𝐫_j^⋆𝐃_j^-2(z) 𝐫_j
= _jβ_j(z) α_j(z) - β_j^2(z)γ̂_j(z) 1/n (𝐃_j^-2 (z) )
- (_j - _j-1 ) β_j^2(z) γ̂_j(z) α_j(z) - β_j(z) 𝐫_j^⋆𝐃_j^-2(z) 𝐫_jγ̂_j^2(z) ,
(_j - _j - 1 ) β_j(q)(z) 𝐫_jq^⋆𝐃_j(q)^-2(z) 𝐫_jq
= _jβ_j(q)(z) α_j(q)(z) - β_j(q)^2(z)γ̂_j(q)(z) 1/n (^(-q)𝐃_j(q)^-2 (z) )
- (_j - _j-1 ) β_j(q)^2(z) γ̂_j(q)(z) α_j(q)(z) - β_j(q)(z) 𝐫_j^⋆𝐃_j(q)^-2(z) 𝐫_jγ̂_j(q)^2(z) .
By an application of Lemma <ref>, we obtain
M_nq^(1) (z) =
∑_j=1^n Y_jq (z)
+ o_ (1) ,
where the terms in the sum are defined by
Y_jq (z) = - √(n)_j [
β_j(z) α_j(z) - β_j^2(z)γ̂_j(z) 1/n (𝐃_j^-2 (z) )
- ( β_j(q)(z) α_j(q)(z) -
β_j(q)^2(z)γ̂_j(q)(z) 1/n (^(-q)𝐃_j(q)^-2 (z) ) )
]
= - √(n)_j[ ∂/∂ zβ_j(z) γ̂_j(z) - β_j(q)(z) γ̂_j(q)(z) ] .
Thus, it is sufficient to prove asymptotic normality for
the quantity
∑_j=1^n
Z_njq,
where Z_njq = ∑_i=1^k α_i Y_jq(z_i) for 1 ≤ j ≤ n.
For this purpose we verify conditions (5.29) - (5.31) of the central limit theorem for complex-valued martingale difference schemes given in Lemma 5.6 of <cit.>.
It is straightforward to show that for each n∈,
(Z_njq)_1 ≤ j ≤ n
forms
a martingale difference scheme with respect to
the filtration (ℱ_nj)_1 ≤ j ≤ n, where ℱ_nj denotes the σ-field generated by the random vectors _1, …, _j.
We have for 0 < δ≤ 1/2
| Y_jq(z) |^2+δ ≤
n^1+ δ /2{| β_j(z) α_j(z) - β_j(q)(z) α_j(q)(z) |^2+ δ
+ | β_j^2(z)γ̂_j(z) 1/n (𝐃_j^-2 (z) )
- β_j(q)^2(z)γ̂_j(q)(z) 1/n (^(-q)𝐃_j(q)^-2 (z) ) |^2+δ}
= o( n^-1 ),
where we used Lemma <ref> for the first summand and the second one can be handled similarly.
This implies the Lindeberg-type condition (5.31) given in <cit.>, namely
∑_j=1^n | Z_njq|^2
I | Z_njq| > ε≤1/ε^2∑_j=1^n| Z_njq|^ 2 + δ
= 1/ε^2∑_j=1^n| ∑_i=1^k α_i Y_jq(z_i) |^2+ δ = o(1),
as n→∞.
For a proof of condition (5.30), we note that
∑_j=1^n_j-1[ Z_njq^2]
= ∑_i,l=1^k∑_j=1^nα_iα_l_j-1 [ Y_jq(z_i) Y_jq (z_l) ]
As all summands have the same form, it is sufficient to show that for all z_1, z_2∈ℂ with Im(z_1), Im(z_2) ≠ 0
V_n(z_1,z_2) = ∑_j=1^n_j-1[ Y_jq (z_1) Y_jq (z_2) ]
κσ^2(z_1,z_2, q, q) + (ν_4 - κ - 1) τ^2(z_1,z_2, q, q)
for appropriate functions
σ^2(z_1,z_2, q, q) and τ^2(z_1,z_2, q, q). Note that this convergence implies condition (5.29) in <cit.>, since
∑_j=1^n_j-1[ Y_jq (z_1) Y_jq (z_2)]
= ∑_j=1^n_j-1[ Y_jq (z_1) Y_jq (z_2)] κσ^2(z_1,z_2, q, q)+ (ν_4 - κ - 1) τ^2(z_1,z_2, q, q).
Consequently, Lemma 5.6 in <cit.> combined with the Cramer–Wold device
yields the weak convergence of the finite-dimensional distributions to a multivariate normal distribution
with covariance κσ^2(z_1,z_2,q,q) + (ν_4 - κ - 1) τ^2(z_1,z_2, q, q) = (M_q^(1)(z_1),M_q^(1)(z_2)).
*Step 2: Calculation of the variance
Consider the sum
V_n^(0) (z_1,z_2) = n ∑_j=1^n_j-1[ _j β_j(z_1) γ̂_j(z_1) - β_j(q)(z_1) γ̂_j(q)(z_1) _j β_j(z_2) γ̂_j(z_2) - β_j(q)(z_2) γ̂_j(q)(z_2) ].
We use the dominated convergence theorem in combination with (<ref>) to get
∂^2/∂ z_1 ∂ z_2 V_n^(0) (z_1, z_2) = V_n(z_1, z_2).
Similarly to <cit.>, it can be shown that it suffices to prove that V_n^(0)(z_1, z_2) given in (<ref>) converges in probability to a constant; in this case, the mixed partial derivative of its limit gives the limit of V_n(z_1, z_2).
Using (<ref>) and <cit.>, we see that
n | _j-1[ _j β_j(z_1) γ̂_j(z_1) - β_j(q)(z_1) γ̂_j(q)(z_1) _j β_j(z_2) γ̂_j(z_2) - β_j(q)(z_2) γ̂_j(q)(z_2) ]
- _j-1[ _j b_j(z_1) γ̂_j(z_1) - γ̂_j(q)(z_1) _j b_j(z_2) γ̂_j(z_2) - γ̂_j(q)(z_2) ] |
= n | _j-1[ _j β_j(z_1) - b_j(z_1) γ̂_j(z_1) - β_j(q)(z_1) - b_j(z_1) γ̂_j(q)(z_1) _j β_j(z_2) γ̂_j(z_2) - β_j(q)(z_2) γ̂_j(q)(z_2) ]
+ _j-1[ _j b_j(z_1) γ̂_j(z_1) - γ̂_j(q)(z_1) _j β_j(z_2) - b_j(z_2) γ̂_j(z_2)
- β_j(q)(z_2) - b_j(z_2) γ̂_j(q)(z_2) ]
|
= o n^-1.
Consequently, we have
V_n^(0) (z_1, z_2) = V_n^(1) (z_1, z_2) + o_ (1),
where
V_n^(1) (z_1, z_2) =
n ∑_j=1^n b_j(z_1) b_j(z_2) _j-1[ _j [ γ̂_j(z_1) - γ̂_j(q)(z_1) ]
_j [ γ̂_j(z_2) -γ̂_j(q)(z_2) ] ]
= n∑_j=1^n b_j(z_1) b_j(z_2) _j-1[ _j [ _j^⋆_qj(z_1) _j - _qj(z_1) ]
_j [_j^⋆_qj(z_2 ) _j - _qj(z_2) ] ]
= n∑_j=1^n b_j(z_1) b_j(z_2) _j-1[ _j^⋆_j [ _qj(z_1) ] _j - _j [ _qj(z_1) ] _j^⋆_j[ _qj(z_2 ) ] _j - _j [ _qj(z_2) ] ].
Using formula (9.8.6) in <cit.> we see that under a Gaussian-type 4th moment condition (ν_4 =3 for the real case or ν_4 = 2 for the complex case), we have
V_n^(1)(z_1,z_2) = κ V_n^(2) (z_1, z_2) + o_(1),
where
V_n^(2) (z_1, z_2) = 1/n∑_j=1^n b_j(z_1) b_j(z_2) _j [ _qj(z_1) ] _j [ _qj(z_2) ]
and
κ =1 for the complex case and κ = 2 for the real case.
Therefore it suffices to study the limit of
V_n^(2) (z_1, z_2)
(in the real case, we have to multiply this term by 2).
The analysis of V_n^(2)(z_1, z_2) requires a different representation of the differences of resolvents, which is provided in Step 3.
If ν_4 ≠ 3 for the real case or ν_4 ≠ 2 for the complex case, the additional term
W_n (z_1, z_2) = ν_4 - κ - 1/n∑_j=1^n b_j(z_1) b_j(z_2) _j [ _qj(z_1) ] ∘_j [ _qj(z_2) ]
has to be considered, which will be analyzed in Step 5 using assumption <ref>.
*Step 3: Decomposition of the difference of resolvents
Similarly to formula (9.9.12) in <cit.>, we decompose the difference
_j(z) - _j(q)^-(z)
= - z𝐈 - n-1/n b_j(z)
- z 𝐈̃^(-q) - n-1/n b_j(q)(z) ^(-q)^-
+ 𝐗_j(z) + 𝐘_j(z) + 𝐙_j(z),
where
𝐗_j(z) = ∑_i=1
i≠ j ^n{ b_j(z) z𝐈 - n-1/n b_j(z) _i_i^⋆ - n_ij(z)
- b_j(q)(z) z 𝐈̃^(-q) - n-1/n b_j(q)(z) ^(-q)^- _i_i^⋆ - n^(-q)_ij(q)^-(z) },
𝐘_j(z) = ∑_i=1
i≠ j ^n{β_ij (z) - b_j(z) z𝐈 - n-1/n b_j(z) _i_i^⋆_ij (z)
- β_ij(q) (z) - b_j(q)(z) z 𝐈̃^(-q) - n-1/n b_j(q)(z) ^(-q)^-
_i_i^⋆_ij(q)^- (z)
} ,
𝐙_j(z) = n { b_j(z) z𝐈 - n-1/n b_j(z) ∑_i=1
i≠ j ^n_ij (z) - _j (z)
- b_j(q)(z) z 𝐈̃^(-q) - n-1/n b_j(q)(z) ^(-q)^- ^(-q)∑_i=1
i≠ j ^n^-_ij(q) (z) - ^-_j(q) (z) } .
Here, the main difference and challenge
compared to <cit.>
lies in identifying the dominating terms in V_n^(2), since the techniques developed in this reference are not directly applicable due to the different normalizations of the difference of spectral statistics compared to a single eigenvalue statistic.
*Step 4: Analysis of V_n^(2)(z_1, z_2)
To begin with, we will see that
_j [ _qj(z_1) ] _j [ _qj(z_2)]
=
{^1/2_j [ _j^-1(z_1) - _j(q)^-(z_1) ] ^1/2^1/2_j [ _j^-1(z_2) - _j(q)^-(z_2) ] ^1/2}
=
{_j [ _j^-1(z_1) - _j(q)^-(z_1) ] _j [ _j^-1(z_2) - _j(q)^-(z_2) ] }
= 1/z_1 z_2𝐇^Δ_q(z_1) 𝐇^Δ_q (z_2)
+ _j [ 𝐗_j(z_1) ] _j [ _j^-1(z_2) - _j(q)^-(z_2) ] + o_(1),
where we used (<ref>) for the first equality sign.
Here, the remainder does not depend on j and we define
𝐇_q^Δ(z) = 𝐇(z) - 𝐇_q(z),
𝐇(z) = + s(z) ,
𝐇_q(z) = ^(-q) + s(z) ^(-q)^-.
Indeed, (<ref>) follows from (<ref>)
and the estimates
| 𝐇_q^Δ(z_1) _j[ 𝐘_j(z_2) ] |
= o(1),
| 𝐇_q^Δ(z_1) _j[ 𝐗_j(z_2) ] |
= o(1),
| _j[ 𝐘_j(z_1) ] _j [ _j^-1(z_2) - _j(q)^-(z_2) ] | = o(1),
| 𝐇_q^Δ(z_1) _j[ 𝐙_j(z_2) ] |
= o(1),
| _j [ 𝐙_j(z_1) ] _j [ _j^-1(z_2) - _j(q)^-(z_2) ] | = o(1),
which can be obtained by a tedious but straightforward calculation using
(<ref>) and (<ref>).
We continue by analyzing the remaining term involving 𝐗_j(z_1). Similarly to
formula (9.9.17) in <cit.>, we decompose
_j [ 𝐗_j(z_1) ] _j^-1(z_2) - _j(q)^-(z_2)
= ∑_i=1 ^j-1{ b_j(z_1) z_1𝐈 - n-1/n b_j(z_1) _i_i^⋆ - n_j [ _ij(z_1) ]
- b_j(q)(z_1) z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- _i_i^⋆ - n^(-q)_j [ _ij(q)^-(z_1) ] }_j^-1(z_2) - _j(q)^-(z_2)
= X_1j (z_1, z_2) + X_2j (z_1, z_2) + X_3j (z_1, z_2),
where
X_1j (z_1, z_2) = ∑_i=1 ^j-1{ b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1) _i_i^⋆_j [ _ij(z_1) ]
- b_j(q)(z_1) z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- _i_i^⋆_j [ _ij(q)^-(z_1) ] }
× - β_ij(z_2) _ij(z_2) _i _i^⋆_ij(z_2)
+ β_ij(q)(z_2) _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2) ,
= ∑_i=1^j - 1{
- b_j(z_1) β_ij(z_2) _i^⋆_j [ _ij(z_1) ] _ij(z_2) _i _i^⋆_ij(z_2)
z_1 𝐈 - n-1/n b_j(z_1) _i
+ b_j(z_1) β_ij(q)(z_2) _i^⋆_j [ _ij(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2)
z_1 𝐈 - n-1/n b_j(z_1) _i
+ b_j(q)(z_1) β_ij(z_2) _i^⋆_j [ _ij(q)^-(z_1) ] _ij(z_2) _i _i^⋆_ij(z_2)
z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- _i
- b_j(q)(z_1) β_ij(q)(z_2) _i^⋆_j [ _ij(q)^-(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2)
z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- _i} ,
X_2j (z_1, z_2) = - n ∑_i=1 ^j-1{ b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1) _j [ _ij(z_1) ]
- b_j(q)(z_1) z 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- ^(-q)_j [ _ij(q)^-(z_1) ] }
×_j^-1(z_2) - _ij^-1(z_2) - ( _j(q)^-(z_2) - _ij(q)^-(z_2) ) ,
X_3j (z_1, z_2) = ∑_i=1 ^j-1{ b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1) _i_i^⋆ - n_j [ _ij(z_1) ]
- b_j(q)(z_1) z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- _i_i^⋆ - n^(-q)_j [ _ij(q)^-(z_1) ] }
×_ij^-1(z_2) - _ij(q)^-(z_2)
= ∑_i=1 ^j-1{_i^⋆( _j [ _ij(z_1) ] _ij^-1(z_2) - _ij(q)^-(z_2) b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1)
- _j [ _ij(q)^-(z_1) ] _ij^-1(z_2) - _ij(q)^-(z_2) b_j(q)(z_1) z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^-
) _i
- n( _j [ _ij(z_1) ]_ij^-1(z_2) - _ij(q)^-(z_2) b_j(z_1) z_1𝐈 - n-1/n b_j(z_1)
- _j [ _ij(q)^-(z_1) ] _ij^-1(z_2) - _ij(q)^-(z_2) b_j(q)(z_1) z_1 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^-
) }
In the following, we will show that
1/n ∑_j=1^n b_j(z_1) b_j(z_2) X_2j(z_1,z_2) and 1/n ∑_j=1^n b_j(z_1) b_j(z_2) X_3j(z_1,z_2) are asymptotically negligible, while the term 1/n ∑_j=1^n b_j(z_1) b_j(z_2) X_1j(z_1,z_2) contributes to the covariance structure.
We first consider X_2j(z_1, z_2).
Observing (<ref>) and the representation
β_j(z) = b_j(z) - β_j(z) b_j(z) γ_j(z),
we obtain
X_2j (z_1, z_2) = n ∑_i=1 ^j-1{ b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1) _j [ _ij(z_1) ]
- b_j(q)(z_1) z 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- ^(-q)_j [ _ij(q)^-(z_1) ] }
×β_ij(z_2) _ij^-1(z_2) _i _i^⋆_ij^-1(z_2) - β_ij(q)(z_2) _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2) )
= n ∑_i=1 ^j-1{ b_j(z_1) z_1 𝐈 - n-1/n b_j(z_1) _j [ _ij(z_1) ]
- b_j(q)(z_1) z 𝐈̃^(-q) - n-1/n b_j(q)(z_1) ^(-q)^- ^(-q)_j [ _ij(q)^-(z_1) ] }
×( b_ij(z_2) _ij^-1(z_2) _i _i^⋆_ij^-1(z_2) - b_ij(q)(z_2) _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2)
- b_ij(z_2) β_ij(z_2) γ_ij(z_2) _ij^-1(z_2) _i _i^⋆_ij^-1(z_2) + b_ij(q)(z_2) β_ij(q)(z_2) γ_ij(q)(z_2) _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2) ) .
Using (<ref>), this yields
1/n∑_j=1^n
b_j(z_1) b_j(z_2) X_2j (z_1, z_2) = o_ (1).
To bound the term X_3j(z_1,z_2), we employ the following strategy. We denote the summands of X_3,j(z_1, z_2) by X_3,j,i(z_1, z_2), 1≤ i ≤ j -1, and thus, we write
X_3,j(z_1, z_2) = ∑_i=1^j-1 X_3,j,i(z_1, z_2).
We aim to show that for 1≤ j ≤ n
| X_3,j(z_1,z_2) |^2 = ∑_i=1^j -1| X_3,j,i(z_1, z_2) |^2
+ ∑_i,k=1, i≠ k^j -1[ X_3,j,i(z_1, z_2) X_3,j,k(z_1, z_2)] = o(1).
Using (<ref>), we see that ∑_i=1^j -1 | X_3,j,i(z_1, z_2)|^2 =o(1). Thus, it is left to analyze the sum of cross terms. We use (<ref>) to replace the matrices _ij(z), _ij(q)(z), _kj(z), _kj(q)(z) and we use (<ref>) to replace the scalars β_kij(z), β_kij(q)(z), β_ikj(z), β_ikj(q)(z), which gives different types of resulting terms. Here, the matrices _ij(z), _ij(q)(z) and the scalars β_ij(z), β_ijk(z), β_ij(q)(z), β_ijk(q)(z) are defined similarly to _j(z), _j(q)(z) and β_j(z), β_j(q)(z), respectively.
Thus, we get the following representation
∑_i,k=1, i≠ k^j -1[ X_3,j,i(z_1, z_2) X_3,j,k(z_1, z_2)]
=
∑∑_i,k=1, i≠ k^j -1[ T_3,j,i,k(z_1, z_2) ].
Here, the first sum on the right-hand side corresponds to the summation with respect to a finite number of different terms T_3,j,i,k(z_1, z_2).
On the one hand, the expected value of quadratic forms involving only matrices like _ij(z) and _ij(q)(z) and no β-term or either only β_kij(z), β_kij(q)(z) or β_ikj(z), β_ikj(q)(z) is equal to zero. On the other hand, the remaining terms can be shown to be of order o(n^-2) by using the Cauchy–Schwarz inequality and (<ref>).
Let us now consider the contributing term X_1j(z_1, z_2) in more detail. The quantities β_ij(z) and β_ij(q)(z) can be replaced by b_j(z), resulting in a negligible error.
Therefore, using (<ref>), formula (9.9.12) in <cit.>,
recalling the definition of the matrices 𝐇_q^Δ(z), 𝐇(z) in (<ref>),
and using the notation
𝐆(z) = z 𝐈 - n-1/n b_j(z)
,
𝐆_q(z) = z 𝐈̃^(-q) - n-1/n b_j(q)(z) ^(-q)^-,
we investigate
X̃_1j(z_1, z_2)
= b_j(z_1) b_j(z_2) ∑_i=1^j - 1{
- _i^⋆_j [ _ij(z_1) ] _ij(z_2) _i _i^⋆_ij(z_2)
𝐆(z_1) _i
+ _i^⋆_j [ _ij(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2)
𝐆(z_1) _i
+ _i^⋆_j [ _ij(q)^-(z_1) ] _ij(z_2) _i _i^⋆_ij(z_2)
𝐆_q(z_1) _i
- _i^⋆_j [ _ij(q)^-(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2)
𝐆_q(z_1) _i}
= b_j(z_1) b_j(z_2) ∑_i=1^j - 1{
- _i^⋆_j [ _ij(z_1) ] _ij(z_2) - _ij(q)^-(z_2)_i _i^⋆_ij(z_2)
𝐆(z_1) _i
+ _i^⋆_j [ _ij(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2) - _ij(z_2) 𝐆(z_1) _i
+ _i^⋆_j [ _ij(q)^-(z_1) ] _ij(z_2) - _ij(q)^-(z_2) _i _i^⋆_ij(z_2)
𝐆_q(z_1) _i
- _i^⋆_j [ _ij(q)^-(z_1) ] _ij(q)^-(z_2) _i _i^⋆_ij(q)^-(z_2) - _ij(z_2) 𝐆_q(z_1) _i}
= n^-2 b_j(z_1) b_j(z_2) ∑_i=1^j - 1{
- [ _j [ _ij(z_1) ] _ij(z_2) - _ij(q)^-(z_2)]
[ _ij(z_2)
𝐆(z_1) ]
+ [ _j [ _ij(z_1) ] _ij(q)^-(z_2) ]
[ _ij(q)^-(z_2) - _ij(z_2) 𝐆(z_1) ]
+ [ _j [ _ij(q)^-(z_1) ] _ij(z_2) - _ij(q)^-(z_2) ]
[ _ij(z_2)
𝐆_q(z_1) ]
- [ _j [ _ij(q)^-(z_1) ] _ij(q)^-(z_2) ] [ _ij(q)^-(z_2) - _ij(z_2) 𝐆_q(z_1)
] }
+ o_ (1)
= j- 1 /n^2 b_j(z_1) b_j(z_2) {
- [ _j [ _j(z_1) ] _j(z_2) - _j(q)^-(z_2)]
[ _j(z_2)
𝐆(z_1) ]
+ [ _j [ _j(z_1) ] _j(q)^-(z_2) ]
[ _j(q)^-(z_2) - _j(z_2) 𝐆(z_1) ]
+ [ _j [ _j(q)^-(z_1) ] _j(z_2) - _j(q)^-(z_2) ]
[ _j(z_2)
𝐆_q(z_1) ]
- [ _j [ _j(q)^-(z_1) ] _j(q)^-(z_2) ] [ _j(q)^-(z_2) - _j(z_2) 𝐆_q(z_1) ] }
+ o_ (1)
= j- 1 /n^2 b_j(z_1) b_j(z_2) {[ _j [ _j(z_1) ] _j(z_2) - _j(q)^-(z_2)]
[ 𝐆(z_1)
𝐆(z_2) ]
- [ _j [ _j(z_1) ] _j(q)^-(z_2) ]
[ 𝐆(z_1)
𝐆_q(z_2) - 𝐆(z_2) ]
- [ _j [ _j(q)^-(z_1) ] _j(z_2) - _j(q)^-(z_2) ]
[ 𝐆_q(z_1) 𝐆(z_2)
]
+ [ _j [ _j(q)^-(z_1) ] _j(q)^-(z_2) ]
[ 𝐆_q(z_1) 𝐆_q (z_2) - 𝐆(z_2) ] }
+ o_ (1),
= j- 1 /n^2 b_j(z_1) b_j(z_2) {[ _j [ _j(z_1) ] _j(z_2) - _j(q)^-(z_2)]
[ 𝐆(z_1)
𝐆(z_2) ]
- [ _j [ _j(z_1) ] _j^-1(z_2) ]
[ 𝐆(z_1)
𝐆_q(z_2) - 𝐆(z_2) ]
- [ _j [ _j(q)^-(z_1) ] _j(z_2) - _j(q)^-(z_2) ]
[ 𝐆(z_1) 𝐆(z_2)
]
+ [ _j [ _j (z_1) ] _j^-1(z_2) ]
[ 𝐆_q(z_1) 𝐆_q (z_2) - 𝐆(z_2) ] }
+ o_ (1),
= j- 1 /n^2 b_j(z_1) b_j(z_2) {[ _j [ _j(z_1) - _j(q)^-(z_1) ] _j(z_2) - _j(q)^-(z_2)]
[ 𝐆(z_1)
𝐆(z_2) ]
+ [ _j [ _j(z_1) ] _j^-1(z_2) ]
[ 𝐆(z_1) - 𝐆_q(z_1) 𝐆(z_2) - 𝐆_q(z_2) ] }
+ o_ (1)
= ( j- 1 ) b_j(z_1) b_j(z_2) /n^2 z_1 z_2 {[ _j [ 𝐁_qj(z_1) ] 𝐁_qj(z_2) ]
[ 𝐇(z_1)
𝐇(z_2) ]
+ [ _j [ _j(z_1) ] _j^-1(z_2) ]
[ 𝐇_q^Δ(z_1)
𝐇_q^Δ(z_2) ] }
+ o_ (1) .
Combining this with (<ref>), we conclude
_j [ _qj(z_1) ] _j [ _qj(z_2)]
= 1/z_1 z_2𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2)
+ ( j- 1 ) b_j(z_1) b_j(z_2) /n^2 z_1 z_2 {[ _j [ 𝐁_qj(z_1) ] _j [ 𝐁_qj(z_2) ] ]
[ 𝐇(z_1)
𝐇(z_2) ]
+ [ _j [ _j(z_1) ] _j[ _j^-1(z_2) ] ]
[ 𝐇_q^Δ(z_1)
𝐇_q^Δ(z_2) ] }
+ o_(1),
where the negligible terms do not depend on j. This gives
V_n^(2) (z_1, z_2) = (z_1) (z_2) /n
×∑_j=1^n 𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2) + ( j-1) b_j(z_1) b_j(z_2) /n^2[ _j [ _j(z_1) ] _j[ _j^-1(z_2) ] ]
[ 𝐇_q^Δ(z_1)
𝐇_q^Δ(z_2) ] /1 - j- 1 /n a_n(z_1, z_2)
+ o_(1),
where we define
a_n(z_1 , z_2) = (z_1) (z_2) /n[ 𝐇(z_1)
𝐇(z_2) ] .
Recalling that H is the limiting spectral distribution of , we observe for n→∞
a_n(z_1, z_2) → a(z_1, z_2) = y (z_1) (z_2) ∫λ/( 1 + λ(z_1) ) ( 1 + λ(z_2) ) d H(λ).
The term [ _j [ _j(z_1) ] _j[ _j^-1(z_2) ] ] has been studied in Section 9.9 of
<cit.> (see, in particular, formulas (9.9.21) and (9.9.23) in this reference).
These arguments give
1/n[ _j [ _j(z_1) ] _j[ _j^-1(z_2) ] ]
= 1/n z_1 z_2[ 𝐇(z_1)
𝐇(z_2) ] /1 - j - 1/n a_n(z_1, z_2)
+ o_ (1)
= 1/(z_1) (z_2) z_1 z_2 a_n(z_1, z_2) /1 - j - 1/n a_n(z_1,z_2) + o_(1).
Combining this with (<ref>), we have
V_n^(2) =
(z_1) (z_2) 𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2) /n∑_j=1^n 1 + j-1 /na_n(z_1,z_2)/1 - j-1/n a_n(z_1, z_2) /1 - j- 1 /n a_n(z_1, z_2)
+ o_(1)
= (z_1) (z_2) 𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2) /n∑_j=1^n 1 / 1 - j- 1 /n a_n(z_1, z_2) ^2
+ o_(1)
= (z_1) (z_2) 𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2) ∫_0^1 1/ 1 - t a(z_1, z_2) ^2 dt + o_(1)
= (z_1) (z_2) 𝐇_q^Δ(z_1) 𝐇_q^Δ(z_2) /1 - a(z_1, z_2) + o_(1) .
Thus, in the case q_1 = q_2 =q and ν_4 = κ + 1, we have shown that (<ref>) holds true with
σ^2(z_1, z_2, q, q) =
∂^2/∂ z_1 ∂ z_2lim_n→∞(z_1) (z_2) 𝐇_q^Δ(z_1) 𝐇_q^Δ (z_2) /1 - a(z_1, z_2) .
For general 1 ≤ q_1, q_2 ≤ p, we proceed similarly and have that
σ^2(z_1, z_2, q_1, q_2) =
∂^2/∂ z_1 ∂ z_2lim_n→∞(z_1) (z_2) 𝐇_q_1^Δ(z_1) 𝐇_q_2^Δ(z_2) /1 - a(z_1, z_2) .
Note that the existence of the limit on the right-hand side of (<ref>) is guaranteed by assumption <ref> and Lemma <ref>. Thus, it is left to analyze the term W_n(z_1, z_2) if the fourth moment of the data is not of Gaussian type.
Before proceeding with this analysis, a few comments on other representations of the covariance are in order.
In Lemma <ref> below, we show that
𝐇_q_1^Δ(z_1)
𝐇_q_2^Δ(z_2)
= { + (z_1) _q_1q_2
- (z_1) + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-_q_1q_2}
×{ + (z_2) _q_2q_1
- (z_2)
+ s(z_2) ^(-q_1) + s(z_1) ^(-q_1)^-_q_2q_1}.
Note that in the case where is a diagonal matrix, we have
+ s(z_2) ^(-q_2) + s(z_1) ^(-q_2)^-_q_1q_2
= 0 ,
(1 ≤ q_1, q_2 ≤ p)
and
+ (z_1) _q_1q_2 = 0 ,
if 1 ≤ q_1 ≠ q_2 ≤ p.
Consequently, σ^2(z_1, z_2, q_1, q_2) = 0 if
is diagonal and q_1 ≠ q_2.
In the following, we proceed with the final step for the proof of Theorem <ref>.
*Step 5: Analysis of W_n(z_1, z_2)
Recall the definition of W_n(z_1, z_2) in (<ref>), which implicitly also depends on q. In the previous part of this proof, we assumed that q=q_1=q_2.
For general 1 ≤ q_1, q_2 ≤ p, it follows by combining techniques from Step 4, especially the decomposition in (<ref>), with the arguments given in Section 4
of <cit.> that
W_n(z_1, z_2) = ν_4 - κ - 1/n∑_j=1^n z_1 z_2 (z_1) (z_2) _j [ _q_1 j(z_1) ] ∘_j [ _q_2j(z_2) ] + o_(1)
= (ν_4 - κ - 1) (z_1) (z_2) 𝐇_q_1^Δ(z_1) ∘𝐇_q_2^Δ(z_2) + o_(1)
= (ν_4 - κ - 1) (z_1) (z_2) h_q_1, q_2 (z_1, z_2) + o_(1),
where we used assumption <ref>.
Thus, under general moment conditions, we have
(M_q_1^(1)(z_1), M_q_2^(1)(z_2) ) =
κσ^2(z_1, z_2, q_1, q_2) +
( ν_4 - κ - 1) τ^2(z_1, z_2, q_1, q_2),
where σ^2 is defined in (<ref>) and
τ^2(z_1, z_2, q_1, q_2) = ∂^2/∂ z_1 ∂ z_2(z_1) (z_2) h_q_1, q_2(z_1, z_2).
In the case where is a diagonal matrix and 1≤ q_1 ≠ q_2 ≤ p, we have 𝐇_q_1^Δ(z_1) ∘𝐇_q_2^Δ(z_2) = 0, and thus, τ^2(z_1, z_2, q_1, q_2)=0. This implies that the Gaussian processes (M_q_1(z))_z∈𝒞^+ and (M_q_2(z))_z∈𝒞^+ are independent in this case.
§.§ Proof of Theorem <ref> (tightness of M̂_n,q^(1))
In order to prove tightness, we will verify the conditions (i) and (ii) of
Theorem 12.3 in <cit.>.
For (i), it suffices to show that the sequence (M̂_n,q^(1)(z) )_n∈ is tight for some z∈𝒞^+. For any z∈𝒞^+ with Im(z) ≠ 0, this assertion follows from Theorem <ref>.
In order to prove (ii), we will show that
sup_n∈, z_1, z_2 ∈𝒞^+, z_1 ≠ z_2| M̂_n,q^(1)(z_1) - M̂_n,q^(1)(z_2) |^2 /|z_1 - z_2|^2≲ 1,
which is implied by
sup_n∈, z_1, z_2 ∈𝒞_n, z_1 ≠ z_2 | M_n,q^(1)(z_1) - M_n,q^(1)(z_2) |^2 /|z_1 - z_2|^2≲ 1.
This reduction can be shown
by similar arguments as given in Section 7.2
of <cit.> which are omitted for the sake of brevity. Instead, we concentrate on the proof of (<ref>)
and make use of the decomposition
M_nq^(1) (z_1) - M_nq^(1) (z_2) / z_1 - z_2 = √(n)∑_j=1^n ( _j - _j- 1 ) (z_1) - (z_2) - _(q)(z_1) - _(q)(z_2) / z_1 - z_2
= √(n)∑_j=1^n ( _j - _j- 1 ) [ (z_1) (z_2) - _(q)(z_1) _(q)(z_2) ]
= √(n) G_n1 - G_n2 - G_n3 ,
where
G_n1 = ∑_j=1^n ( _j - _j - 1 ) [
β_j(z_1) β_j(z_2) _j^⋆_j(z_1) _j(z_2)
_j ^2
- β_j(q)(z_1) β_j(q)(z_2) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2 ] ,
G_n2 = ∑_j=1^n ( _j - _j - 1 )
[
β_j(z_1) _j^⋆_j^-2(z_1) _j(z_2) _j - β_j(q)(z_1) _jq^⋆_j(q)^-2(z_1) _j(q)(z_2) _jq] ,
G_n3 = ∑_j=1^n ( _j - _j - 1 )
[
β_j(z_2) _j^⋆_j^-2(z_2) _j(z_1) _j
- β_j(q)(z_2) _jq^⋆_j(q)^-2(z_2) _j(q)(z_1) _jq] .
These terms are now investigated separately
beginning with
G_n1 = G_n11 - G_n12 - G_n13,
where
G_n11
= ∑_j=1^n ( _j - _j - 1 ) [
b_j(z_1) b_j(z_2) _j^⋆_j(z_1) _j(z_2)
_j ^2
- b_j(q)(z_1) b_j(q)(z_2) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2
] ,
G_n12
= ∑_j=1^n ( _j - _j - 1 )
[
b_j(z_2) β_j(z_1) β_j(z_2) _j^⋆_j(z_1) _j(z_2)
_j ^2 γ_j(z_2)
- b_j(q)(z_2) β_j(q)(z_1) β_j(q)(z_2) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2 γ_j(q)(z_2) ],
G_n13
= ∑_j=1^n ( _j - _j - 1 ) [
b_j(z_1) b_j(z_2) β_j(z_1) _j^⋆_j(z_1) _j(z_2)
_j ^2 γ_j(z_1)
- b_j(q)(z_1) b_j(q)(z_2) β_j(q)(z_1) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2 γ_j(q)(z_1) ].
In order to find appropriate estimates for these terms, we need some preliminaries.
Note that, similarly to <cit.>, we have
sup_n∈, z∈𝒞_nmax |b_j(z)|, |b_j(q)(z)| ≲ 1.
Similarly to Lemma 7.7.4 in
<cit.>, we obtain from (<ref>) the following lemma via induction.
Let j,m∈_0, α≥ 2 and 𝐀_l, l∈{1,…,m+1} be p × p (random) matrices independent of _j which obey for any α̃≥ 2
|| 𝐀_l ||^α̃ < ∞, l∈{1, …, m+1}.
Then, it holds
| ( ∏_k=1^m _j^⋆𝐀_k _j )
_j^⋆𝐀_m+1_j - n𝐀_m+1|^α≲ n^-((α/2) ∧ 1.5) .
If additionally for any l∈{1, …, m+1}, α̃≥ 2
[ 𝐀_l𝐀_l^⋆]^α̃ < ∞ ,
holds true, then we have
| ( ∏_k=1^m _j^⋆𝐀_k _j )
_j^⋆𝐀_m+1_j - n𝐀_m+1|^α≲
n^-(α∧ 2.5).
For the applications of Lemma <ref>
in the following discussion we note that _j(z), _j(q)^-(z) and similarly defined matrices satisfy condition (<ref>) uniformly over z∈𝒞_n, n∈, which can be shown similarly to Lemma 7.7.3 in <cit.>.
Furthermore, choices like 𝐀_l = _j(z) - _j(q)^-(z)
satisfy condition (<ref>) by combining the observation above with ideas from the proof of Lemma <ref>.
To begin with, we make use of the decomposition G_n11 = G_n111 + G_n112, where
G_n111
= ∑_j=1^n ( _j - _j - 1 )
b_j(z_1) b_j(z_2) {_j^⋆_j(z_1) _j(z_2)
_j ^2 - _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2 }
= ∑_j=1^n ( _j - _j - 1 )
b_j(z_1) b_j(z_2) {_j^⋆_j(z_1) _j(z_2)
_j ^2 - _j^⋆_j(q)^-(z_1) _j(q)^-(z_2)
_j^2 },
G_n112
= ∑_j=1^n ( _j - _j - 1 ) b_j(z_1) b_j(z_2) - b_j(q)(z_1) b_j(q)(z_2) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2.
In order to estimate these terms, we need further preparations.
Recall that we are able to bound the moments of ||_j(z)|| independent of n∈,z∈𝒞_n. Furthermore, let η_r > lim sup_n→∞ |||| ( 1+ √(y))^2 and 0<η_l < lim inf_n→∞λ_p ( ) I_(0,1)(y) ( 1- √(y))^2.
Then, we observe for z∈𝒞_n
|| (z) ||
≲ 1
+ n^3/2ε_n I { |||| ≥η_r or λ_p() ≤η_l}
≤ 1
+ n^3 I { |||| ≥η_r or λ_p() ≤η_l},
where we used the fact that ε_n ≥ n^-α for some α∈ (0,1). From <cit.> we know that for any m>0
{ ||_j(0) || ≥η_r or λ_p(_j(0)) ≤η_l} = o n^-m,
{ |||| ≥η_r or λ_p() ≤η_l}
= o n^-m.
Observing (<ref>), we have for the terms appearing in G_n111
| ( _j - _j - 1 )
b_j(z_1) b_j(z_2) {_j^⋆_j(z_1) _j(z_2)
_j ^2 - _j^⋆_j(q)^-(z_1) _j(q)^-(z_2)
_j^2 }|^2
≲ | ( _j - _j - 1 )
{_j^⋆_j(z_1) _j(z_2)
_j ^2 - _j^⋆_j(q)^-(z_1) _j(q)^-(z_2)
_j^2 }|^2
= | ( _j - _j - 1 )
{_j^⋆_j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) _j
×_j^⋆_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) _j}|^2
≲ | ( _j - _j - 1 )
{[ _j^⋆_j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) _j
- n _j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) ]
×_j^⋆_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) _j}|^2
+ | ( _j - _j - 1 )
n _j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2)
×_j^⋆_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) _j}|^2
≲ | ( _j - _j - 1 )
{[ _j^⋆_j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) _j
- n _j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) ]
×[ _j^⋆_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) _j
- n_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) ] }|^2
+ | ( _j - _j - 1 )
{[ _j^⋆_j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) _j
- n _j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2) ]
×
n_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) }|^2
+ | ( _j - _j - 1 )
n _j(z_1) _j(z_2)
- _j(q)^-(z_1) _j(q)^-(z_2)
×[ _j^⋆_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) _j
- n_j(z_1) _j(z_2)
+ _j(q)^-(z_1) _j(q)^-(z_2) ] }|^2
≲ n^-2,
where we used Lemma <ref>, Hölder's inequality, (<ref>) and (<ref>).
Noting that | b_j(z_1) b_j(z_2) - b_j(q)(z_1) b_j(q)(z_2) | ≲ n^-1,
a similar estimate can be shown for the terms in G_n112, that is,
| ( _j - _j-1 ) [ b_j(z_1) b_j(z_2) - b_j(q)(z_1) b_j(q)(z_2) _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2
] |^2 ≲ n^-2.
Combining these estimates, we obtain | √(n) G_n11 |^2 ≲ 1.
Regarding G_n12, we proceed with the decomposition G_n12=G_n121+G_n122+G_n123, where
G_n121
= ∑_j=1^n ( _j - _j - 1 )
[
{ b_j(z_2) β_j(z_1) β_j(z_2) - b_j(q)(z_2) β_j(q)(z_1) β_j(q)(z_2) }
×_j^⋆_j(z_1) _j(z_2)
_j ^2 γ_j(z_2) ],
G_n122 = ∑_j=1^n ( _j - _j - 1 )
[ b_j(q)(z_2) β_j(q)(z_1) β_j(q)(z_2) {_j^⋆_j(z_1) _j(z_2)
_j ^2
- _jq^⋆_j(q)(z_1) _j(q)(z_2)
_jq^2 }γ_j(q)(z_2) ],
G_n123 = ∑_j=1^n ( _j - _j - 1 )
[ b_j(q)(z_2) β_j(q)(z_1) β_j(q)(z_2)
_j^⋆_j(z_1) _j(z_2)
_j ^2 {γ_j(z_2) - γ_j(q)(z_2) }]
.
Note that by combining (<ref>) with | _j |^2 ≤ n, we obtain
| β_j(z) | = | 1 - _j^⋆(z) _j | ≤ 1 + |_j|^2 || (z) ||
≲ 1 + | _j|^2+ n^4 I { |||| ≥η_r or λ_p() ≤η_l}.
Similarly to these bounds,
we get for any m≥ 1
| γ_j(z) |
= | _j^⋆_j(z) _j - n [ _j(z) ]|
≲ |_j|^2 || _j(z) || + || _j(z) ||
≲ |_j|^2
+ | _j|^2 n^3/2ε_n
I { ||_j(0) || ≥η_r or λ_p(_j(0)) ≤η_l}
+ | _j|^2 n^3/2ε_nℙ{ ||_j(0) || ≥η_r or λ_p(_j(0)) ≤η_l}
≤ | _j|^2 + n^4 I { ||_j(0) || ≥η_r or λ_p(_j(0)) ≤η_l}
+ o n^-m,
where we used (<ref>).
Naturally, similar bounds can be shown for _(q)(z),γ_j(q)(z), β_j(q)(z). Combining these bounds with Lemma <ref>, we get | √(n) G_n12 |^2 ≲ 1.
In the same manner, the remaining terms can be bounded, and the details will be omitted for the sake of brevity.
§.§ Proof of Theorem <ref> (uniform convergence of M_n,q^(2))
Let _n^0 = s_F^y_n,H_n be the Stieltjes transform of F^y_n,H_n, and, similarly, _nq^0 = s_F^(p-1)/n, H_nq.
Here, H_n = F^ denotes the empirical spectral distribution of , and, similarly, we define H_nq = F^^(-q). Moreover,
we denote by _n=s_F^ and _nq=s_F^^(-q) the Stieltjes transforms of F^ and F^^(-q), respectively. Observing
s_n(z) = s_F^Σ (z) = - 1 - y_n/z + y_n s_n(z) ,
_n^0 (z) = s_F^y_n,H_n(z)
= - 1 - y_n/z + y_n s_n^0(z),
and using analogous formulas for the Stieltjes transforms s_nq(z) and s_nq^0(z),
we obtain
M_n,q^(2) (z)
= √(n)[ p s_F^(z) - (p-1) s_F^^(-q)(z) ] -
p _n^0(z) - (p - 1) _nq^0(z)
= n^3/2 [ _n(z) - _nq(z) ] - _n^0(z) - _nq^0(z) .
Define
R_n (z) = - z - 1/ [ _n(z) ] + y_n ∫λ dH_n(λ) /1 + λ [ _n(z)]
= y_n n∑_j=1^n [ β_j( z) d_j (z) ] ( [ _n(z) ] ) ,
d_j (z) = - 𝐪_j^⋆_n_j(z) ( [ _n(z) ] + 𝐈 )𝐪_j
+ 1/p[ ( [ _n (z) ] + 𝐈 )(z) ],
𝐪_j = 1/√(p)𝐱_j,
I_n(z) = y_n ∫λ^2 _n^0(z) dH_n(λ)/( 1 + λ [_n (z) ] ) ( 1 + λ_n^0(z) ) = 1/n_n^0(z) + [ _n(z) ] + _n^0(z) .
Here, the second equality in (<ref>) follows from the proof of Lemma 6.3.6 in <cit.>.
Similarly, we define
R_nq (z) = - z - 1/ [ _nq(z) ] + p/n -1∫λ dH_nq(λ) /1 + λ [ _nq(z)]
= p - 1/n n∑_j=1^n [ β_j(q)( z) d_j(q) (z) ] ( [ _nq(z) ] ) ,
d_j(q) (z) = - 𝐪_jq^⋆ (^(-q))_j(q)^-(z) ( [ _nq(z) ] ^(-q) + 𝐈̃^(-q) )^- (^(-q))𝐪_jq
+ 1/p - 1[ ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^(-q)_(q)^-(z) ],
𝐪_jq = 1/√(p -1 )𝐱_j,
I_nq(z) = p - 1/n∫λ^2 _nq^0(z) dH_nq(λ)/( 1 + λ [_nq (z) ] ) ( 1 + λ_nq^0(z) )
= 1/n_nq^0(z) ^(-q) + [ _nq(z) ] ^(-q) + _nq^0(z) ^(-q) .
Using these definitions, we obtain by a tedious but straightforward calculation <cit.>
M_n,q^(2) (z) =n^3/2{ [ _n(z) ] - _n^0(z) I_n(z) [ _n(z) ]
- [ _nq(z) ] - _nq^0(z) I_nq(z) [ _nq(z) ]
+ R_n(z) [_n(z)] _n^0(z)
- R_nq(z) [_nq(z)] _nq^0(z)
}
= n^3/2{ [ _n(z) - _nq(z) ] - _n^0(z) - _nq^0(z) I_n(z) [ _n(z) ]
+ [ _nq] - _nq^0(z) I_n(z) [ _n(z) ] - I_nq(z) [ _nq(z) ]
+ R_n(z) [ _n(z) ] - R_nq(z) [_nq(z)] _n^0(z)
+ R_nq(z) [_nq(z)] _n^0(z) - _nq^0(z) }
= n^3/2{ [ _n(z) - _nq(z) ] - _n^0(z) - _nq^0(z) I_n(z) [ _n(z) ]
+ [ _nq] - _nq^0(z) I_n(z) - I_nq(z) [ _n(z) ]
+ [ _nq] - _nq^0(z) I_nq(z) [ _n(z) ] - [ _nq(z) ]
+ R_n(z) [ _n(z) ] - R_nq(z) [_nq(z)] _n^0(z)
+ R_nq(z) [_nq(z)] _n^0(z) - _nq^0(z) } ,
which implies
M_n,q^(2) (z) =
1/1 - I_n(z) [ _n(z) ]{
n [ _nq (z) ] - _nq^0(z) √(n) I_n(z) - I_nq(z) [ _n(z) ]
+ n [ _nq] - _nq^0(z) I_nq(z) √(n) [ _n(z) ] - [ _nq(z) ]
+ n^3/2 R_n(z) [ _n(z) ] - R_nq(z) [_nq(z)] _n^0(z)
+ n R_nq(z) [_nq(z)] √(n)_n^0(z) - _nq^0(z) } .
Using similar arguments as given in the derivation of formula (9.11.4)
in <cit.> and the results of page 50 in <cit.>
yields the following uniform convergence results
[ _n(z) ] →(z), [ _nq(z) ] →(z), _n^0(z) →(z), _nq^0(z) →(z),
I_n(z) → I(z) , I_nq(z) → I(z),
n R_nq(z) [_nq(z)] →y ∫^2(z)λ^2/(t (z) λ + 1)^3 dH(λ) /1 - y ∫^2(z)λ^2/( t (z) λ + 1 )^2 dH(λ) for the real case,
0 for the complex case,
n [ _nq (z) ] - _nq^0(z) → y ∫^3(z)λ^2/(t (z) λ + 1)^3 dH(λ) / 1 - y ∫^2(z)λ^2/( t (z) λ + 1 )^2 dH(λ) ^2 for the real case,
0 for the complex case,
as n→∞, where we use the notation
I(z) = y ∫λ^2 (z) dH(λ)/( 1 + λ(z) )^2 .
Thus, it is left to analyze the asymptotic behaviour of
√(n) I_n(z) - I_nq(z) ,
√(n) [ _n(z) ] - [ _nq (z) ] ,
√(n)_n^0(z) - _nq^0(z) ,
n^3/2 R_n(z) [ _n(z)] - R_nq(z) [ _nq(z)] .
Using (<ref>), we note that
√(n) [ _n(z) ] - [ _nq (z) ]
= √(n)_n^0(z) - _nq^0(z) + √(n) [ _n(z) ] - _n^0(z)
- √(n) [ _nq(z) ] - _nq ^0(z)
= √(n)_n^0(z) - _nq^0(z) + o(1),
that is, (<ref>) and (<ref>) share the same asymptotic behaviour.
Thus, it is left to investigate (<ref>), (<ref>) and (<ref>).
*Analysis of the term
(<ref>):
Using (<ref>), we have
√(n)_n^0(z) - _nq^0(z)
= 1/- z + y_n ∫λ/1+λ_n^0(z) dH_n(λ)
- 1/- z + p - 1/n∫λ/1+λ_nq^0(z) dH_nq(λ)
= 1/√(n)_n^0(z) _nq^0(z) + _nq^0(z) ^-
- + _n^0(z)
= 1/√(n)_n^0(z) _nq^0(z) + _nq^0(z) ^-
- + _n^0(z)
- 1/√(n)_n^0(z) _nq^0(z) + _nq^0(z) ^- _qq
= 1/√(n)_n^0(z) _nq^0(z) + _nq^0(z) ^-
- + _n^0(z) + o(1)
= 1/√(n)_n^0(z) _nq^0(z) + _n^0(z) ^-
_n^0(z) - _nq^0(z) + _n^0(z) ^(q,q) + _n^0(z)
+ 1/√(n)_n^0(z) _nq^0(z) + _n^0(z) ^(·, q)
+ o(1)
= 1/√(n)_n^0(z) _nq^0(z) _n^0(z) - _nq^0(z) + _n^0(z) ^-
+ _n^0(z)
+ 1/√(n)_n^0(z) ^2 _nq^0(z) + _n^0(z) ^-
^(q,q) + _n^0(z)
+ o(1)
= 1/√(n)_n^0(z) _nq^0(z) _n^0(z) - _nq^0(z) + _n^0(z) ^-
+ _n^0(z)
+ o(1) .
Note that
_n^0(z) _nq^0(z)/n + _n^0(z) ^-
+ _n^0(z)
= a(z,z) + o(1) ,
where the term a(z,z) is defined in (<ref>). By Lemma <ref> we have |a(z,z)|<1, which
implies √(n) (_n^0(z) - _nq^0(z) ) = o(1).
*Analysis of the term
(<ref>):
It holds uniformly with respect to z∈𝒞_n,
√(n) ( I_n(z) - I_nq (z) )
= 1/√(n)_n^0(z) - _nq^0(z) + [ _n(z) ] + _n^0(z)
+ 1/√(n)_nq^0(z) { + [ _n(z) ] + _n^0(z)
- ^(-q) + [ _nq(z) ] ^(-q) + _nq^0(z) ^(-q)}
= o(1).
For the first term (<ref>), we used the previous result for (<ref>) and the fact
+ [ _n(z) ] + _n^0(z) → y ∫λ^2/(1 + (z) λ)^2 dH(λ).
For the second term, one can proceed similarly to the analysis of (<ref>).
*Analysis of the term (<ref>):
Using (<ref>)
and the representation
β_j(z) = β_j(z) - β_j^2(z) γ̂_j(z)
+ β_j^2(z) β_j(z) γ̂_j^2(z),
as well as similar formulas for β_j(q)(z) and _(q)(z), we obtain
n^3/2 R_n(z) [ _n(z) ] - R_nq(z) [ _nq(z) ]
= - √(n)∑_j=1^n[ y_n β_j(z) {𝐪_j^⋆_j(z)
( [ _n(z) ] + 𝐈 )𝐪_j
- 1/p [ ( [ _n (z) ] + 𝐈 )_j(z) ] }
- p - 1/nβ_j(q)(z) {𝐪_jq^⋆ (^(-q))_j(q)^-(z) ( [ _nq(z) ] ^(-q) + 𝐈̃^(-q) )^- (^(-q))𝐪_j(q)
- 1/p - 1[ ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^(-q)_j(q)^-(z) ] }]
+ 1/√(n)∑_j=1^n[ β_j(z) ( [ _n (z) ] + 𝐈 )[ (z) - _j(z) ]
-
β_j(q)(z) ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^-^(-q)[ _(q)^-(z) - _j(q)^- (z) ] ]
= T_n,1(z)
+ T_n,2(z) + o(1)
uniformly with respect to z∈𝒞_n, where the terms T_n,1 and T_n,2 are defined by
T_n,1(z) = √(n)∑_j=1^n[ y_n β_j^2(z) {𝐪_j^⋆_j(z)
( [ _n(z) ] + 𝐈 )𝐪_j
- 1/p [ ( [ _n (z) ] + 𝐈 )_j(z) ] γ̂_j(z) }
- p - 1/nβ_j(q)^2(z) {𝐪_jq^⋆ (^(-q))_j(q)^-(z) ( [ _nq(z) ] ^(-q) + 𝐈̃^(-q) )^- (^(-q))𝐪_jq
- 1/p - 1[ ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^(-q)_j(q)^-(z) ] }γ̂_j(q)(z) ] ,
T_n,2(z) = - 1/√(n)∑_j=1^n{[ β_j(z) ]
[ β_j(z) _j^⋆_j(z)
( _n(z) + 𝐈) _j(z) _j ]
- [ β_j(q)(z) ]
[ β_j(q)(z) _jq^⋆_j(q)(z)
( _nq(z) ^(-q) + 𝐈) ^(-q)_j(q)(z) _jq]
}.
For this argument, we use the facts
[ β_j(z) {𝐪_j^⋆_j(z) ( [ s_n,t(z) ] + 𝐈 )𝐪_j - 1/p[ ( [ _n (z) ] + 𝐈 )_j(z) ] }]
= 0,
[ β_j(q)(z) {𝐪_jq^⋆ (^(-q))_j(q)^-(z) ( [ _nq(z) ] ^(-q) + 𝐈̃^(-q) )^- (^(-q))𝐪_jq
- 1/p - 1[ ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^(-q)_j(q)^-(z) ] }] =0 ,
and
|
y_n β_j^2(z) β_j(z) {𝐪_j^⋆_j(z)
( [ _n(z) ] + 𝐈 )𝐪_j
- 1/p [ ( [ _n (z) ] + 𝐈 )_j(z) ] γ̂_j^2(z) }
- p - 1/nβ_j(q)^2(z) β_j(q)(z) {𝐪_jq^⋆ (^(-q))_j(q)^-(z) ( [ _nq(z) ] ^(-q) + 𝐈̃^(-q) )^- (^(-q))𝐪_jq
- 1/p - 1[ ( [ _nq (z) ] ^(-q) + 𝐈̃^(-q) )^(-q)_j(q)^-(z) ] }γ̂_j(q)^2(z)
| = o n^-3/2,
which is a consequence of Lemma <ref>.
For the term in (<ref>), we obtain the representation
T_n,1 = √(n)∑_j=1^n[ y_nβ_j^2(z) {𝐪_j^⋆_j(z) ( [ _n(z) ] + 𝐈 )𝐪_j
- 1/p ( [ _n (z) ] + 𝐈 )_j(z) }γ̂_j(z)
- p - 1/nβ_j(q)^2(z) {𝐪_jq^⋆ ( )_j(q)^-(z) ( [ _nq(z) ] + 𝐈̃^(-q) ) ()𝐪_jq
- 1/p - 1 ( [ _nq (z) ] + 𝐈̃^(-q) )^(-q)_j(q)(z) }γ̂_j(q)(z)
]
= √(n) z^2 ^2(z) ∑_j=1^n[ y_n{𝐪_j^⋆_j(z) ( [ _n(z) ] + 𝐈 )𝐪_j
- 1/p ( [ _n (z) ] + 𝐈 )_j(z) }γ̂_j(z)
- p - 1/n{𝐪_jq^⋆ ( )_j(q)^-(z) ( [ _nq(z) ] + 𝐈̃^(-q) ) ()𝐪_jq
- 1/p - 1 ( [ _nq (z) ] + 𝐈̃^(-q) )^(-q)_j(q)(z) }γ̂_j(q)(z)
] + o(1),
where we note that β_j(z) , β_j(z) , b_j(z), β_j(q)(z) , β_j(q)(z) , b_j(q)(z) and similarly defined quantities can be replaced by -z (z) resulting in an asymptotically uniformly negligible error using Lemma <ref> and Lemma 7.1.3 in <cit.>.
Similarly, we have for the term T_n,2 defined in (<ref>)
T_n,2(z) = - z^2 ^2(z)/n^3/2∑_j=1^n[ {_j(z)
( _n(z) + 𝐈)_j(z)
- ^-(z)
( _nq(z) + )^-(z) }] + o(1)
= - z^2 ^2(z)/n^3/2∑_j=1^n[ {_j(z)
( _n(z) + 𝐈)_j(z)
- ^-(z)
( _nq(z) + )^-(z) }] + o(1)
= o(1).
Thus, it is left to show that T_n,1 vanishes asymptotically. Then, equation (9.8.6) in <cit.> gives
T_n,1(z) = κz^2 ^2(z)/n^3/2∑_j=1^n[ {_j(z)
( _n(z) + 𝐈)
- ^-(z)
( _nq(z) + )}_j(z) ]
+ κz^2 ^2(z)/n^3/2∑_j=1^n[ ^-(z)
( _nq(z) + ){_j(z) - _j(q)^-(z) }]
+ z^2 ^2(z) (v_4 - κ - 1) /n^3/2∑_j=1^n[ {_j(z)
( _n(z) + 𝐈)
- ^-(z)
( _nq(z) + )}∘_j(z) ] + o(1)
+ z^2 ^2(z) (ν_4 - κ - 1 ) /n^3/2∑_j=1^n[ ^-(z)
( _nq(z) + )∘{_j(z) - _j(q)^-(z) }]
+ o(1)
=o(1).
Here, 𝐀∘𝐁 denotes the Hadamard product of two p × p matrices 𝐀 and 𝐁 and we have used the inequality | tr( 𝐀∘𝐁 ) | ≤ ( tr( 𝐀𝐀^⋆ ) tr( 𝐁𝐁^⋆ ) )^1/2. Thus, we have T_n,1(z) + T_n,2(z) = o(1), which proves (<ref>) and completes the proof of Theorem <ref>.
§.§ Proofs of Theorem <ref> and results in Section <ref>
§.§.§ Proof of Theorem <ref>
Combining Theorem 1.4 of <cit.> and Theorem <ref>, the crucial point is to prove the asymptotic independence. Since the limiting distributions are Gaussian, it suffices to show that
lim_n→∞ ( X_n(f_1), X_n(f_2,q) ) = 0.
This is implied by the convergence
lim_n→∞ ( M_n,q^(1)(z_1), M_n^(1)(z_2) ) = 0, z_1, z_2 ∈ℂ^+,
where M_n,q^(1)(z) is defined in (<ref>) and
M_n(z) = p s_F^ (z) - s_F^y_n, H_n (z) .
Following the discussion of Section 4 in <cit.> and Section <ref>, we need to verify that
lim_n→∞κ/n^3/2∑_j=1^n b_j(z_1) b_j(z_2) _j [ _qj(z_1) ] _j [ _j(z_2) ] = 0,
lim_n→∞ν_4 - κ - 1/n^3/2∑_j=1^n b_j(z_1) b_j(z_2) _j [ _qj(z_1) ] ∘_j [ _j(z_2) ] = 0.
These results are a consequence of the inequalities
| _j [ _qj(z_1) ] _j [ _j(z_2) ] | ≲ 1,
| _j [ _qj(z_1) ] ∘_j [ _j(z_2) ] | ≲ 1,
which follow by similar arguments as given in the proof of Lemma <ref> below. Thus, the proof of Theorem <ref> is complete.
§.§.§ Proof of Proposition <ref>
The proof follows the idea of <cit.>, where the analogous formula for linear spectral statistics of sample covariance matrices was derived.
The fact that the random variables X(f_1, q_1) and X(f_2, q_2) are uncorrelated for two distinct integers q_1, q_2 follows from the proof of Theorem <ref>. Recall that we showed that σ^2 ( z_1, z_2, q_1, q_2) = τ^2 (z_1, z_2, q_1, q_2) = 0 if is a diagonal matrix and q_1 ≠ q_2.
Let us now consider the case q_1 = q_2 =q.
For _n = 𝐈, we have
σ^2(z_1, z_2, q, q) = (1 + (z_1) + (z_2) + (1 + y) (z_1) (z_2) ) '(z_1) '(z_2) /(1 + (z_2) +
(z_1) + (1 - y) (z_1) (z_2) )^3,
τ^2(z_1, z_2, q, q) = '(z_1) '(z_2) /(1 + (z_1) )^2 (1 + (z_2) )^2.
In order to calculate the contour integrals giving the covariance structure, we define two non-overlapping contours through
z_j = z_j(ξ_j) = 1 + h ξ_j + h r_jξ_j + h^2 , j=1,2, r_2 > r_1 > 1, | ξ_j | =1.
It can be checked that when ξ_j runs anticlockwise on the unit circle, z_j will run a contour 𝒞_j enclosing the interval [ (1 - h)^2, (1+h)^2], j∈{1,2}.
Indeed, it suffices for 𝒞_1 and 𝒞_2 to enclose this interval, and we may neglect the discrete part at the origin appearing in the case y ≥ 1 <cit.>.
Using the identity (<ref>), we have for z∈𝒞, j∈{ 1 ,2}
(z_j) = - 1/t ( 1 + h r_j ξ_j ) ,
d z_j = h (r_j - r_jξ_j^-2 ) dξ_j.
Combining this with (<ref>), (<ref>) and (<ref>), we get the desired formula for the covariance.
§.§.§ Proof of Corollary <ref>
We will use Theorem <ref> and Proposition <ref> to prove the assertion.
Let us first check that all assumptions of Theorem <ref> are satisfied. Besides <ref> and <ref>, the remaining conditions are also satisfied since _n = 𝐈 (see Remark <ref>).
We continue with the calculation of the centering term.
Using Example 2.11 in <cit.>, we obtain
p ∫log x d F^y_n (x) =
- p - n log ( 1 - y_n ) + p log 1 - y_n ,
and a Taylor expansion implies
p ∫log x d F^y_n (x)
- (p-1) ∫log x d F^(p - 1)/n (x)
= - 1 + (n - p) log( ( 1 - (p - 1)/n ) / ( 1 - p/n ) ) + log( 1 - (p - 1)/n )
= log( (n - p + 1)/n ) + 𝒪 ( n^-1 ).
This implies
X_n( log (·), q_1)
= √(n)( log| _n | - log| _n^(-q_1)| - log( n - p + 1/n) ) + o(1).
Similarly, by using
p ∫ x d F^y_n (x) = p, p ∫ x^2 d F^y_n (x) = p ( 1 + y_n ),
we obtain the other centering terms.
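The following short numerical check (illustrative only; the dimensions are arbitrary) confirms the expansion above by comparing the exact logarithmic centering difference with log((n - p + 1)/n).

```python
import numpy as np

def log_centering(p, n):
    """p * int log(x) dF^{p/n}(x) = -p - (n - p) * log(1 - p/n), cf. the display above."""
    return -p - (n - p) * np.log(1.0 - p / n)

n, p = 1000, 400
exact = log_centering(p, n) - log_centering(p - 1, n)
approx = np.log((n - p + 1) / n)
print(exact, approx, exact - approx)   # the error is of order 1/n
```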
Note that (X(f_1,q_1) , X(f_2,q_2) ) = 0, q_1 ≠ q_2 by Proposition <ref>.
Using the representation in Proposition <ref>, it has become a standard task in the literature to calculate the resulting integrals using the residue theorem (see, e.g., <cit.>). Thus, the detailed calculation of (X( f ,q_1), X(f, q_1)), f(x)=log(x), f(x)=x or f(x)=x^2 is omitted for the sake of brevity.
§ AUXILIARY RESULTS
The following lemma ensures that the process (M̂_n,q(z))_z∈𝒞^+ defined in (<ref>) provides an appropriate approximation for the process (M_n,q(z))_z∈𝒞^+.
Let i∈{1,2}.
It holds with probability 1
| ∫_𝒞 f_i(z) ( M_n,q(z) - M̂_n,q(z) ) dz | = o(1), as n→∞, i=1,2.
The proof
follows by similar arguments
as given in the proof of Lemma 6.4.3
in <cit.> and the details are omitted for the sake of brevity. Note that M_n,q and its approximation M̂_n,q include an extra factor √(n) compared to M_n and M̂_n in <cit.>. This is accounted for by the definition of M̂_n,q in (<ref>).
The following bound is crucial: Due to the specific structure of _qj(z) as a difference of similar matrices, its trace can be shown to be of constant order instead of order n for each single summand. More precisely, we have the following result.
It holds for all α≥ 1
_qj(z) _qj(z)^⋆ ^α≲ 1,
where
_qj(z) = ^1/2_j(z) ^1/2 - ^1/2^(·,-q)_j(q)^-1(z) ^1/2^(-q,·).
To begin with, we note that
_qj(z) = ^1/2_j(z) - _j(q)^-(z) ^1/2.
Here, _j(q)^-(z) denotes the p× p dimensional matrix which has zeros in its qth row and column and otherwise the entries of the (p-1)× (p-1) dimensional matrix _j(q)(z) and similarly, _j(q)(z) denotes the corresponding version of _j(q)(z).
The p× p matrix _j(z) ^(q,·) contains the qth row of _j(z) and is elsewhere filled with zeros. We have
_j(q)^-(z) _j(q)(z) _j(z)
= _j(z) - _j(z) ^(q,·) ,
which yields for the difference of the resolvents
_j(z) - _j(q)^-(z)
=_j(q)^-(z) _j(q)(z) _j(z)
- _j(q)^-(z) _j(z) _j(z) + _j(z) ^(q,·)
= _j(q)^-(z) _j(q)(z) - _j(z) _j(z)
+ _j(z) ^(q,·).
Note that the difference _j(z) - _j(q)(z) contains the qth row and column of - z and is elsewhere filled with zeros. If ^(q,q)∈ℂ^p× p denotes any matrix with bounded spectral norm and non-zero entries only in the qth row and column and ∈ℂ^p× p is another matrix with bounded spectral norm, then
^(q,q) =
^(q,q)_qq + ^(q,q)^⊤^⊤_qq≲ || ^(q,q) || · || || ≲ 1.
We have
_qj(z) _qj(z)^⋆
= _qj(z) _qj(z )
= _j(z) - _j(q)^-(z) _j(z) - _j(q)^-(z) .
Note that the spectral norm of _j(z) and similarly defined matrices is bounded.
As the spectral norm of is bounded almost surely, the quantity _q1(z) _q1 (z)^⋆ is seen to be bounded almost surely using (<ref>) and (<ref>) (note that this bound is independent of j, n or p).
It holds for n→∞
| √(n)∑_j=1^n
(_j - _j-1 ) {β_j^2(z) γ̂_j(z) α_j(z) - β_j(z) 𝐫_j^⋆𝐃_j^-2(z) 𝐫_jγ̂_j^2(z)
- β_j(q)^2(z) γ̂_j(q)(z) α_j(q)(z) - β_j(q)(z) 𝐫_j^⋆𝐃_j(q)^-2(z) 𝐫_jγ̂_j(q)^2(z) }|^2
= o(1)
We restrict ourselves to a proof of
| √(n)∑_j=1^n
(_j - _j-1 )
{β_j^2(z) γ̂_j(z) α_j(z)
- β_j(q)^2(z) γ̂_j(q)(z) α_j(q)(z)
}|^2 =o(1).
Using similar arguments for the remaining terms, the assertion of Lemma <ref> follows.
For a proof of (<ref>), we decompose
β_j^2(z) γ̂_j(z) α_j(z)
- β_j(q)^2(z) γ̂_j(q)(z) α_j(q)(z)
= (β_j^2(z) - β_j(q)^2(z) ) γ̂_j(z) α_j(z)
- β_j(q)^2(z) γ̂_j(q)(z) α_j(q)(z)
- γ̂_j(z) α_j(z)
= T_1,j - T_2,j + T_3,j,
where
T_1,j = (β_j(z) - β_j(q)(z) )
(β_j(z) + β_j(q)(z) ) γ̂_j(z) α_j(z),
T_2,j = β_j(q)^2(z) {γ̂_j(q)(z) - γ̂_j(z) }α_j(q)(z) ,
T_3,j =β_j(q)^2(z) γ̂_j(z) {α_j(z) - α_j(q)(z) } .
Considering the first term, we write
- T_1,j = nβ_j(z) β_j(q)(z)
_qj(z)
(β_j(z) + β_j(q)(z) ) γ̂_j(z) α_j(z).
Using Lemma <ref> and (<ref>), we obtain
| √(n)∑_j=1^n ( _j - _j - 1 ) T_1,j |^2
≲ n∑_j=1^n | γ̂_j (z) α_j(z) |^2
= o(1).
Using again Lemma <ref> and (<ref>), it follows for the second term
| √(n)∑_j=1^n ( _j - _j - 1 ) T_2,j |^2
≲ n ∑_j=1^n | n_j^⋆_qj(z) _j - n_qj(z) |^4 | α_j(q)(z) |^4
≲ n^2 n^-2.5η_n n^-1.5 = o(1).
Similarly to Lemma <ref>, it can be shown that for any α≥ 1
_qj(z) _qj(z)^⋆ ^α≲ 1.
Combining this with the estimate (<ref>), we obtain for the third term
| √(n)∑_j=1^n ( _j - _j - 1 ) T_3,j |^2
≲
n ∑_j=1^n | n_j^⋆_qj(z) _j - n_qj(z) |^4 | γ̂_j(z) |^4
≲ n^2 n^-2.5η_n n^-1.5 = o(1).
It holds for sufficiently large n∈ and any 0 < δ≤ 1/2
max_1 ≤ j ≤ n| √(n)β_j(z) α_j(z) - β_j(q)(z) α_j(q)(z)|^2+ δ≲ n^- ( 1 + δ/2) .
We decompose
β_j(z) α_j(z) - β_j(q)(z) α_j(q)(z)
= β_j(z) - β_j(q)(z) α_j(z)
- β_j(q)(z) α_j(q)(z)
- α_j(z)
= - T_4,j - T_5,j ,
where
T_4,j = nβ_j(z) β_j(q)(z)
_qj(z) α_j(z),
T_5,j = β_j(q)(z) α_j(q)(z)
- α_j(z) .
Using (<ref>) and Lemma <ref>, it holds
| √(n) T_4,j|^2+δ≲ n^-(2+δ)| √(n)α_j(z) |^2+δ≲ n^-(2+δ) .
For the second term, we obtain using similar arguments
| √(n) T_5,j|^2+δ≲ n^1+δ/2| n_j^⋆_qj(z) _j - n_qj(z) | ^2+ δ≲ n^1+ δ /2
n^- ( 2 + δ) = n^- ( 1 + δ/2).
It holds
𝐇_q_1^Δ(z_1)
𝐇_q_2^Δ(z_2)
= { + (z_1) _q_1q_2
- (z_1) + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-_q_1q_2}
×{ + (z_2) _q_2q_1
- (z_2)
+ s(z_2) ^(-q_1) + s(z_1) ^(-q_1)^-_q_2q_1}.
First, using the formula - = ( - ) and observing that
^(-q) + s(z) ^(-q)^-^(-q) + s(z) ^(-q) = ^(-q)≠
we rewrite the difference
𝐇^Δ_q(z) = + s(z) -
^(-q) + s(z) ^(-q)^-
= ^(-q) + s(z) ^(-q)^-^(-q) + s(z) ^(-q) + s(z)
- ^(-q) + s(z) ^(-q)^- + s(z) + s(z) + + s(z) ^(q, ·)
= ^(-q) + s(z) ^(-q)^-^(-q) + s(z) ^(-q) - + s(z) + s(z)
+ + s(z) ^(q, ·)
= - ^(-q) + s(z) ^(-q)^- + s(z) ^(q,q) + s(z)
+ + s(z) ^(q, ·)
= - s(z) ^(-q) + s(z) ^(-q)^-^(q,q) + s(z)
+ + s(z) ^(q, ·)
where + s(z) ^(q, ·) denotes the p× p matrix containing the qth row of + s(z) and is elsewhere filled with zeros.
In the following, we will calculate the terms appearing when inserting the representation (<ref>) for 𝐇^Δ_q(z) in 𝐇_q_1^Δ(z_1) 𝐇_q_2^Δ(z_2). Note that
( + s(z_1) ^(q_1, ·) + s(z_2) ^(q_2, ·) )
= ∑_i,l=1^p _iq_1 + s(z_1) _q_1l_lq_2 + s(z_2) _q_2i
= + (z_1) _q_1q_2 + (z_2) _q_2q_1.
Next, we have for 1 ≤ i,j ≤ p
^(-q) + s(z) ^(-q)^-^(q,q)_ij
= ∑_l=1^p ^(-q) + s(z) ^(-q)^-_il_lqδ_qj.
As a consequence, we have for 1≤ i,l ≤ p
^(-q) + s(z) ^(-q)^-^(q,q) + s(z) + s(z) ^(q, ·)_il
= ^(-q) + s(z) ^(-q)^-^(q,q)_iq + s(z) _ql.
Next, we have
^(-q_1) + s(z_1) ^(-q_1)^-^(q_1, q_1) + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-^(q_2, q_2) + s(z_2)
= ∑_i,j,k,l=1^p
_ij^(-q_1) + s(z_1) ^(-q_1)^-^(q_1,q_1) + s(z_1) _jk
×_kl^(-q_2) + s(z_2) ^(-q_2)^-^(q_2,q_2) + s(z_2) _li
= ∑_i,j,k,l=1^p
_ij^(-q_1) + s(z_1) ^(-q_1)^-^(q_1,q_1)_jq_1 + s(z_1) _q_1k
×_kl^(-q_2) + s(z_2) ^(-q_2)^-^(q_2,q_2)_lq_2 + s(z_2) _q_2i
= + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-^(q_2,q_2)_q_1q_2
× + s(z_2) ^(-q_1) + s(z_1) ^(-q_1)^-^(q_1,q_1)_q_2q_1
= + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-_q_1q_2 + s(z_2) ^(-q_1) + s(z_1) ^(-q_1)^-_q_2q_1
For the mixed terms, we see that
{ + s(z_1) ^(q_1, ·)^(-q_2) + s(z_2) ^(-q_2)^-^(q_2,q_2) + s(z_2) }
= ∑_i,j,k=1^p
_iq_1 + s(z_1) _q_1j_jk^(-q_2) + s(z_2) ^(-q_2)^-^(q_2,q_2)_kq_2 + s(z_2) _q_2i
= + s(z_1) ^(-q_2) + s(z_2) ^(-q_2)^-_q_1q_2 + (z_2) _q_2q_1,
and, thus,
{^(-q_1) + s(z_1) ^(-q_1)^-^(q_1,q_1) + s(z_1) + s(z_2) ^(q_2, ·)}
= + s(z_2) ^(-q_1) + s(z_1) ^(-q_1)^-_q_2q_1 + (z_1) _q_1q_2.
Combining these calculations with (<ref>) concludes
the proof.
It holds that
sup_n∈ℕsup_z∈𝒞_n | a_n(z,z) | <1,
where a_n(z,z) is defined in (<ref>).
In Lemma 7.1.7 of <cit.>, it is shown that |a_n(z,z)|<1 holds point-wise for each z∈𝒞^+,
and we will extend this bound
to a uniform bound with respect to z∈𝒞_n, n∈ℕ.
From the proof of this lemma, it follows that it is sufficient to show that
inf_n∈ℕinf_z∈𝒞_n (z) /_n^0(z) y_n∫λ^2 dH_n(λ)/| 1 + λ_n^0(z) |^2 >0 .
We note that
y_n∫λ^2 dH_n(λ)/| 1 + λ_n^0(z) |^2 = 1/n ( + _n^0(z) )( + (_n^0(z)) )
≲ || ||^2 || ( + _n^0(z) ) ||^2 ≲ 1,
where we used Lemma 7.7.2 of <cit.>. Applying Lemma 7.5.1 in <cit.>, the assertion in (<ref>) follows.
Acknowledgements.
This work was partially supported by the
DFG Research unit 5381 Mathematical Statistics in the Information Age, project number 460867398. The authors would like to thank Giorgio Cipolloni and László Erdős for some helpful discussions.
|
http://arxiv.org/abs/2306.07595v1
|
20230613074135
|
Investigating $D_s^+ \to π^0 \ell^+ ν_\ell$ decay process within QCD sum rule approach
|
[
"Hai-Jiang Tian",
"Hai-Bing Fu",
"Tao Zhong",
"Xuan Luo",
"Dan-Dan Hu",
"Yin-Long Yang"
] |
hep-ph
|
[
"hep-ph"
] |
[email protected] (corresponding author)
[email protected]
Department of Physics, Guizhou Minzu University, Guiyang 550025, P.R.China
School of Physics, Southeast University, Nanjing 210094, China
Department of Physics & Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China
Department of Physics, Guizhou Minzu University, Guiyang 550025, P.R.China
In this paper, the semileptonic decays D_s^+ →π^0ℓ^+ ν_ℓ with ℓ=(e,μ) are investigated by using the light-cone sum rule approach. Firstly, the neutral meson mixing scheme between π^0, η, η^' and pseudoscalar gluonium G is discussed in a unified way, which leads to the direct connection between two different channels for D_s^+→π^0ℓ^+ν_ℓ and D_s^+ →ηℓ^+ν_ℓ by the π^0-η mixing angle. Then we calculated the D_s→π^0 transition form factors (TFFs) within QCD light-cone sum rule approach up to next-to-leading order correction. At the large recoil point, we have f_+^D_s^+π^0(0)=0.0113_-0.0019^+0.0024 and f_-^D_s^+π^0(0)=0.0020_-0.0009^+0.0008. Furthermore, the TFFs are extrapolated to the whole physical q^2-region by using the simplified z(q^2)-series expansion. The behaviors of TFFs and related three angular coefficient functions a_θ_ℓ(q^2), b_θ_ℓ(q^2) and c_θ_ℓ(q^2) are given. The differential decay widths for D_s^+ →π^0ℓ^+ ν_ℓ with respect to q^2 and cosθ_ℓ are displayed, and also lead to the branching fractions B(D_s^+→π ^0e^+ν_e) =2.60_-0.51^+0.57× 10^-5 and B(D_s^+→π ^0μ^+ν _μ )= 2.58_-0.51^+0.56× 10^-5. These results show well agreement with the recent BESIII measurements and theoretical predictions. Then the differential distributions and integrated predictions for three angular observables, i.e. forward-backward asymmetries, q^2-differential flat terms and lepton polarization asymmetry are given separately. Lastly, we estimate the ratio for different decay channels R_π ^0/η^ℓ=1.108_-0.071^+0.039× 10^-3.
13.25.Hw, 11.55.Hx, 12.38.Aw, 14.40.Be
Investigating D_s^+ →π^0 ℓ^+ ν_ℓ decay process within QCD sum rule approach
Yin-Long Yang
July 31, 2023
===========================================================================
§ INTRODUCTION
Since the development of QCD revealed the observed mixing pattern of isospin mesons, the meson mixing effect has been recognized as a topic of considerable interest, as it can explain the disparity between the valence states of I=0 pseudoscalar and vector mesons <cit.>. Two schemes are frequently adopted in dealing with meson mixing: the octet-singlet mixing scheme and the quark-flavor mixing scheme. Both have been extensively investigated from the experimental side <cit.> and the theoretical side <cit.>. In order to further understand the dynamics and hadronic structure, one notes that mixing among pseudoscalar mesons leads to the QCD anomaly and is connected with chiral symmetry breaking. Without a doubt, one could understand the dynamics more clearly if the mixing parameters were determined with better fidelity. On the other hand, when neutral mesons have the same quantum numbers and hidden flavors, they mix with each other through the strong and electromagnetic interactions, which can also be used to explain some particular heavy meson decay processes, such as the π^0-η <cit.>, η-η^' <cit.>, ω-ϕ <cit.>, and ρ-ω <cit.> systems.
Recently, in 2022, the BESIII collaboration reported its first search for the D_s^+ →π^0 e^+ ν_e decay process and set an upper limit on the branching fraction of 6.4×10^-5, using a data sample of electron-positron collisions corresponding to an integrated luminosity of 6.32 fb^-1 at center-of-mass energies between 4.178 and 4.226 GeV <cit.>. The D_s^+ →π^0e^+ ν_e decay process can be investigated by using the neutral meson mixing scheme, which resembles the D_s^+→ω e^+ν_e process. This process can occur via π^0-η meson mixing and nonperturbative weak annihilation (WA), and therefore provides an excellent platform for studying the meson mixing effect and its associated properties <cit.>. The D_s^+ meson, composed of a cs̅ system, decays into a π^0 meson, which is expected to proceed through the small admixture of ss̅ in the wave function of the π^0 meson that originates from π^0-η mixing. Due to the Okubo-Zweig-Iizuka (OZI) rule <cit.> and isospin violation <cit.>, the WA effect in the D_s^+ →π^0e^+ ν_e decay process is suppressed, and its contribution is only of order 10^-7-10^-8 <cit.>. Meanwhile, the π^0-η mixing contribution to the process D_s^+ →π^0e^+ ν_e reaches order 10^-5, which is significantly larger than the WA effect. Thus, one can investigate the π^0-η mixing effect more accurately than the WA effect. According to the neutral meson mixing scheme, a relationship between the D_s^+→π and D_s^+ →η transition form factors (TFFs) and the mixing angle δ can be obtained. So the full analytical expression for the TFFs should be taken into consideration.
At present, there are various approaches to study TFFs, such as lattice QCD (LQCD) <cit.>, the traditional or covariant light-front quark model (LFQM) <cit.>, the constituent quark model (CQM) <cit.>, the covariant confined quark model (CCQM) <cit.>, QCD sum rules (QCDSR) <cit.> and light-cone sum rules (LCSR) <cit.>. Among these approaches, the LCSR affords an efficient method for making predictions for exclusive processes, which allows one to incorporate information on the high-energy asymptotic behavior of QCD correlation functions through the light-cone distribution amplitudes (LCDAs). The key ingredient of the TFFs is therefore the meson LCDAs, which are related to nonlocal light-ray operators sandwiched between the hadronic state and the vacuum. In this paper, the η-meson twist-2 LCDA, which gives the main contribution, is calculated by using the QCDSR approach under the background field theory.
The rest of the paper is organized as follows: In Sec. <ref>, we present the basic idea of the neutral meson mixing mechanism and the decays D_s^+→π^0 ℓ^+ν_ℓ, and also give the TFFs for the transition D_s^+→π^0. In Sec. <ref>, we present the numerical analysis. Section <ref> is a brief summary.
§ THEORETICAL FRAMEWORK
The decay process D_s^+ →π^0 ℓ^+ ν_ℓ can be represented by the four typical diagrams shown in Fig. <ref>. The subdiagram in Fig. <ref>(a) corresponds to π^0-η mixing, while Figs. <ref>(b)-(c) are the nonperturbative WA effects with the radiation of a π^0 meson. The neutral meson mixing contribution is dominant, and its particular property is that it mixes π^0, η, η^', G pairwise in a unified way <cit.>. The four physical states are taken to be linear combinations of these flavor bases, i.e.
[ [ π ^0; η; η '; η _G; ]] =V[ [ π_q^0; η_q; η_s; G; ]],
where π_q^0=(uu̅-dd̅)/√(2), η_q = (uu̅+dd̅)/√(2), η_s = ss̅, and G is the pure pseudoscalar gluonium. Meanwhile, the (4× 4) real matrix V should have 6 independent parameters to remain unitary, and these can be regarded as the mixing angles between these mesons. Since the π^0_q-η_s and π^0_q-G mixings are isospin-violating and there exists a large mass gap between the states, the corresponding mixing angles tend to zero approximately. Meanwhile, the η_q-G mixing angle also tends to zero; a detailed analysis is given in Ref. <cit.>. For the other three mixing angles, we can explicitly write the sub-mixing matrices as follows <cit.>:
V_1(π^0-η_q) = [
+cosδ -sinδ 0 0
+sinδ +cosδ 0 0
0 0 1 0
0 0 0 1
],
V_2( η_q-η_s ) = [ 1 0 0 0
0 +cosϕ -sinϕ 0
0 +sinϕ +cosϕ 0
0 0 0 1
],
V_3( η_s-G ) = [ 1 0 0 0
0 1 0 0
0 0 +cosϕ_G +sinϕ_G
0 0 -sinϕ_G +cosϕ_G
].
Here, the symbol δ stands for the mixing angle of π^0_q and η_q, ϕ denotes the mixing angle of η_q and η_s, and ϕ_G represents the mixing angle of η_s and G. After combining the above three mixing matrices, i.e. V=V_3V_1V_2, one can get the following expression
V≃ [
1 -δcosϕ +δsinϕ 0
δ +cosϕ -sinϕ 0
0 +cosϕ_Gsinϕ +cosϕ_Gcosϕ +sinϕ_G
0 -sinϕ_Gsinϕ -sinϕ_Gcosϕ +cosϕ_G
].
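As a quick numerical sanity check of this approximate form, the following sketch multiplies the three rotation matrices in the order V=V_3V_1V_2 and compares the product with the small-δ expression above; the angle values used here are purely illustrative placeholders, not fitted results, and the residual is of order δ^2.

```python
import numpy as np

# Illustrative (not fitted) mixing angles
delta, phi, phi_G = 0.023, np.deg2rad(40.0), np.deg2rad(25.0)
c, s = np.cos, np.sin

V1 = np.array([[ c(delta), -s(delta), 0, 0],   # pi0_q - eta_q rotation
               [ s(delta),  c(delta), 0, 0],
               [ 0,         0,        1, 0],
               [ 0,         0,        0, 1]])
V2 = np.array([[1, 0,       0,      0],        # eta_q - eta_s rotation
               [0, c(phi), -s(phi), 0],
               [0, s(phi),  c(phi), 0],
               [0, 0,       0,      1]])
V3 = np.array([[1, 0, 0,          0],          # eta_s - G rotation
               [0, 1, 0,          0],
               [0, 0, c(phi_G),   s(phi_G)],
               [0, 0, -s(phi_G),  c(phi_G)]])

V_exact = V3 @ V1 @ V2
V_approx = np.array([[1,      -delta*c(phi),     delta*s(phi),     0],
                     [delta,   c(phi),          -s(phi),           0],
                     [0,       c(phi_G)*s(phi),  c(phi_G)*c(phi),  s(phi_G)],
                     [0,      -s(phi_G)*s(phi), -s(phi_G)*c(phi),  c(phi_G)]])
print(np.max(np.abs(V_exact - V_approx)))   # O(delta^2), negligible for small delta
```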
The mixing angle δ is small due to isospin violation. Then, we can get the following four equations by comparing Eqs. (<ref>) and (<ref>), respectively:
|π ^0⟩ =|π_q^0⟩ -δcosϕ |η_q⟩ +δsinϕ |η_s⟩,
|η⟩ =δ |π_q^0⟩ +cosϕ |η_q⟩ -sinϕ |η_s⟩,
|η'⟩ =cosϕ_Gsinϕ |η_q⟩+cosϕ_Gcosϕ |η_s⟩+sinϕ_G|G⟩,
|η_G⟩ =-sinϕ_Gsinϕ |η_q⟩ -sinϕ_Gcosϕ |η_s⟩+cosϕ_G|G⟩.
Then, one can obtain the relationships among the transition matrix elements of ⟨π^0|V_μ|D_s^+⟩, ⟨η|V_μ| D_s^+ ⟩, ⟨η_s|V_μ|D_s^+⟩ with the help of Eqs. (<ref>) and (<ref>),
⟨π ^0| V_μ | D_s^+ ⟩ =δsinϕ⟨η_s| V_μ | D_s^+ ⟩,
⟨η | V_μ | D_s^+ ⟩ = - sinϕ⟨η_s| V_μ | D_s^+ ⟩.
The transitions D^+_s→π^0 and D^+_s→η are induced via the ss̅ component, which can be seen in Fig. <ref>(a). The transition matrix elements ⟨ P| V_μ | D_s^+ ⟩ with P = (π^0, η_s) have the definition
⟨ P (p)|V_μ|D_s^+(p+q)⟩ =2f_+^D_s^+P(q^2) p_μ+f̃^D_s^+P(q^2) q_μ,
with relationship f̃^D_s^+P (q^2) = f_+^D_s^+P(q^2) + f_-^D_s^+P(q^2) and q being the momentum transfer.
Therefore, one can obtain the relationship between the two TFFs f^D_s^+ π^0 (η)_±(q^2) and f^D_s^+η_s_±(q^2):
f_±^D_s^+π^0(q^2) = δsinϕ f_±^D_s^+η_s(q^2),
f_±^D_s^+η(q^2) =-sinϕ f_±^D_s^+η_s(q^2).
By comparing Eqs. (<ref>) and (<ref>), we can acquire the relational expression:
f_±^D_s^+π^0(q^2)/f_±^D_s^+η(q^2)=-δ.
There are two schemes to calculate the mixing angle δ. The first one is to expand the mixing angle as a lowest-order term δ^(2) plus a higher-order term δ^(4), i.e. δ=δ^(2)+δ^(4). The δ^(2) can be expressed in terms of quark mass ratios, while the higher-order term δ^(4) requires another scheme to obtain; the detailed expressions and calculation approach can be found in Refs. <cit.>. The second one is to use the ratio of the η^'→π^+π^-π^0 and η^'→π^+π^-η branching fractions, where the former is G-parity violating and can only occur through π^0-η mixing. Since the second method can be determined from the experimental side and our calculation in this paper is directly connected with the meson mixing scheme, we adopt the second scheme. The ratio of the decay branching fractions has the following form:
B(η '→π^+π^-π ^0 )/ B(η '→π^+π^-η ) =| ⟨π^+π^-π ^0|H| η ' ⟩/⟨π^+π^-η |H| η ' ⟩|^2ϕ _s( η '→π^+π^-π ^0 )/ϕ _s( η '→π^+π^-η )
=δ ^2ϕ _s( η '→π^+π^-π ^0 )/ϕ _s( η '→π^+π^-η ),
where ⟨π^+π^-π^0(η)| H | η ' ⟩ are the decay amplitudes of η^'→π^+π^-π^0 and η^'→π^+π^-η, and H is the Hamiltonian that induces the η^' three-body decays. One obtains ⟨π^+π^-π^0|H|η'⟩/⟨π^+π^-η | H |η'⟩=-δ according to the mixing scheme given in Eqs. (<ref>) and (<ref>). Furthermore, ϕ_s(η^'→π^+π^-π^0(η)) is the phase space volume of the decay mode η^'→π^+π^-π^0(η). The ratio ϕ_s(η^'→π^+π^-π^0)/ϕ_s(η^'→π^+π^-η)=17.0 can be obtained directly from Refs. <cit.>. The CLEO and BESIII collaborations have measured the branching fraction of η^'→π^+π^-π^0, respectively. In 2018, the ratio B(η '→π^+π^-π ^0 )/ B(η '→π^+π^-η ) was analyzed based on the BESIII data, and its value was determined to be (8.8±1.2)×10^-3, as in Ref. <cit.>. So we can obtain the value of the π^0-η mixing angle δ:
δ ^2=(5.18± 0.71) × 10^-4.
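The value of δ^2 quoted above follows from a one-line computation, reproduced here as a sketch; the inputs are the BESIII-based branching-fraction ratio and the phase-space ratio given in the text.

```python
import math

ratio, ratio_err = 8.8e-3, 1.2e-3   # B(eta'->pi+pi-pi0)/B(eta'->pi+pi-eta), BESIII-based
phase_space_ratio = 17.0            # phi_s(pi+pi-pi0)/phi_s(pi+pi-eta)

delta2 = ratio / phase_space_ratio
delta2_err = ratio_err / phase_space_ratio
delta = math.sqrt(delta2)
print(f"delta^2 = ({delta2*1e4:.2f} +- {delta2_err*1e4:.2f}) x 10^-4, |delta| ~ {delta:.4f}")
# -> delta^2 = (5.18 +- 0.71) x 10^-4, consistent with the value quoted above
```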
In order to study the relevant physical observables, we adopt the explicit expression for the full differential decay width distribution of D_s^+→ Pℓ ^+ν̅_ℓ as follows:
d^2Γ (D_s^+ → Pℓ ^+ν̅_ℓ )/dq^2 dcosθ_ℓ = a_θ _ℓ(q^2)+b_θ_ℓ(q^2) cosθ_ℓ
+c_θ_ℓ(q^2) cos^2θ_ℓ,
where the three q^2-dependent angular coefficient functions have the following expressions <cit.>
a_θ_ℓ(q^2) = N_ ewλ^3/2(1 - m_ℓ^2/q^2)^2 [ |f^D_s^+P_+|^2
+1/λm_ℓ^2/q^2(1-m_π^0^2/m_D_s^+^2)^2 |f^D_s^+P_0|^2],
b_θ_ℓ(q^2) =2 N_ ewλ(1-m_ℓ^2/q^2)^2m_ℓ^2/q^2(1-m_P^2/m_D_s^+^2)^2
×Re[f^D_s^+P_+(q^2) f^D_s^+P*_0(q^2) ],
c_θ_ℓ(q^2) =- N_ ewλ^3/2(1-m_ℓ^2/q^2)^3 |f^D_s^+P_+|^2.
Here, the scalar form factor f_0^D_s^+P(q^2)=f_+^D_s^+P(q^2) + q^2/(m_D_s^+^2-m_π^0^2)f_-^D_s^+P(q^2). For convenience, we have introduced the following shorthand notations, N_ ew = G_F^2|V_cs|^2m_D_s^+^3/(256π^3) and
λ≡λ (1,m_P^2/m_D_s^+^2,q^2/m_D_s^+^2) with
λ (a,b,c) ≡ a^2 + b^2 + c^2 -2(ab+ac+bc).
In this paper, the symbol P is taken to be the π^0 meson, and G_F=1.166 × 10^-5 GeV^-2 is the Fermi coupling constant. |V_cs| is the Cabibbo-Kobayashi-Maskawa (CKM) matrix element <cit.>. With the resulting three q^2-dependent angular coefficient functions, one can calculate the three differential angular observables of the semileptonic decay D_s^+→π^0ℓ^+ν_ℓ, namely the forward-backward asymmetry, the q^2-differential flat term, and the lepton polarization asymmetry, i.e. 𝒜_FB^D_s^+→π^0ℓ^+ν_ℓ(q^2), ℱ_H^D_s^+→π^0ℓ^+ν_ℓ(q^2) and 𝒜_λ_ℓ^D_s^+→π^0ℓ^+ν_ℓ(q^2), respectively. These three observables are extremely sensitive to the lepton mass and to effects of physics beyond the standard model, while a subset of them is also sensitive to the hadronic uncertainties <cit.>. The detailed expressions can be found in Ref. <cit.>.
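For illustration, a minimal sketch of the three angular coefficient functions defined above is given below, following the expressions quoted in the text; the form-factor values passed in are hypothetical placeholders, since the actual f_+ and f_0 come from the LCSR/SSE results discussed later.

```python
import numpy as np

G_F, V_cs = 1.166e-5, 0.987                     # GeV^-2, CKM element
m_Ds, m_P, m_l = 1.9685, 0.13498, 0.000511      # GeV (P = pi0, l = e here)

def kallen(a, b, c):
    # lambda(a,b,c) = a^2 + b^2 + c^2 - 2(ab + ac + bc)
    return a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)

def angular_coefficients(q2, f_plus, f_zero):
    N_ew = G_F**2 * V_cs**2 * m_Ds**3 / (256 * np.pi**3)
    lam = kallen(1.0, m_P**2 / m_Ds**2, q2 / m_Ds**2)
    r_l = m_l**2 / q2
    a = N_ew * lam**1.5 * (1 - r_l)**2 * (abs(f_plus)**2
        + (1/lam) * r_l * (1 - m_P**2/m_Ds**2)**2 * abs(f_zero)**2)
    b = (2 * N_ew * lam * (1 - r_l)**2 * r_l * (1 - m_P**2/m_Ds**2)**2
         * (f_plus * np.conj(f_zero)).real)
    c = -N_ew * lam**1.5 * (1 - r_l)**3 * abs(f_plus)**2
    return a, b, c

# Hypothetical form-factor values at q^2 = 0.5 GeV^2, for illustration only
print(angular_coefficients(q2=0.5, f_plus=0.0113, f_zero=0.0113))
```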
Next, to derive the D_s^+→π^0 TFFs, one can use the QCD LCSR approach. After considering the relationship between the TFFs of the different channels, i.e. Eq. (<ref>), we take the following correlation function to derive f_±^D_s^+π^0(q^2) <cit.>:
Π_μ(p,q)= -iδ∫ d^4x e^iq· x⟨η(p)| T{j_μ(x),j_5^†(0)}|0⟩,
where j_μ(x)=s̅(x)γ_μ c(x), j_5(x)=m_c s̅(x)iγ _5 c(x). In the time-like q^2-region, we can insert the complete set of intermediate states that have the same quantum numbers as the current operator (c̅ iγ_5 s) into the hadronic current of the correlation function. After isolating the pole term of the lowest pseudoscalar D_s-meson, we arrive at the hadronic representation. The dispersion integrations can be replaced with the sum over higher resonances and continuum states. Meanwhile, we work in the space-like q^2-region, where the c-quark operator needs to be contracted with a propagator including the gluon field correction <cit.>. To obtain the desired sum rule for the TFFs, we use the OPE method together with the meson LCDAs <cit.>. After applying the Borel transformation and subtracting the contribution from higher resonances and continuum states, the LCSR for the TFFs is achieved, which finally reads:
f_+^D_s^+π^0(q^2) = -δe^m_D_s^+^2/M^2/2m_D_s^+^2f_D_s^+
×[F_0(q^2,M^2,s_0)+ α_ s C_F/4πF_1(q^2,M^2,s_0)],
f̃^D_s^+π^0(q^2) = -δe^m_D_s^+^2/M^2/m_D_s^+^2f_D_s^+
×[F̃_0(q^2, M^2, s_0) + α_ s C_F/4πF̃_1(q^2, M^2, s_0) ].
The leading-order and next-to-leading order invariant amplitudes F_0(q^2, M^2, s_0)/F̃_0(q^2, M^2, s_0) and F_1(q^2,M^2,s_0)/F̃_1(q^2,M^2,s_0) are given in Ref. <cit.>. The detailed expressions are consistent with the literature <cit.> and are also discussed in our previous work <cit.>.
§ NUMERICAL RESULTS AND DISCUSSIONS
Before proceeding with further calculations, the following input parameters are required. The charm-quark mass is m_c = 1.27 ± 0.02 GeV, the s-quark mass is m_s = 0.093 GeV, and the masses of the D_s, η, π^0 mesons are m_D_s = 1.9685 GeV, m_η = 0.5478 GeV, m_π^0 = 0.13498 GeV. All of them are taken from the Particle Data Group (PDG) <cit.>. The D_s and η-meson decay constants are taken as f_D_s = 0.274 ± 0.013 ± 0.007 GeV <cit.> and f_η = 0.130 ± 0.003 GeV <cit.>.
Furthermore, the twist-2, 3, 4 LCDAs of the η-meson are needed. For the twist-2 LCDA ϕ_2;η(x,μ), we calculated its first three ξ-moments ⟨ξ _2;η^n⟩ |_μ with n=(2,4,6) by using the QCD sum rule within the background field theory, where the accuracy is up to dimension-six nonperturbative vacuum condensates and next-to-leading order QCD corrections for the perturbative part. The values are
⟨ξ _2;η^2 ⟩ |_μ_k=0.231^+0.010_-0.013,
⟨ξ _2;η^4 ⟩ |_μ_k=0.109^+0.007_-0.007,
⟨ξ _2;η^6 ⟩ |_μ_k=0.066^+0.006_-0.006,
where the typical scale in this paper is taken as μ_k = (m_D_s^+^2 - m_c^2)^1/2≈ 1.5 GeV. Thus, we can obtain the higher-order Gegenbauer moments: a^2_2;η(μ_k)=0.089_-0.035^+0.030, a^4_2;η(μ_k)=0.025_-0.010^+0.003, a^6_2;η(μ_k)=0.033_-0.049^+0.054. The detailed analysis and calculation of ⟨ξ _2;η^n⟩ |_μ are given in our recent work <cit.>. The twist-3 and twist-4 LCDA expressions and corresponding parameters are mainly taken from Refs. <cit.>. One can run the hadronic parameters of the twist-2,3,4 LCDAs from the initial factorization scale to another scale by using the renormalization group equation,
c_i(μ_k) =ℒ^γ_c_i/β_0c_i(μ_0),
where ℒ =α _s(μ_k) /α_s(μ_0) , β _0=11-2/3n_f, and the one-loop anomalous dimensions γ_c_i can be seen in Ref. <cit.>.
Next, in order to determine the continuum threshold and Borel parameters for the D_s^+ →π^0 TFFs, one can follow four criteria: (a) the continuum contribution is less than 30% of the total result; (b) the contribution from the twist-4 LCDAs does not exceed 5%; (c) we require the variation of the TFFs within the Borel window to be less than 10%; (d) the continuum threshold s_0 should be close to the squared mass of the first excited state of the D_s-meson. Based on the fourth criterion, we take s_0 close to the squared mass of the excited state D_s0(2590), i.e. s_0 = 6.7(0.2) GeV^2. A reasonable Borel window is found to be M^2=25(2) GeV^2.
Based on the parameters determined above, we can obtain the D_s^+ →π^0 TFFs at the large recoil point, f_±^D_s^+π^0(0), together with the uncertainties from each input parameter, which can be arranged as follows,
f_+^D_s^+π^0( 0 ) =0.0113+( _-0.0008^+0.0008) _δ+( _-0.0001^+0.0002) _s_0
+( _-0.0000^+0.0001) _M^2+( _-0.0007^+0.0009) _m_c,f_D_s+( _-0.0002^+0.0003) _f_η
+( _-0.0000^+0.0001) _a_2;η^2+( _-0.0000^+0.0000) _a_4;η^2+( _-0.0001^+0.0000) _a_6;η^2
=0.0113_-0.0019^+0.0024,
f_-^D_s^+π^0( 0 ) =0.0020+( _-0.0002^+0.0001) _δ+( _-0.0001^+0.0001) _s_0
+( _-0.0000^+0.0000) _M^2+( _-0.0003^+0.0003) _m_c,f_D_s+( _-0.0001^+0.0001) _f_η
+( _-0.0001^+0.0001) _a_2;η^2+( _-0.0000^+0.0000) _a_4;η^2+( _-0.0001^+0.0001) _a_6;η^2
=0.0020_-0.0009^+0.0008.
The physically allowed range for the TFFs is m_ℓ^2 ⩽ q^2 ⩽ (m_D_s-m_π^0)^2 ≈ 3.36 GeV^2. Theoretically, the LCSR approach for the D_s^+ →π^0ℓ^+ν_ℓ TFFs is applicable in the low and intermediate q^2-regions, i.e. q^2 ∈ [0,1.3] GeV^2. One can extrapolate the TFFs to the whole physically allowed q^2-region via the converging simplified series expansion (SSE) in z(q^2,t_0), i.e. the TFFs are expanded as <cit.>:
f_±^D_s^+π^0(q^2) =1/1-q^2/m_D_s^2∑_k=0,1,2β _kz^k( q^2,t_0 )
where β_k are real coefficients and z(q^2,t_0) is the function
z( q^2,t_0 ) = (√(t_+-q^2)-√(t_+-t_0)) / (√(t_+-q^2)+√(t_+-t_0)),
with t_± = (m_D_s± m_π)^2 and t_0=t_+(1-√(1-t_-/t_+)). The SSE method has the merit of keeping the correct analytic structure in the complex plane and ensuring the appropriate scaling, f_±^D_s^+π^0(q^2)∼ 1/q^2 at large q^2. The quality of fit Δ is used to assess the extrapolation and is defined as
Δ =∑_t| F_i(t) -F_i^fit( t ) |/∑_t| F_i(t) |× 100.
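A minimal sketch of the z(q^2,t_0) mapping, the SSE form of the TFFs, and the fit-quality measure Δ is given below; the expansion coefficients β_k used here are dummy placeholders standing in for the fitted values listed in Table <ref>.

```python
import numpy as np

m_Ds, m_pi = 1.9685, 0.13498                       # GeV
t_plus  = (m_Ds + m_pi)**2
t_minus = (m_Ds - m_pi)**2
t0 = t_plus * (1.0 - np.sqrt(1.0 - t_minus / t_plus))

def z(q2):
    a, b = np.sqrt(t_plus - q2), np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def f_SSE(q2, betas):
    # f(q^2) = [1/(1 - q^2/m_Ds^2)] * sum_k beta_k z^k(q^2, t0)
    pole = 1.0 / (1.0 - q2 / m_Ds**2)
    return pole * sum(bk * z(q2)**k for k, bk in enumerate(betas))

def fit_quality(F_lcsr, F_fit):
    # Delta = sum |F - F_fit| / sum |F| x 100
    return 100.0 * np.sum(np.abs(F_lcsr - F_fit)) / np.sum(np.abs(F_lcsr))

betas = (0.011, -0.05, 0.1)                        # dummy beta_0, beta_1, beta_2
q2_grid = np.linspace(0.0, 3.36, 50)
print(f_SSE(0.0, betas), fit_quality(f_SSE(q2_grid, betas), f_SSE(q2_grid, betas)))
```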
After extrapolating the TFFs f_±^D_s^+π^0(q^2) to the whole physical q^2-region, we list the coefficients β_0,1,2 and Δ in Table <ref>. The quality of fit is lower than 1.4%, which indicates good agreement between the SSE and the LCSR results. Then, the behaviors of the D_s^+→π^0 TFFs in the whole physical region with respect to the squared momentum transfer are given in Fig. <ref>, where the darker and lighter bands stand for the LCSR results and the SSE extrapolation of our predictions, respectively. As a comparison, we also present the predictions from other theoretical and experimental groups, such as LCSR 2013 <cit.>, LCSR 2015 <cit.>, and the two sets of BESIII results <cit.>. Note that these theoretical and experimental results are obtained from the relationship in Eq. (<ref>) with the help of the D_s^+→η TFFs. The type-1 set of the BESIII results corresponds to the η→γγ channel and type-2 to the η→π^0π^+π^- channel. The curves show that our results are in good agreement with the other theoretical and experimental predictions within uncertainties. Furthermore, we display the behaviors of the three angular coefficient functions a_θ_ℓ(q^2), b_θ_ℓ(q^2) and c_θ_ℓ(q^2) with uncertainties, at the 10^-17 level, in Fig. <ref>, where the negative of c_θ_ℓ is shown for ease of comparison. As can be seen from the figure, the values of a_θ_ℓ(q^2) and c_θ_ℓ(q^2) are very close within uncertainties, and the absolute value of b_θ_ℓ(q^2) is smaller than those of a_θ_ℓ(q^2) and c_θ_ℓ(q^2).
In the next step, we comment on some phenomenological results for the semileptonic decay D_s^+→π^0ℓ^+ν_ℓ, i.e. the decay width, branching fraction, lepton-flavor universality, and other observables, for which the CKM matrix element |V_cs| is required. Here, we take the average value from the leptonic and semileptonic decay processes for c→ s from the PDG <cit.>, i.e. |V_cs|=0.987± 0.011. With the resulting TFFs, we draw the curves of the D_s^+→π^0ℓ^+ν_ℓ full differential decay width with respect to the two kinematic variables, the squared momentum transfer q^2 and the angle cosine cosθ_ℓ, in Fig. <ref>, with the following remarks:
* As a comparison, we present the predictions from the LCSR in 2013 <cit.> and 2015 <cit.>, the BESIII <cit.> in Fig. <ref>(a), which are also obtained from the D_s^+ →ηℓ^+ν_ℓ by using the expression Eq. (<ref>).
* In Fig. <ref>(a), our predictions agree with the other LCSR results and the BESIII data within errors in the region 0 ⩽ q^2 ⩽ 1.95 GeV^2. The curves of our predictions tend to zero as the squared momentum transfer approaches the small recoil region.
* In Fig. <ref>(b), we exhibit the angular distribution dΓ(D_s^+ →π^0 ℓ^+ ν_ℓ)/dcosθ_ℓ in the region -1⩽cosθ_ℓ⩽ 1, and the curve is asymmetric.
* The uncertainties of our predictions mainly come from the input theoretical parameters.
After integrating the differential decay widths over the whole q^2-region, i.e. m_ℓ^2 ⩽ q^2 ⩽ (m_D_s-m_π^0)^2 ≈ 3.36 GeV^2, we obtain the total decay widths for the two D_s^+ →π^0 ℓ^+ ν_ℓ channels
Γ( D_s^+→π ^0e^+ν_e ) =0.0339_-0.0066^+0.0074× 10^-15 GeV,
Γ( D_s^+→π ^0μ ^+ν _μ) =0.0337_-0.0066^+0.0074× 10^-15 GeV,
which differ only slightly from each other. Furthermore, using the lifetime of the initial-state D_s^+-meson, i.e. τ _D_s^+=( 0.504± 0.007 ) ps <cit.>, we can obtain the branching fractions for the semileptonic decay channels D_s^+ →π^0 ℓ^+ ν_ℓ with ℓ=(e,μ). The results are listed in Table <ref>. The neutral meson mixing effect (NMME) prediction from Li and Yang <cit.>, as well as the BESIII collaboration upper limit <cit.>, are presented as a comparison. The result for the D_s^+ →π^0 e^+ ν_e channel shows that our prediction agrees with that of Li and Yang, and both lie within the region allowed by the BESIII collaboration. We also present the D_s^+ →π^0 μ^+ ν_μ result simultaneously.
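The conversion from the total decay widths to the branching fractions is a simple multiplication by the D_s^+ lifetime (in natural units); the following sketch reproduces the numbers quoted above.

```python
# B = Gamma * tau / hbar, with Gamma in GeV and tau in seconds
hbar = 6.582119569e-25          # GeV * s
tau_Ds = 0.504e-12              # s, D_s+ lifetime quoted above
for label, Gamma in [("e", 0.0339e-15), ("mu", 0.0337e-15)]:   # GeV
    B = Gamma * tau_Ds / hbar
    print(f"B(D_s+ -> pi0 {label}+ nu) ~ {B:.2e}")
# -> ~2.60e-5 and ~2.58e-5, matching the central values in the text
```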
As a further step, the differential distributions of the three angular observables of the semileptonic decay D_s^+→π^0ℓ^+ν_ℓ with ℓ = (e,μ), i.e. the forward-backward asymmetries 𝒜_FB^D_s^+→π^0ℓ^+ν_ℓ(q^2), the q^2-differential flat terms ℱ_H^D_s^+→π^0ℓ^+ν_ℓ(q^2), and the lepton polarization asymmetries 𝒜_λ_ℓ^D_s^+→π^0ℓ^+ν_ℓ(q^2), are presented in Fig. <ref>, which shows that
* Their central values nearly coincide with the upper/lower uncertainty limits in the region 0⩽ q^2⩽ 2.0 GeV^2, and differ slightly in 2.0 GeV^2< q^2⩽ 3.36 GeV^2. This is in agreement with the B→π(K)ℓν_ℓ cases <cit.>.
* Due to the near masslessness of the electron, the lepton polarization asymmetry of the electron channel is equal to 1 within uncertainties.
* The forward-backward asymmetry and flat term of the electron channel are about five orders of magnitude smaller than those of the muon channel.
The integrated results of the three angular observables are
𝒜_FB^D_s^+→π^0μ^+ν_μ = 1.22_-0.01^+0.01× 10^-1,
𝒜_FB^D_s^+→π^0e^+ν_e = 7.49_-0.02^+0.02× 10^-6,
ℱ_H^D_s^+→π^0μ^+ν_μ = 0.48_-0.01^+0.01,
ℱ_H^D_s^+→π^0e^+ν_e = 1.77_-0.14^+0.13× 10 ^-5,
𝒜_λ_ℓ^D_s^+→π^0μ^+ν_μ = 2.49_-0.02^+0.01,
𝒜_λ_ℓ^D_s^+→π^0e^+ν_e = 3.36_-0.00^+0.00.
Finally, the specific value of the ratio for the different decay channels R _π ^0/η^ℓ is presented as follows:
R _π ^0/η^e = ℬ (D_s^+→π ^0e^+ν_e )/ℬ ( D_s^+→η e^+ν_e)
= 1.108_-0.071^+0.039× 10^-3,
where the branching fraction ℬ( D_s^+→η e^+ν_e )=2.346_-0.331^+0.418×10^-2 is taken from our previous work <cit.>. This can be considered a good test of the assumed D_s-meson internal structure, and also of the mixing angle between the π^0 and η states.
§ SUMMARY
In order to gain a deeper insight into heavy-to-light decays, we carry out a study of the semileptonic decay D_s^+ →π^0 e^+ ν_e in this paper. Firstly, the mechanism of the neutral meson mixing effect is briefly introduced, the D_s^+→π^0 TFFs f_±^D_s^+→π^0(q^2) are investigated within the LCSR approach up to NLO correction, and the twist-2 LCDA of the ss̅ component of the η-meson is studied by the QCD sum rule under the background field theory up to full dimension-six accuracy. Secondly, we extrapolate f_±^D_s^+→π^0(q^2) to the whole q^2-region m_ℓ^2⩽ q^2 ⩽ (m_D_s-m_π^0)^2 by using the SSE, and compare with BESIII and other theoretical groups. The behaviors of the three TFF-related angular coefficient functions a_θ_ℓ, b_θ_ℓ, c_θ_ℓ are presented.
Then, the differential decay widths for D_s^+ →π^0 ℓ^+ ν_ℓ versus q^2 and cosθ_ℓ with uncertainties are presented in Fig. <ref>. Our results show good agreement with the BESIII measurements and other LCSR predictions. The total decay widths are given in Eqs. (<ref>) and (<ref>). Furthermore, after considering the lifetime of the initial state, we obtain the branching fractions for the semileptonic decay channels D_s^+ →π^0 ℓ^+ ν_ℓ with ℓ=(e,μ). The results are presented in Table <ref>. Our prediction agrees with that of Li and Yang, and both lie within the region allowed by the BESIII collaboration. Finally, we analyze the forward-backward asymmetries, the q^2-differential flat terms, the lepton polarization asymmetries, and also the ratio of the different decay channels, R _π ^0/η^e=1.108_-0.071^+0.039× 10^-3.
This work was supported in part by the National Natural Science Foundation of China under Grant No.12265010, No.12265009, the Project of Guizhou Provincial Department of Science and Technology under Grant No.ZK[2021]024 and No.ZK[2023]142, the Project of Guizhou Provincial Department of Education under Grant No.KY[2021]030.
99
Khodjamirian:2020btr
A. Khodjamirian,
Hadron Form Factors: From Basic Phenomenology to QCD Sum Rules,
CRC Press, Taylor & Francis Group, 2020,
ISBN 978-1-138-30675-2, 978-1-315-14200-5
CELLO:1990klc
H. J. Behrend et al. [CELLO Collaboration],
A Measurement of the π^0, η and η^' electromagnetic form-factors,
https://doi.org/10.1007/BF01549692
Z. Phys. C 49, 401 (1991).
TPCTwoGamma:1990dho
H. Aihara et al. [TPC/Two Gamma],
Investigation of the electromagnetic structure of η and η^' mesons by two photon interactions,
https://doi.org/10.1103/PhysRevLett.64.172
Phys. Rev. Lett. 64, 172 (1990).
KLOE:2002jed
A. Aloisio et al. [KLOE Collaboration],
Measurement of Γ (ϕ→η^'γ) / Γ (ϕ→ηγ) and the pseudoscalar mixing angle,
https://doi.org/10.1016/S0370-2693(02)02145-7
Phys. Lett. B 541, 45 (2002).
[https://arxiv.org/abs/hep-ex/0206010
hep-ex/0206010]
Muller:2004vf
S. E. Muller [KLOE Collaboration],
KLOE results at the Frascati phi-factory DAPHNE,
https://doi.org/10.1142/S0217751X05023566
Int. J. Mod. Phys. A 20, 1888 (2005).
[https://arxiv.org/abs/hep-ex/0411081
hep-ex/0411081]
BaBar:2006ash
B. Aubert et al. [BABAR Collaboration],
Measurement of the η and η^' transition form-factors at q^2 = 112 GeV^2,
https://doi.org/10.1103/PhysRevD.74.012002
Phys. Rev. D 74, 012002 (2006).
[https://arxiv.org/abs/hep-ex/0605018
hep-ex/0605018]
KLOE:2006guu
F. Ambrosino et al. [KLOE Collaboration],
Measurement of the pseudoscalar mixing angle and eta-prime gluonium content with KLOE detector,
https://doi.org/10.1016/j.physletb.2007.03.032
Phys. Lett. B 648, 267 (2007).
[https://arxiv.org/abs/hep-ex/0612029
hep-ex/0612029]
Anisovich:1997dz
V. V. Anisovich, D. V. Bugg, D. I. Melikhov and V. A. Nikonov,
η-η^' glueball mixing from photon - meson transition form-factors and decay ratio D(s) →η l ν / η^' l ν,
https://doi.org/10.1016/S0370-2693(97)00607-2
Phys. Lett. B 404, 166 (1997).
[https://arxiv.org/abs/hep-ph/9702383
hep-ph/9702383]
Hu:2021zmy
D. D. Hu, H. B. Fu, T. Zhong, L. Zeng, W. Cheng and X. G. Wu,
η ^(' )-meson twist-2 distribution amplitude within QCD sum rule approach and its application to the semi-leptonic decay D_s^+ →η ^(' )ℓ ^+ ν _ℓ,
https://doi.org/10.1140/epjc/s10052-021-09958-0
Eur. Phys. J. C 82, 12(2022).
[https://arxiv.org/abs/2102.05293
arXiv:2102.05293]
Huang:2006as
T. Huang and X. G. Wu,
Determination of the η and η' Mixing Angle from the Pseudoscalar Transition Form Factors,
https://doi.org/10.1140/epjc/s10052-007-0245-3
Eur. Phys. J. C 50, 771 (2007).
[https://arxiv.org/abs/hep-ph/0612007
hep-ph/0612007]
Feldmann:2002kz
T. Feldmann and P. Kroll,
Mixing of pseudoscalar mesons,
https://doi.org/10.1238/Physica.Topical.099a00013
Phys. Scripta T 99, 13-22 (2002).
[https://arxiv.org/abs/hep-ph/0201044
hep-ph/0201044]
Kroll:2004rs
P. Kroll,
Mixing of pseudoscalar mesons and isospin symmetry breaking,
https://doi.org/10.1142/S0217751X0502149X
Int. J. Mod. Phys. A 20, 331-340 (2005).
[https://arxiv.org/abs/hep-ph/0409141
hep-ph/0409141]
Ball:1995zv
P. Ball, J. M. Frere and M. Tytgat,
Phenomenological evidence for the gluon content of eta and eta-prime,
https://doi.org/10.1016/0370-2693(95)01287-7
Phys. Lett. B 365, 367 (1996).
[https://arxiv.org/abs/hep-ph/9508359
hep-ph/9508359]
Feldmann:1998su
T. Feldmann,
Mixing and decay constants of pseudoscalar mesons: Octet singlet versus quark flavor basis,
https://doi.org/10.1016/S0920-5632(99)00152-8
Nucl. Phys. B Proc. Suppl. 74, 151 (1999).
[https://arxiv.org/abs/hep-ph/9807367
hep-ph/9807367]
Feldmann:1998vh
T. Feldmann, P. Kroll and B. Stech,
Mixing and decay constants of pseudoscalar mesons,
https://doi.org/10.1016/S0920-5632(99)00152-8
Phys. Rev. D 58, 114006 (1998).
[https://arxiv.org/abs/hep-ph/9802409
hep-ph/9802409]
Feldmann:1998sh
T. Feldmann, P. Kroll and B. Stech,
Mixing and decay constants of pseudoscalar mesons: The Sequel,
https://doi.org/10.1016/S0370-2693(99)00085-4
Phys. Lett. B 449, 339 (1999).
[https://arxiv.org/abs/hep-ph/9812269
hep-ph/9812269]
Tippens:2001fq
W. B. Tippens, V. Abaev, M. Batinic, V. Bekrenev, W. J. Briscoe, R. E. Chrien, M. Clajus, D. Isenhower, N. Kozlenko and S. Kruglov, et al.
Measurement of charge symmetry breaking by the comparison of π^+ d → p pη with π^- d → nn η,
https://doi.org/10.1103/PhysRevD.63.052001
Phys. Rev. D 63, 052001 (2001).
Li:2020ylu
H. B. Li and M. Z. Yang,
Semileptonic decay of D_s^+→π^0 ℓ^+ ν_ℓ via neutral meson mixing,
https://doi.org/10.1016/j.physletb.2020.135879
Phys. Lett. B 811, 135879 (2020).
[https://arxiv.org/abs/2006.15798
arXiv:2006.15798]
BESIII:2022jcm
M. Ablikim et al. [BESIII Collaboration],
Search for the semileptonic decay D_s^+→π^0e^+ν_e,
https://doi.org/10.1103/PhysRevD.106.112004
Phys. Rev. D 106, 112004 (2022).
[https://arxiv.org/abs/2206.13870
arXiv:2206.13870]
Benayoun:1999fv
M. Benayoun, L. DelBuono, S. Eidelman, V. N. Ivanchenko and H. B. O'Connell,
Radiative decays, nonet symmetry and SU(3) breaking,
https://doi.org/10.1103/PhysRevD.59.114027
Phys. Rev. D 59, 114027 (1999).
[https://arxiv.org/abs/hep-ph/9902326
hep-ph/9902326]
Ricciardi:2012xu
G. Ricciardi,
Semileptonic D decays and η-η^' mixing,
https://doi.org/10.1103/PhysRevD.86.117505
Phys. Rev. D 86, 117505 (2012).
[https://arxiv.org/abs/1209.3386
arXiv:1209.3386]
Ke:2010htz
H. W. Ke, X. Q. Li and Z. T. Wei,
Determining the η-η' mixing by the newly measured ℬ R( D( D_s ) ) →η( η ' ) +ℓ̅+ν _ℓ,
https://doi.org/10.1140/epjc/s10052-010-1383-6
Eur. Phys. J. C 69, 133 (2010).
[https://arxiv.org/abs/0912.4094
arXiv:0912.4094]
Choi:2010zb
H. M. Choi,
Exclusive Rare B_s→ (K,η,η')ℓ^+ℓ^- Decays in the Light-Front Quark Model,
https://doi.org/10.1088/0954-3899/37/8/085005
J. Phys. G 37, 085005 (2010).
[https://arxiv.org/abs/1002.0721
arXiv:1002.0721]
Gronau:2009mp
M. Gronau and J. L. Rosner,
ω-ϕ mixing and weak annihilation in D_s decays,
https://doi.org/10.1103/PhysRevD.79.074006
Phys. Rev. D 79, 074006 (2009).
[https://arxiv.org/abs/0902.1363
arXiv:0902.1363]
Kucukarslan:2006wk
A. Kucukarslan and U. G. Meissner,
ω-ϕ mixing in chiral perturbation theory,
https://doi.org/10.1142/S0217732306020743
Mod. Phys. Lett. A 21, 1423 (2006).
[https://arxiv.org/abs/hep-ph/0603061
hep-ph/0603061]
Gronau:2008kk
M. Gronau and J. L. Rosner,
B decays dominated by ω-ϕ mixing,
https://doi.org/10.1016/j.physletb.2008.07.016
Phys. Lett. B 666, 185 (2008).
[https://arxiv.org/abs/0806.3584
arXiv:0806.3584]
Maltman:1995nq
K. Maltman,
Two model independent results for the momentum dependence of ρ-ω mixing,
https://doi.org/10.1016/0370-2693(95)01208-8
Phys. Lett. B 362, 11 (1995).
[https://arxiv.org/abs/nucl-th/9506024
nucl-th/9506024]
Maltman:1996kj
K. Maltman, H. B. O'Connell and A. G. Williams,
Analysis of ρ - ω interference in the pion form-factor,
https://doi.org/10.1016/0370-2693(96)00293-6
Phys. Lett. B 376, 19 (1996).
[https://arxiv.org/abs/hep-ph/9601309
hep-ph/9601309]
OConnell:1997ggd
H. B. O'Connell, A. W. Thomas and A. G. Williams,
Extracting the ρ-ω mixing amplitude from the pion form-factor,
https://doi.org/10.1016/S0375-9474(97)88425-4
Nucl. Phys. A 623, 559 (1997).
[https://arxiv.org/abs/hep-ph/9703248
hep-ph/9703248]
Gardner:1997yx
S. Gardner, H. B. O'Connell and A. W. Thomas,
ρ-ω mixing and direct CP violation in hadronic B decays,
https://doi.org/10.1103/PhysRevLett.80.1834
Phys. Rev. Lett. 80, 1834 (1998).
[https://arxiv.org/abs/hep-ph/9705453
hep-ph/9705453]
Okubo:1963fa
S. Okubo,
ϕ meson and unitary symmetry model,
https://doi.org/10.1016/S0375-9601(63)92548-9
Phys. Lett. 5, 165 (1963).
Zweig:1964ruk
G. Zweig,
An SU(3) model for strong interaction symmetry and its breaking. Version 1,2,.
Iizuka:1966fk
J. Iizuka,
Systematics and phenomenology of meson family,
https://doi.org/10.1143/PTPS.37.21
Prog. Theor. Phys. Suppl. 37, 21 (1966).
Bali:2014pva
G. S. Bali, S. Collins, S. Dürr and I. Kanamori,
D_s →η, η' semileptonic decay form factors with disconnected quark loop contributions,
https://doi.org/10.1103/PhysRevD.91.014503
Phys. Rev. D 91, 014503 (2015).
[https://arxiv.org/abs/1406.5449
arXiv:1406.5449]
Cheng:2017pcq
H. Y. Cheng and X. W. Kang,
Branching fractions of semileptonic D and D_s decays from the covariant light-front quark model,
https://doi.org/10.1140/epjc/s10052-017-5170-5
Eur. Phys. J. C 77, 587 (2017).
[https://arxiv.org/abs/1707.02851
arXiv:1707.02851]
Verma:2011yw
R. C. Verma,
Decay constants and form factors of s-wave and p-wave mesons in the covariant light-front quark model,
https://doi.org/10.1088/0954-3899/39/2/025005
J. Phys. G 39, 025005 (2012).
[https://arxiv.org/abs/1103.2973
arXiv:1103.2973]
Wei:2009nc
Z. T. Wei, H. W. Ke and X. F. Yang,
Interpretation of the `f_D_s puzzle' in SM and beyond,
https://doi.org/10.1103/PhysRevD.80.015022
Phys. Rev. D 80, 015022 (2009).
[https://arxiv.org/abs/0905.3069
arXiv:0905.3069]
Melikhov:2000yu
D. Melikhov and B. Stech,
Weak form-factors for heavy meson decays: An Update,
https://doi.org/10.1103/PhysRevD.62.014006
Phys. Rev. D 62, 014006 (2000).
[https://arxiv.org/abs/hep-ph/0001113
hep-ph/0001113]
Soni:2018adu
N. R. Soni, M. A. Ivanov, J. G. Körner, J. N. Pandya, P. Santorelli and C. T. Tran,
Semileptonic D_(s)-meson decays in the light of recent data,
https://doi.org/10.1103/PhysRevD.98.114031
Phys. Rev. D 98, 114031 (2018).
[https://arxiv.org/abs/1810.11907
arXiv:1810.11907]
Ivanov:2019nqd
M. A. Ivanov, J. G. Körner, J. N. Pandya, P. Santorelli, N. R. Soni and C. T. Tran,
Exclusive semileptonic decays of D and D_s mesons in the covariant confining quark model,
https://doi.org/10.1007/s11467-019-0908-1
Front. Phys. (Beijing) 14, 64401 (2019).
[https://arxiv.org/abs/1904.07740
arXiv:1904.07740]
Colangelo:2001cv
P. Colangelo and F. De Fazio,
D_s decays to η and η' final states: A Phenomenological analysis,
https://doi.org/10.1016/S0370-2693(01)01112-1
Phys. Lett. B 520, 78 (2001).
[https://arxiv.org/abs/hep-ph/0107137
hep-ph/0107137]
Offen:2013nma
N. Offen, F. A. Porkert and A. Schäfer,
Light-cone sum rules for the D_(s)→η^(')ℓν_l form factor,
https://doi.org/10.1103/PhysRevD.88.034023
Phys. Rev. D 88, 034023 (2013).
[https://arxiv.org/abs/1307.2797
arXiv:1307.2797]
Duplancic:2015zna
G. Duplancic and B. Melic,
Form factors of B,B_s→η^(') and D,D_s→η^(') transitions from QCD light-cone sum rules,
https://doi.org/10.1007/JHEP11(2015)138
JHEP 11, 138 (2015).
[https://arxiv.org/abs/1508.05287
arXiv:1508.05287]
DeFazio:2000my
F. De Fazio and M. R. Pennington,
Radiative ϕ meson decays and η - η^' mixing: A QCD sum rule analysis,
https://doi.org/10.1088/1126-6708/2000/07/051
JHEP 07, 051 (2000).
[https://arxiv.org/abs/hep-ph/0006007
hep-ph/0006007]
Gross:1979ur
D. J. Gross, S. B. Treiman and F. Wilczek,
Light Quark Masses and Isospin Violation,
https://doi.org/10.1103/PhysRevD.19.2188
Phys. Rev. D 19, 2188 (1979).
Gasser:1984ux
J. Gasser and H. Leutwyler,
Low-Energy Expansion of Meson Form-Factors,
https://doi.org/10.1016/0550-3213(85)90493-6
Nucl. Phys. B 250 (1985), 517-538.
Ecker:1999kr
G. Ecker, G. Muller, H. Neufeld and A. Pich,
π^0-η mixing and CP violation,
https://doi.org/10.1016/S0370-2693(00)00213-6
Phys. Lett. B 477 (2000), 88-92.
[https://arxiv.org/abs/hep-ph/9912264
hep-ph/9912264]
Cheng:2018smm
X. D. Cheng, H. B. Li, R. M. Wang and M. Z. Yang,
Study of the isospin breaking decay Y(2175)→ϕ f_0(980)→ϕηπ^0 at BESIII,
https://doi.org/10.1103/PhysRevD.99.014024
Phys. Rev. D 99, 014024 (2019).
[https://arxiv.org/abs/1812.00410
arXiv:1812.00410]
Fang:2017qgz
S. S. Fang, A. Kupsc and D. H. Wei,
An overview of η and η^' decays at BESIII,
https://doi.org/10.1088/1674-1137/42/4/042002
Chin. Phys. C 42, 042002 (2018)
[https://arxiv.org/abs/1710.05173
arXiv:1710.05173]
Becirevic:2016hea
D. Becirevic, S. Fajfer, I. Nisandzic and A. Tayduganov,
Angular distributions of B̅→ D^(∗)ℓν̅_ℓ decays and search of New Physics,
https://doi.org/10.1016/j.nuclphysb.2019.114707
Nucl. Phys. B 946, 114707 (2019).
[https://arxiv.org/abs/1602.03030
arXiv:1602.03030]
Cui:2022zwm
B. Y. Cui, Y. K. Huang, Y. L. Shen, C. Wang and Y. M. Wang,
Precision calculations of B_d,s→π,K decay form factors in soft-collinear effective theory,
https://doi.org/10.1103/JHEP03(2023)140
JHEP 03, 140 (2023)
[https://arxiv.org/abs/2212.11624
arXiv:2212.11624]
Descotes-Genon:2019bud
S. Descotes-Genon, A. Khodjamirian and J. Virto,
Light-cone sum rules for B→ Kπ form factors and applications to rare decays,
https://doi.org/10.1007/JHEP12(2019)083
JHEP 12, 083 (2019).
[https://arxiv.org/abs/1908.02267
arXiv:1908.02267]
Ball:2006wn
P. Ball, V. M. Braun and A. Lenz,
Higher-twist distribution amplitudes of the K meson in QCD,
https://doi:10.1088/1126-6708/2006/05/004
JHEP 05, 004 (2006).
[https://arxiv.org/abs/hep-ph/0603063
hep-ph/0603063]
Fu:2020uzy
H. B. Fu, W. Cheng, R. Y. Zhou and L. Zeng,
D → P(π,K) helicity form factors within light-cone sum rule approach,
https://doi:10.1088/1674-1137/abae4f
Chin. Phys. C 44, 113103 (2020).
[https://arxiv.org/abs/2002.11279
arXiv:2002.11279]
Duplancic:2008ix
G. Duplancic, A. Khodjamirian, T. Mannel, B. Melic and N. Offen,
Light-cone sum rules for B→π form factors revisited,
https://doi.org/10.1088/1126-6708/2008/04/014
JHEP 04 (2008), 014.
[https://arxiv.org/abs/0801.1796
arXiv:0801.1796]
ParticleDataGroup:2020ssz
P. A. Zyla et al. [Particle Data Group],
Review of Particle Physics,
https://doi:10.1093/ptep/ptaa104
PTEP 2020, 083C01 (2020).
Azizi:2010zj
K. Azizi, R. Khosravi and F. Falahati,
Exclusive D_s → (η,η^') ℓν decays in light cone QCD,
https://doi:10.1088/0954-3899/38/9/095001
J. Phys. G 38, 095001 (2011).
[https://arxiv.org/abs/1011.6046
arXiv:1011.6046]
Ball:2004ye
P. Ball and R. Zwicky,
New results on B →π, K, η decay formfactors from light-cone sum rules,
https://doi:10.1103/PhysRevD.71.014015
Phys. Rev. D 71, 014015 (2005).
[https://arxiv.org/abs/hep-ph/0406232
hep-ph/0406232]
Belyaev:1994zk
V. M. Belyaev, V. M. Braun, A. Khodjamirian and R. Ruckl,
D^* D π and B^* B π couplings in QCD,
https://doi:10.1103/PhysRevD.51.6177
Phys. Rev. D 51, 6177 (1995).
[https://arxiv.org/abs/hep-ph/9410280
hep-ph/9410280]
CLEO:2008fxt
P. Naik et al. [CLEO Collaboration],
Observation of η' decays to π^+ π^- π^0 and π^+ π^- e^+ e^-,
https://doi.org/10.1103/PhysRevLett.102.061801
Phys. Rev. Lett. 102, 061801 (2009).
[https://arxiv.org/abs/0809.2587
arXiv:0809.2587]
BESIII:2012aa
M. Ablikim et al. [BESIII Collaboration],
First observation of η(1405) decays into f_0(980)π^0,
https://https://doi.org/10.1103/PhysRevLett.108.182001
Phys. Rev. Lett. 108, 182001 (2012).
[https://arxiv.org/abs/1201.2737
arXiv:1201.2737]
BESIII:2016tdb
M. Ablikim et al. [BESIII Collaboration],
Amplitude Analysis of the Decays η^'→π^+π^-π^0 and η^'→π^0π^0π^0,
https://doi.org/10.1103/PhysRevLett.118.012001
Phys. Rev. Lett. 118, 012001 (2017).
[https://arxiv.org/abs/1606.03847
arXiv:1606.03847]
Huang:2001xb
T. Huang, Z. H. Li and X. Y. Wu,
Improved approach to the heavy to light form-factors in the light cone QCD sum rules,
https://doi:10.1103/PhysRevD.63.094001
Phys. Rev. D 63, 094001 (2001).
Bharucha:2015bzk
A. Bharucha, D. M. Straub and R. Zwicky,
B→ Vℓ^+ℓ^- in the Standard Model from light-cone sum rules,
https://doi:10.1007/JHEP08(2016)098
JHEP 08 (2016) 098.
[https://arxiv.org/abs/1503.05534
arXiv:1503.05534]
BESIII:2019qci
M. Ablikim et al. [BESIII Collaboration],
Measurement of the Dynamics of the Decays D_s^+ →η^(') e^+ ν_e,
https://doi:10.1103/PhysRevLett.122.121801
Phys. Rev. Lett. 122, 121801 (2019).
[https://arxiv.org/abs/1901.02133
arXiv:1901.02133]
Narison:2014ska
S. Narison,
Improved f_D*_(s), f_B*_(s) and f_B_c from QCD Laplace sum rules,
https://doi.org/10.1142/S0217751X1550116X
Int. J. Mod. Phys. A 30, 1550116 (2015).
[https://arxiv.org/abs/1404.6642
arXiv:1404.6642]
|
http://arxiv.org/abs/2306.03074v1
|
20230605175029
|
A General Perspective on Objectives of Reinforcement Learning
|
[
"Long Yang"
] |
cs.LG
|
[
"cs.LG"
] |
A General Perspective on Objectives of Reinforcement Learning
Long Yang
School of Artificial Intelligence, Peking University, Beijing, China
July 31, 2023
In this lecture, we present a general perspective on reinforcement learning (RL) objectives, where we show three versions of objectives.
The first version is the standard definition of objective in RL literature. Then we extend the standard definition to the λ-return version, which unifies the standard definition of objective.
Finally, we propose a general objective that unifies the previous two versions.
The last version provides a high-level understanding of the RL objective: it shows a fundamental formulation that connects some widely used RL techniques (e.g., TD(λ) and GAE), and this objective can potentially be applied to a broad range of RL algorithms.
§ INTRODUCTION
Although reinforcement learning (RL) is widely applied in many fields, there is still a lack of work that establishes the objective of RL starting from the Markov decision process, which is very unfriendly to beginners.
To fill this gap, in this lecture, we provide a self-contained, teachable technical introduction to the objectives of RL, where each section tackles a particular line of work, from the transition probability matrix over the Markov decision process, reward, Bellman equation, and discounted state distribution, to the objectives themselves.
Concretely, this lecture provides three equivalent versions of objectives.
The first version is presented in Theorem <ref>, where it shows the objective as the expectation with respect to the random variable (s,a,s^').
Theorem <ref> illustrates all the random factors in the Markov decision process (MDP), and we refer to it as the standard objective of MDP.
Furthermore, Theorem <ref> extends and unifies the objective that appears in Theorem <ref>.
Theorem <ref> is traceable to TD(λ) <cit.>, and we present it as the expectation with respect to the state s, where s follows the λ-version of the discounted state distribution.
Finally, we present a general objective that unifies the previous two versions (see Theorem <ref>), which provides a high-level understanding of the RL objective: it shows a fundamental formulation that connects some widely used RL techniques (e.g., TD(λ) and GAE), and this objective can potentially be applied to a broad range of RL algorithms.
For example, <cit.> apply the main technique of Theorem <ref> to obtain the surrogate function with respect to GAE <cit.>.
Although GAE has been widely used in RL, it lacks a theoretical analysis of the related algorithms.
Theorem <ref> provides a possible way to establish GAE and empirical results by rigorous analysis.
To clarify this view, we present a surrogate function with respect to GAE, see Section <ref>, where it provides a theoretical fundament for policy optimization with GAE.
§ MARKOV DECISION PROCESS
Reinforcement learning (RL) <cit.> is often formulated as
a Markov decision process (MDP) <cit.>.
In this section, we review some necessary notation w.r.t. MDP.
An MDP is described as a tuple
ℳ=(𝒮,𝒜,ℙ,r,ρ_0,γ).
* 𝒮 is the state space;
* 𝒜 is the action space;
* ℙ(·|·,·):𝒮×𝒜×𝒮→[0,1], where each ℙ(s'|s,a) denotes the probability
of a state transition from s to s^' when playing the action a;
* r(·|·,·):𝒮×𝒜×𝒮→ℝ; each r(s^'|s,a) denotes the reward
of a state transition from s to s^' when playing the action a;
* ρ_0(·):𝒮→[0,1] is the initial state distribution;
* γ∈(0,1) is the discount factor.
The transition probability and reward satisfy the Markov property, i.e., ℙ(s^'|s,a) and r(s^'|s,a) only depend on the immediately preceding state s and action a, not at all on earlier states and actions.
A stationary Markov policy π is a probability distribution defined on 𝒮×𝒜, where π(a|s) denotes the probability of playing a in state s.
We use Π to denote the set that collects all the stationary Markov policies.
Let
τ={s_t, a_t, r_t+1}_t≥0∼π
be the trajectory generated by π,
where
s_0∼ρ_0(·), a_t∼π(·|s_t), s_t+1∼ℙ(·|s_t,a_t), and r_t+1=r(s_t+1|s_t,a_t).
§.§ Single-Step Transition Probability Matrix
Let 𝐏_π∈ℝ^|𝒮|×|𝒮| be the state transition probability matrix, whose components are:
𝐏_π[s,s'] =∑_a∈𝒜π(a|s)ℙ(s'|s,a)=:ℙ_π(s^'|s),
which denotes the one-step state transition probability from s to s^' under the policy π. To better understand the one-step state transition under a policy π, we illustrate it in the next Figure <ref>.
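For concreteness, a minimal sketch of constructing 𝐏_π from a policy π and the transition kernel ℙ of a toy MDP is given below; the state/action sizes and the random dynamics are purely illustrative.

```python
import numpy as np

# P_pi[s, s'] = sum_a pi(a|s) * P(s'|s, a), for a toy MDP with random dynamics
rng = np.random.default_rng(0)
S, A = 4, 3
P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)    # P[s, a, s']
pi = rng.random((S, A));   pi /= pi.sum(axis=-1, keepdims=True)   # pi[a | s]

P_pi = np.einsum('sa,sap->sp', pi, P)         # marginalize over actions
assert np.allclose(P_pi.sum(axis=-1), 1.0)    # each row is a probability distribution
print(P_pi)
```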
|
http://arxiv.org/abs/2306.10098v1
|
20230616174934
|
Differentiable Instruction Optimization for Cross-Task Generalization
|
[
"Masaru Isonuma",
"Junichiro Mori",
"Ichiro Sakata"
] |
cs.CL
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] |
Differentiable Instruction Optimization for Cross-Task Generalization
Masaru Isonuma
Junichiro Mori
Ichiro Sakata
============================================================================================================================================================================
Instruction tuning has been attracting much attention to achieve generalization ability across a wide variety of tasks.
Although various types of instructions have been manually created for instruction tuning, it is still unclear what kind of instruction is optimal to obtain cross-task generalization ability.
This work presents instruction optimization, which optimizes training instructions with respect to generalization ability.
Rather than manually tuning instructions, we introduce learnable instructions and optimize them with gradient descent by leveraging bilevel optimization.
Experimental results show that the learned instruction enhances the diversity of instructions and improves the generalization ability compared to using only manually created instructions.
§ INTRODUCTION
Recently, significant progress has been made in developing models that can generalize to arbitrary tasks by following natural language descriptions <cit.>.
Instruction tuning has been a region of interest as a training technique to obtain such generalization ability <cit.>.
By finetuning pretrained language models on a variety of tasks with their instructions, models can generalize to arbitrary tasks unseen during training.
Many previous studies witnessed the effectiveness of instruction tuning <cit.>.
Various instructions have been created for instruction tuning, such as task name, task definition, positive/negative exemplars of a task, explanations of why each positive/negative exemplar is correct/incorrect, etc.
However, <cit.> showed that the definition and positive exemplars of tasks are sufficient for instruction tuning, and the effect of adding other types of instruction is negligible or sometimes has a negative impact on the generalization performance.
Seeking an optimal instruction for cross-task generalization is an important issue for instruction tuning, yet it requires much human effort (100+ researchers have participated in previous studies).
Furthermore, human-interpretable instructions are not necessarily optimal for obtaining cross-task generalization ability.
Against this background, we propose instruction optimization, which introduces learnable instructions and optimizes them w.r.t. the cross-task generalization ability.
As shown in Figure <ref>, a model θ is optimized to maximize the performance on meta-train tasks following learnable instructions.
By contrast, learnable instructions ϕ are trained to maximize the meta-test performance of the trained model θ^*(ϕ).
This optimization is called bilevel optimization and is frequently used in hyperparameter optimization <cit.>, meta-learning <cit.>, and neural architecture search <cit.>.
We regard training instructions as a special type of hyperparameter and optimize them with gradient descent by relaxing the search space to be continuous.
To create learnable instructions, we propose two methods: instruction embedder, which generates the embeddings of instructions, and instruction extractor, which selects an optimal task exemplar.
Recently, prompt engineering has drawn attention to seek the optimal prompt to achieve a task <cit.>.
Some work studies continuous prompts that perform prompting in the embedding space of tokens <cit.>, whereas others retrieve optimal exemplars as a testing prompt for in-context learning <cit.>.
Our instruction embedder and instruction extractor follow the idea of continuous prompts and prompt retrievers, respectively.
Whereas previous work optimizes prompts to solve an individual task on the test, our study differs in the target and aim of optimization.
We optimize the training prompts to maximize the cross-task generalization ability of the trained model.
In the experiment, we confirmed that the instruction extractor successfully extracted appropriate instructions, providing a proof of concept.
Regarding the comparison with instruction tuning, the instruction embedder enhances the diversity of instructions and improves the generalization ability compared to using only manually created instructions.
In contrast, the instruction extractor does not contribute to the performance gain, which shows that using the same task exemplar across instances is unexpectedly preferable for cross-task generalization.
This study provides a basis for exploring the optimal instructions for instruction tuning.
§ PRELIMINARIES
Instruction tuning trains a model θ to minimize the training loss defined in Eq. (<ref>):
θ^* =argmin_θℒ(θ)
=argmin_θ∑_t ∈𝒯_train∑_i=1^N_t-log p_θ(y_t^(i)|[I_t; X_t^(i)])
where X_t^(i) and I_t denote the embedding matrix of the i-th input and instruction of the task t, respectively.
y_t^(i) is a sequence of tokens that represents a class label or reference text.
Instruction tuning regards all tasks as the conditional text generation given the concatenation of the instruction and task input [I_t; X_t].
By prepending the instruction to the task input, the trained model θ^* can generalize to a variety of unseen tasks t ∉𝒯_train.
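A minimal sketch of this training objective is given below: the instruction embeddings I_t are prepended to the task-input embeddings X_t^(i), and the model is trained with the token-level negative log-likelihood of y_t^(i). The toy model, vocabulary size, and tensor shapes are placeholders; in practice θ would be a pretrained encoder-decoder language model that accepts input embeddings and target labels.

```python
import torch
import torch.nn as nn

# Toy stand-in for an encoder-decoder LM that takes (instruction + input) embeddings
# and returns the token-level cross-entropy (negative log-likelihood) of the targets.
class ToySeq2Seq(nn.Module):
    def __init__(self, d_model=16, vocab=100):
        super().__init__()
        self.proj = nn.Linear(d_model, vocab)
    def forward(self, inputs_embeds, labels):
        # average-pool the concatenated embeddings and predict each target token
        h = inputs_embeds.mean(dim=1, keepdim=True).expand(-1, labels.size(1), -1)
        logits = self.proj(h)                                  # (B, T, vocab)
        return nn.functional.cross_entropy(logits.transpose(1, 2), labels)

d, vocab = 16, 100
model = ToySeq2Seq(d, vocab)
embed = nn.Embedding(vocab, d)                    # word embedding matrix of the model
I_t  = embed(torch.randint(vocab, (1, 8)))        # instruction tokens -> embeddings
X_ti = embed(torch.randint(vocab, (1, 20)))       # task input embeddings X_t^(i)
y_ti = torch.randint(vocab, (1, 5))               # target token ids y_t^(i)
loss = model(torch.cat([I_t, X_ti], dim=1), y_ti) # -log p(y | [I_t; X_t^(i)])
loss.backward()
```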
The optimal training instructions have been sought by manually creating various types of instruction for instruction tuning <cit.>.
However, <cit.> showed that task definition and task exemplars are sufficient for instruction tuning, while adding other types of instruction is negligible or sometimes negatively affects the generalization performance.
This observation motivates us to automatically optimize training instructions, rather than manually tuning them.
We introduce learnable instructions and optimize them with gradient descent by leveraging bilevel optimization.
The next section provides the details of instruction optimization.
§ INSTRUCTION OPTIMIZATION
Instruction optimization splits training tasks 𝒯_train into two sets: meta-train tasks 𝒯_meta-train and meta-test tasks 𝒯_meta-test.
Subsequently, a model θ is trained to minimize the inner loss on meta-train tasks following learnable instructions I_ϕ in Eq. (<ref>).
θ^*(ϕ) = argmin_θℒ_in(θ, ϕ)
=argmin_θ∑_t ∈𝒯_meta-train∑_i=1^N_t-log p_θ(y_t^(i)|[I_ϕ; X_t^(i)])
where ϕ is a parameter for learnable instructions.
I_ϕ is constructed using an instruction embedder (Section <ref>) or an instruction extractor (Section <ref>), which will be explained later.
If the learnable instruction I_ϕ is randomly created, the trained model θ^*(ϕ) performs poorly on unseen tasks.
Therefore, we optimize ϕ such that the trained model θ^*(ϕ) achieves high performance on meta-test tasks, which are not shown during training.
ϕ is updated to minimize the outer loss in Eq. (<ref>).
ϕ^*
= argmin_ϕℒ_out(θ^*(ϕ))
=argmin_ϕ∑_t ∈𝒯_meta-test∑_i=1^N_t-log p_θ^*(y_t^(i)|[I_t; X_t^(i)])
This optimization is called bilevel optimization and is commonly used in hyperparameter optimization.
Note that we use the manually created instruction I_t to measure the meta-test performance because we aim to develop a model that can accept arbitrary human-created instructions.
§.§ Instruction Embedder
This section presents a method for creating learnable instructions I_ϕ.
As shown in Figure <ref> (left), the instruction embedder replaces manually created instructions with the embeddings of learnable instructions or prepends them to manually created instructions.
We consider the following two types of parameterizations of learnable instructions:
Direct Parameterization (DP)
We parameterize the learnable instruction I_ϕ by preparing a learnable matrix for each task: I_ϕ = W_t ∈ℛ^l × d where l denotes the arbitrary length of a learnable instruction, and d is the dimension of the embeddings in the model θ.
Although this parameterization is very simple, the size of the parameter ϕ (|𝒯_train| × l × d) increases when many training tasks exist.
Moreover, as each learnable matrix W_t is updated only when task t is used for computing the meta-train loss, the matrices are updated infrequently when the number of training tasks is large.
Therefore, we propose another parameterization method that is scalable for a large number of training tasks.
Instance Conversion (IC)
Another parameterization method is to convert a task instance z_t^(i) into I_ϕ as shown in Eq. (<ref>) and (<ref>).
h_t^(i) = avgpool(z_t^(i)V_ϕ)
I_ϕ = W_ϕh_t^(i)
where the task instance z_t^(i) is a sequence of tokens defined as “Input: x_t^(i) Output: y_t^(i)”, where x_t^(i) and y_t^(i) represent the i-th input and output of a task t, respectively.
V_ϕ∈ℛ^v × d' is an word embedding matrix where v denotes the vocabulary size, and avgpool denotes the average-pooling operation across the embedded tokens.
h_t^(i)∈ℛ^d' denotes a latent representation of z_t^(i), and W_ϕ∈ℛ^l × d × d' is a learnable tensor to convert the latent representation into an instruction[We attempted to use T5 encoder for obtaining h_t^(i); however, it makes bilevel optimization unstable due to a large number of parameters.].
We assume that V_ϕ and W_ϕ are optimized to generate an optimal instruction given a task instance.
As the parameters are shared across all training tasks, this parameterization is scalable for a large number of training tasks.
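A schematic sketch of this instance-conversion embedder is given below; the dimensions, random token ids, and initialization are illustrative assumptions rather than the exact implementation.

import torch

v, d_model, d_latent, l = 1000, 32, 16, 8            # vocab size, d, d', instruction length
V_phi = torch.nn.Parameter(torch.randn(v, d_latent) * 0.02)           # word embeddings V_ϕ
W_phi = torch.nn.Parameter(torch.randn(l, d_model, d_latent) * 0.02)  # learnable tensor W_ϕ

def instruction_from_instance(z_tokens):
    # z_tokens: token ids of "Input: x Output: y" for one task instance.
    h = V_phi[z_tokens].mean(dim=0)                   # h_t^(i) = avgpool(z V_ϕ), shape (d',)
    return torch.einsum("ldk,k->ld", W_phi, h)        # I_ϕ = W_ϕ h, shape (l, d)

I_phi = instruction_from_instance(torch.randint(v, (20,)))
print(I_phi.shape)                                    # torch.Size([8, 32])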
§.§ Instruction Extractor
We consider another type of instruction that has multiple candidates to choose from.
A task exemplar is one such example, because every task instance j ∈{1, …, N_t} in the training set can be used as a task exemplar.
While instruction tuning randomly selects a task exemplar as instruction, an optimal task exemplar would exist for cross-task generalization.
We explore how to select the optimal task exemplar that maximizes the performance on unseen tasks.
An outline of the instruction extractor is shown in Figure <ref> (right).
We parameterize the probability p_ϕ(z_t^(j)) that the j-th instance is selected as an exemplar of task t.
Similar to the instruction embedder, we consider the following two parameterizations:
Direct Parameterization (DP)
We parameterize the logits of p_ϕ(z_t^(j)) by using a learnable vector v_t ∈ℛ^N_t for each task t.
The logits are converted into probabilities using softmax function in Eq. (<ref>).
p_ϕ(z_t^(j)) = exp(v_t^(j))/∑_j=1^N_texp(v_t^(j))
This parameterization is simple but not scalable when the number of training tasks is large.
Instance Conversion (IC)
While direct parameterization parameterizes p_ϕ(z_t^(j)) regardless of the task instance (i.e., task input and output), instance conversion considers the conditional probability given a task instance.
Specifically, instance conversion parameterizes the probability that z_t^(j) is selected as the exemplar of instance z_t^(i) in Eq. (<ref>).
p_ϕ(z_t^(j)|z_t^(i)) = exp(h_t^(j)W_ϕh_t^(i))/∑_j=1^N_texp(h_t^(j)W_ϕh_t^(i))
where W_ϕ∈ℛ^d' × d' denotes a learnable matrix, and h_t^(j)∈ℛ^d' is a latent representation of the task instance z_t^(j) obtained by Eq. (<ref>).
This parameterization assumes that V_ϕ and W_ϕ are optimized to select an optimal exemplar given a task instance.
As the parameters ϕ are shared across all training tasks, this parameterization is also scalable for a large number of training tasks.
Subsequently, an instance with the highest probability is extracted as an instruction as shown in Eq. (<ref>) and (<ref>).
z_t = argmax_j p_ϕ(z_t^(j))
I_ϕ = z_t V_θ
where V_θ∈ℛ^v × d is the word embedding matrix of the model θ.
Since operation is not differentiable, we use the straight-through estimator <cit.> to approximate the gradient in the backward pass[We also tried to compute I_ϕ using the expectation of z_t^(j): I_ϕ=𝐄_p_ϕ[z_t^(j)V_θ] instead of operation; however, it significantly underperforms.].
As computing the probability of all instances requires a high computational cost when the number of instances is large, we set N_t = N for a constant N and randomly sample N instances from all training instances.
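Below is a sketch of the IC extractor together with the straight-through estimator: candidate exemplars are scored by the bilinear form, the forward pass uses a hard one-hot choice, and gradients flow through the softmax. The shapes and random candidates are illustrative assumptions.

import torch

d_latent, N, seq_len, d_model, v = 16, 32, 20, 64, 1000
W_phi = torch.nn.Parameter(torch.randn(d_latent, d_latent) * 0.02)
V_theta = torch.nn.Embedding(v, d_model)              # word embeddings of the model θ

h_i = torch.randn(d_latent)                            # latent of the current instance z_t^(i)
h_cand = torch.randn(N, d_latent)                      # latents of the N candidate exemplars
cand_tokens = torch.randint(v, (N, seq_len))           # token ids of the candidates

scores = h_cand @ (W_phi @ h_i)                        # h^(j) W_ϕ h^(i), shape (N,)
probs = torch.softmax(scores, dim=0)
hard = torch.nn.functional.one_hot(probs.argmax(), N).float()
weights = hard + probs - probs.detach()                # straight-through: hard forward, soft backward
I_phi = (weights @ V_theta(cand_tokens).reshape(N, -1)).reshape(seq_len, d_model)   # I_ϕ = z_t V_θ
print(I_phi.shape)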
§.§ Efficiently Solving Bilevel Optimization
Directly solving bilevel optimization requires a substantial computational cost because it includes a nested formulation.
As shown in Alg. <ref>, approximating the inner optimization in Eq. (<ref>) by K-gradient steps significantly reduces the computational cost, where K is large enough to reach the optimal points of the inner-loop <cit.>.
Computing the hypergradient ∇_ϕℒ_out(θ^(K)) still requires large memory space 𝒪(K|θ| + |ϕ|) as it needs to store K-step gradients <cit.>, and the language model θ contains a lot of parameters.
Using the implicit function theorem in Eq. (<ref>) and (<ref>), the hypergradient can be computed without storing the intermediate gradients <cit.>.
∇_ϕℒ_out(θ^(K)(ϕ)) = ∂ℒ_out(θ^(K))/∂θ^(K)·∂θ^(K)(ϕ)/∂ϕ
∂θ^(K)(ϕ)/∂ϕ = - [ ∂^2ℒ_in(θ, ϕ)/∂θ∂θ]^-1∂^2ℒ_in(θ, ϕ)/∂θ∂ϕ|_θ^(K),ϕ
However, it is impractical to compute the inverse of the Hessian matrix in Eq. (<ref>) as exactly inverting Hessian often requires 𝒪(|θ|^3) computational cost.
We thus approximate the inverse-Hessian using the Neumann approximation, which is introduced in the hyperparameter optimization <cit.>.
The inverse of the Hessian matrix can be approximated as shown in Eq. (<ref>).
[ ∂^2ℒ_in(θ, ϕ)/∂θ∂θ]^-1=lim_M →∞γ∑_m=0^M[E-γ∂^2ℒ_in(θ, ϕ)/∂θ∂θ]^m
where E denotes an identity matrix.
γ∈ℛ is sufficiently small to satisfy ‖ E-γ∂^2ℒ_in(θ, ϕ)/∂θ∂θ‖ < 1 in the operator norm.
Consequently, the computational cost of the hypergradient considerably decreases to 𝒪(|θ| + |ϕ|) as shown in <cit.>.
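For illustration, the sketch below computes the hypergradient of a small bilevel problem with the Neumann approximation of Eq. (<ref>); the toy quadratic losses stand in for the actual meta-train/meta-test objectives and are an assumption made purely for demonstration.

import torch

theta = torch.randn(5, requires_grad=True)
phi = torch.randn(3, requires_grad=True)
A = torch.randn(5, 3)

def L_in(th, ph):                       # toy inner loss, strongly convex in θ (Hessian = identity)
    return 0.5 * (th - A @ ph).pow(2).sum()

def L_out(th):                          # toy outer loss
    return 0.5 * th.pow(2).sum()

M, gamma = 50, 0.1
v = torch.autograd.grad(L_out(theta), theta)[0]                     # ∂L_out/∂θ
g_in = torch.autograd.grad(L_in(theta, phi), theta, create_graph=True)[0]

p, cur = v.clone(), v.clone()
for _ in range(M):                      # Neumann series for [∂²L_in/∂θ∂θ]^{-1} v
    Hv = torch.autograd.grad(g_in, theta, grad_outputs=cur, retain_graph=True)[0]
    cur = cur - gamma * Hv
    p = p + cur
ihvp = gamma * p

# Hypergradient: -∂²L_in/∂θ∂ϕ applied to the inverse-Hessian-vector product.
hypergrad = -torch.autograd.grad(g_in, phi, grad_outputs=ihvp)[0]
print(hypergrad)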
§ EXPERIMENTS
§.§ Experimental Setup[The code is available at <https://github.com/misonuma/instopt>.]
Dataset
In this experiment, we used Super-NaturalInstructions <cit.> as a benchmark to measure cross-task generalization.
Sup-NatInst consists of over 1,600 diverse tasks and their instructions across multiple languages.
We used English tasks and their instructions, resulting in 876 tasks in total.
We used the same test split of tasks (12 types; 119 tasks) and 100 instances for each task as <cit.>.
The remaining 60 task types (757 tasks) were used for meta-train, meta-test, and validation.
The validation set consisted of 10 instances across all 757 tasks, which were used to determine hyperparameters including meta-train/test split.
Based on the validation performance, we split the 60 task types into 50 and 10 types, which were used for the meta-train and meta-test set, respectively.
We used 100 instances of each task for the meta-train/test set.
Table <ref> summarizes the statistics for each split.
The task types in each split are listed in Appendix <ref>.
Evaluation & Baselines
We assessed the cross-task generalization in two settings: a zero-shot setting that uses task definition as testing instruction, and a one-shot setting that uses a task exemplar (n=1) as testing instruction.
We adopted ROUGE-L <cit.> to evaluate all tasks.
<cit.> shows that the human evaluation results align quite well with ROUGE-L across a variety of tasks.
For baseline training instructions, we used manually created instructions (e.g., task definition), exemplars randomly selected for each task or each instance.
Learnable instructions induced by the instruction embedder or optimal exemplars selected by the instruction extractor were compared.
Implementation Details
In our experiment, we used pretrained T5 <cit.> as the model θ.
Specifically, we use the LM-adapted version of the original T5-base (220M)[<https://huggingface.co/google/t5-base-lm-adapt>], which is further trained with a language modeling objective <cit.>.
The hyperparameters of model θ were tuned based on the validation performance of instruction tuning (baselines), and the same hyperparameters were used for instruction optimization.
The hyperparameters of the learnable instructions ϕ were determined w.r.t. the validation performance of instruction optimization.
Further details are provided in Appendix <ref>.
§.§ Proof of Concept
Before moving on to the comparison with instruction tuning, we show that our instruction extractor successfully optimizes the training instruction.
We trained models with two types of training instructions: one of which is a task exemplar, and the other is a blank text.
Then, we evaluated them on the test set, where a task exemplar is used as the testing instruction.
As shown in Figure <ref> (left), the model trained with a task exemplar achieves nearly 40% ROUGE-L (black), whereas the model trained with blank text significantly declines to approximately 20% ROUGE-L (gray).
Following these preliminary results, we verified that our instruction extractor appropriately selects a task exemplar from the two training instructions and obtains sufficient generalization ability.
Figure <ref> (left) shows that our instruction extractor achieves competitive performance with the model trained with a task exemplar.
Specifically, the instance conversion (IC; blue) converges faster than the direct parameterization (DP; light blue).
Figure <ref> (right) presents the percentage of training instances where a task exemplar is selected as the training instruction.
For DP, the percentage increases smoothly but saturates at approximately 50%.
In contrast, the IC reaches almost 100%, though the increase is slightly unstable.
These results indicate that our instruction extractor successfully selects an appropriate training instruction.
Note that the training time of instruction optimization is reasonable compared to instruction tuning, as shown in Appendix <ref>.
§.§ Main Results
Here, we examine the effectiveness of instruction optimization by comparing it with the baselines.
In Table <ref> and <ref>, we show the average performance across 8 different random seeds and 95% confidence intervals w.r.t. the t-distribution.
Table <ref> shows the average ROUGE-L across all test tasks where the task definition is used as the testing instruction, while varying the training instruction.
As the baseline of training instructions, we used manually created task definitions concatenated with positive/negative exemplars and explanations about each positive/negative exemplar.
When using only learnable instructions generated by the instruction embedder, the performance is considerably worse than that of baselines.
This underperformance suggests that the learned instructions alone cannot substitute for manually created instructions.
However, concatenating the learnable instruction with the task definition leads to a performance gain, whereas prepending it to the other instructions (positive/negative exemplars and explanations) has a negative effect.
As will be elaborated in Section <ref>, adding learnable instructions improves the diversity of instructions and achieves higher generalization performance.
In Table <ref>, we show the results where a task exemplar is used as the testing instruction.
Unfortunately, our instruction extractor underperforms exemplars randomly selected for each task (i.e., the same exemplar is used for every instance of a task).
To investigate the reason for the worse performance, we added another baseline, which randomly selects an exemplar for each instance (i.e., different exemplars are used for each instance).
Unexpectedly, the random exemplars yield considerably worse ROUGE-L when they are selected for each instance.
This result indicates that using the same exemplar across all instances of each task is preferable for cross-task generalization.
As the instruction extractor (DP and IC) updates the optimal exemplar during the optimization, it performs worse than exemplars randomly selected for each task.
In particular, as IC varies the optimal exemplar for each instance, it results in a lower performance.
The evaluation results of each test task type are shown in Appendix <ref>.
§ DISCUSSION
§.§ Analysis of Learned Instruction
We discuss how the learned instruction contributes to the improvement of cross-task generalization.
As the instruction embedder directly generates instruction embeddings in a continuous space, the learned instruction is difficult to interpret.
Following <cit.>, we computed the nearest neighbors of each token in the learned instruction from the vocabulary of the model θ; however, we could not find explicit patterns for the nearest tokens.
Therefore, we computed the embeddings of the learned instructions and visualized them in a two-dimensional space using t-SNE <cit.>.
The embeddings were obtained by the average pooling across the last hidden states encoded by the T5 encoder.
In Figure <ref>, we show the embeddings of top 20 task types with respect to the number of tasks in the meta-train set.
The embeddings of the task definition (left) are closely clustered by the task type, and training tasks do not cover some spaces.
On the other hand, the embeddings of learned instructions (right) are roughly clustered, and some task types are scattered over the embedding space (e.g., sentiment analysis and toxic language detection).
As learned instructions enhance the diversity of instructions and cover a broader embedding space, the trained model can generalize to a wider variety of instructions.
Thus, learned instructions improve the generalization performance on unseen tasks.
Figure <ref> shows the generalization performance concerning the length of the learnable instruction prepended to the task definition.
The model’s performance saturates when the length is 2^6=64.
When the instruction is longer than 64, the performance declines significantly.
As bilevel optimization tends to be unstable for large-scale hyperparameters, a large instruction length leads to low generalization performance.
§.§ Analysis of Meta-train/test Split
We study how meta-train/test split affects the generalization performance of the trained model.
Number of Meta-train/test Tasks
Figure <ref> shows the performance with different numbers of task types in the meta-train/test split: 1/59, 10/50, 20/40, 30/30, 40/20, 50/10, and 59/1.
In each split, meta-train/test tasks were randomly chosen.
The trained model achieves the best generalization performance when the number of meta-test task types is 10.
The performance worsens as the number of meta-test tasks increases, while the number of meta-train tasks decreases correspondingly.
Diverse vs. Not Diverse
We examine whether meta-test tasks should be diverse or not diverse.
If meta-test tasks are diverse, the generalization performance would be improved because the instruction is trained to achieve higher performance on various tasks.
However, it also increases the risk that some of meta-test tasks are similar to meta-train tasks, which would negatively affect the performance on unseen tasks.
It is not obvious whether meta-test tasks should be diverse or not diverse.
To answer this question, we prepared two types of meta-test splits.
One comprises randomly selected tasks, whereas the other consists of tasks that are grouped by k-means clustering.
We prepared 16 different random splits, while k-means divided the tasks into 16 groups based on the embeddings of the task definition.
Then, for both random split and k-means, the best split for the validation set was chosen from the 16 splits.
Experimental results show that the model trained on the random split achieves 36.1 ROUGE-L, while that of k-means scores 35.0 ROUGE-L on the test set.
Although the margin is not significant, we confirmed that diverse meta-test tasks are preferable for cross-task generalization.
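For reference, a minimal sketch of the k-means construction of candidate meta-test splits is shown below; the random vectors stand in for the task-definition embeddings, and the use of scikit-learn is an implementation assumption.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
task_embeddings = rng.normal(size=(757, 768))        # one pooled embedding per training task
labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(task_embeddings)
splits = [np.where(labels == k)[0] for k in range(16)]   # candidate meta-test task sets
print([len(s) for s in splits])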
§ RELATED WORK
Instruction Tuning
Instruction tuning has attracted considerable attention to achieve models that are generalizable across a variety of tasks <cit.>.
By prepending either a few exemplars <cit.> or text-based instructions <cit.> to multi-task learning, the trained model can generalize to tasks unseen during training.
Further progress has been made by scaling the number of tasks <cit.>, scaling the model size <cit.>, and improving the training strategy <cit.>.
In contrast, our work is the first study to optimize training instructions to improve the cross-task generalization ability.
Although Super-NaturalInstructions <cit.> is used as the benchmark for measuring cross-task generalization in our study, our instruction optimization can be applied to other cross-task benchmarks, such as CROSSFIT <cit.> and PromptSource <cit.>.
Prompt Engineering
Recent instruction-based NLP has given rise to prompt engineering, which seeks the most appropriate prompt to achieve a task <cit.>.
While there are numerous studies to search for an optimal prompt in a discrete token space <cit.>, some work studies continuous prompts that perform prompting in the embedding space of tokens <cit.>.
Other studies retrieve appropriate exemplars as a testing prompt for in-context learning and achieve better performance than randomly selected exemplars <cit.>.
Whereas the aforementioned methods optimize prompts to achieve an individual task at test time, our study differs in the target and aim of optimization: we optimize the training prompts to maximize the generalization performance of the trained model.
Bilevel Optimization
Bilevel optimization has been used to optimize hyperparameters <cit.>, initial model weights <cit.>, and model architectures <cit.>.
We optimize the training instructions by regarding them as a special type of hyperparameters.
Learnable instructions consist of many hyperparameters, which makes bilevel optimization difficult in terms of computational cost and stability.
Recent studies <cit.> significantly reduce the computational cost and improve the stability by combining the implicit function theorem with efficient inverse Hessian approximations.
We leverage this idea for instruction optimization, achieving instruction optimization at a reasonable computational cost and stability.
§ CONCLUSION
This study presents instruction optimization, which optimizes training instructions concerning generalization ability.
The experimental results showed that our instruction extractor successfully extracted appropriate instructions, providing a proof of concept.
Regarding the comparison with instruction tuning, the instruction embedder enhanced the diversity of instructions and improved the generalization ability compared to using only manually created instructions.
In contrast, the instruction extractor did not contribute to the performance gain because using the same task exemplar across instances is unexpectedly preferable for cross-task generalization.
This study provides a basis for exploring the optimal instructions for instruction tuning.
§ LIMITATIONS
Our study used T5-base (220M) due to the capacity of our computational resources (Tesla V100 32GB).
Thus, it is unclear whether our method is also effective for larger models, such as T5-XL/XXL.
<cit.> argues that continuous prompts are particularly effective for large T5 models.
Following their results, our instruction embedder is also expected to be effective for larger models.
As shown in Figure <ref>, instruction optimization is slightly unstable to converge.
Some studies tackled the unstable convergence of bilevel optimization by L2-normalization, early stopping <cit.>, or perturbation of hyperparameters <cit.>.
These methods might be effective in stabilizing the instruction optimization.
§ ETHICS STATEMENT
Our study complies with the ACL Ethics Policy.
We used S2ORC <cit.>, PyTorch <cit.> and HuggingFace Transformers <cit.> as scientific artifacts.
Our study was conducted under the licenses and terms of the scientific artifacts.
Our model is trained on a set of publicly available datasets <cit.>, in which undesirable data distribution, such as disinformation, bias, or offensive content, might present.
Such potential risks need to be recognized.
§ ACKNOWLEDGEMENTS
We would like to thank the anonymous reviewers for their valuable feedback.
This work was supported by JST ACT-X JPMJAX1904, JST CREST JPMJCR21D1, NEDO JPNP20006, and JSPS KAKENHI 23K16940, Japan.
acl_natbib
§ APPENDIX
§.§ Task Split
The task types used in the meta-train/meta-test/test split are listed in Table <ref>.
We prepared 16 random splits of meta-train/test and used the one that achieved the best validation performance.
§.§ Implementation Details
We trained model θ for three epochs using Adam <cit.> with a learning rate of 1.0×10^-5 with linear decay, warmup steps of 8000, and a batch size of 2.
The maximum input and output length were set to 1024 and 128, respectively.
Learnable instructions ϕ were trained using Adam with a batch size of 8.
The learning rate was set to 1.0×10^-5 for instruction embedder (DP), 1.0×10^-6 for instruction embedder (IC), 5.0×10^-5 for instruction extractor (DP), 1.0×10^-5 for instruction extractor (IC) with linear decay.
The length of learnable instruction was l=64, the number of inner optimization steps was K=20 in Alg. <ref>, the hyperparameters for the Neumann approximation were M=1 and γ=1.0×10^-5 in Eq. (<ref>).
The maximum input length in Eq. (<ref>) was 128, and we randomly sampled N=32 instances for the candidates of the instruction extractor.
Our code is implemented with Python v3.8.13, PyTorch v1.12.0 <cit.>, and transformers v4.18.0 <cit.>.
Our code is based on the script published by <cit.>[<https://github.com/yizhongw/Tk-Instruct>].
ROUGE-L is computed using the Python package distributed by Google[<https://pypi.org/project/rouge-score/>].
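For completeness, a minimal usage sketch of that package is shown below; the two strings are placeholders.

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
score = scorer.score("the cat sat on the mat", "the cat is on the mat")
print(score["rougeL"].fmeasure)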
§.§ Computational Time
Our experiments were conducted with a single Tesla V100 (32GB).
Each training run takes approximately 8 hours for instruction optimization, while it takes 5 hours for instruction tuning, without validation.
However, the training time of instruction optimization depends on the number of inner training steps K.
It reduces to 6 hours when K=100, while slightly deteriorating the performance.
§.§ Experimental Results for Each Test Task
Table <ref> and Table <ref> show the zero-shot and one-shot evaluation for each test task type, respectively.
We show the average performance across 8 different random seeds and 95% confidence intervals w.r.t. the t-distribution.
|
http://arxiv.org/abs/2306.02237v1
|
20230604023446
|
Frobenius distributions of low dimensional abelian varieties over finite fields
|
[
"Santiago Arango-Piñeros",
"Deewang Bhamidipati",
"Soumya Sankar"
] |
math.NT
|
[
"math.NT"
] |
Given a g-dimensional abelian variety A over a finite field _q, the Weil conjectures imply that the normalized Frobenius eigenvalues generate a multiplicative group of rank at most g. The Pontryagin dual of this group is a compact abelian Lie group that controls the distribution of high powers of the Frobenius endomorphism. This group, which we call the Serre–Frobenius group, encodes the possible multiplicative relations between the Frobenius eigenvalues. In this article, we classify all possible Serre–Frobenius groups that occur for g ≤ 3. We also give a partial classification for simple ordinary abelian varieties of prime dimension g>3.
§ INTRODUCTION
Let E be an elliptic curve over a finite field _q of characteristic p>0. The zeros α_1, α̅_1 of the characteristic polynomial of Frobenius acting on the Tate module of E are complex numbers of absolute value √(q). Consider u_1 ≔α_1/√(q) and u̅_1, the normalized zeros in the unit circle (1). The curve E is ordinary if and only if u_1 is not a root of unity, and in this case, the sequence (u_1^r)_r=1^∞ is equidistributed in (1). Further, the normalized Frobenius traces x_r ≔ u_1^r + u̅_1^r are equidistributed on the interval [-2,2] with respect to the pushforward of the probability Haar measure on (1) via u ↦ u + u̅, namely
λ_1(x) ≔dx/π√(4-x^2),
where dx is the restriction of the Lebesgue measure to [-2,2] (see <cit.>).
In contrast, if E is supersingular, the sequence (u_1^r)_r=1^∞ generates a finite cyclic subgroup of order m, C_m ⊂(1). In this case, the normalized Frobenius traces are equidistributed with respect to the pushforward of the uniform measure on C_m.
This dichotomy branches out in an interesting way for abelian varieties of higher dimension g > 1: potential non-trivial multiplicative relations between the Frobenius eigenvalues α_1, α̅_1, …, α_g, α̅_g increase the complexity of the problem of classifying the distribution of normalized traces of high powers of Frobenius,
x_r ≔ (α_1^r + α̅_1^r + ⋯ + α_g^r + α̅_g^r)/q^r/2∈ [-2g, 2g], r ≥ 1.
In analogy with the case of elliptic curves, we identify a compact abelian subgroup of (1)^g controlling the distribution of Sequence (<ref>) via pushforward of the Haar measure. In this article, we provide a complete classification of this subgroup, which we call the Serre–Frobenius group, for abelian varieties of dimension up to 3. We do this by classifying the possible multiplicative relations between the Frobenius eigenvalues. This classification provides a description of all the possible distributions of Frobenius traces in these cases (see Corollary <ref>). We also provide a partial classification for simple ordinary abelian varieties of odd prime dimension.
Let A be an abelian variety of dimension g over _q. Let α_1, α_2, …, α_g, α̅_1, α̅_2, …, α̅_g denote the eigenvalues of Frobenius, ordered such that arg(α_i) ≥arg(α_j) if g ≥ i>j ≥ 1. Let u_i = α_i/√(q) denote the normalized Frobenius eigenvalues. The Serre–Frobenius group of A, denoted by (A), is the closure of the subgroup of (1)^g generated by the vector 𝐮≔ (u_1, …, u_g).
We classify the Serre–Frobenius groups of abelian varieties of dimension g ≤ 3.
[Elliptic curves]
Let E be an elliptic curve defined over _q. Then
* E is ordinary if and only if (E) = (1).
* E is supersingular if and only if (E) ∈C_1, C_3, C_4, C_6, C_8, C_12.
Moreover, each one of these groups is realized for some prime power q.
We note that the classification of supersingular Serre–Frobenius groups of elliptic curves follows from Deuring <cit.> and Waterhouse's <cit.> classification of Frobenius traces (see also <cit.> and <cit.>).
[Abelian surfaces]
Let S be an abelian surface over _q. Then, S has
Serre–Frobenius group according to Figure <ref>. The possible options for the connected component of the identity, (S)^∘, and the size of the cyclic component group (S)/(S)^∘ are given below. Further, each one of these groups is realized for some prime power q.
[Abelian threefolds]
Let X be an abelian threefold over _q. Then, X has
Serre–Frobenius group according to Figure <ref>. The possible options for the connected component of the identity, (X)^∘, and the size of the cyclic component group (X)/(X)^∘ are given below. Further, each one of these groups is realized for some prime power q.
If g is an odd prime, we have the following classification for simple ordinary abelian varieties. In the following theorem, we say that an abelian variety A splits over a field extension _q^m if A is isogenous over _q^m to a product of proper abelian subvarieties.
[Prime dimension]
Let A be a simple ordinary abelian variety defined over _q of
prime dimension g > 2. Then, exactly one of the following
conditions holds.
* A is absolutely simple.
* A splits over a degree g extension of _q as a power of an elliptic curve, and (A) ≅(1)× C_g.
* 2g + 1 is prime (i.e., g is a Sophie Germain prime) and A splits over a degree 2g + 1 extension of _q as a power of an elliptic curve, and (A) ≅(1)× C_2g+1.
Key to our results is the relation between the Serre–Frobenius group and the multiplicative subgroup U_A ⊂(1) generated by the normalized eigenvalues u_1, …, u_g. Indeed, an equivalent definition of the former is via the Pontryagin dual of the latter (see Lemma <ref>). The rank of the group U_A is called the angle rank of the abelian variety and the order of the torsion subgroup is called the angle torsion order. The relation between (A) and the group generated by the normalized eigenvalues gives us the following structure theorem.
Let A be an abelian variety defined over _q. Then
(A) ≅(1)^δ× C_m,
where δ = δ_A is the angle rank and m = m_A is the angle torsion order. Furthermore, the connected component of the identity is
(A)^∘ = (A__q^m).
§.§ Application to distributions of Frobenius traces
Our results can be applied to understanding the distribution of Frobenius traces of an abelian variety over _q as we range over finite extensions of the base field. Indeed, for each integer r ≥ 1, we may rewrite Equation (<ref>) as
x_r = u_1^r + u̅_1^r + ⋯ + u_g^r + u̅_g^r ∈ [-2g, 2g],
the normalized Frobenius trace of the base change of the abelian variety A to _q^r.
In <cit.>, the authors study Jacobians of smooth projective genus g curves with maximal angle rank[In their notation, this is the condition that the Frobenius angles are linearly independent modulo 1.] and show that the sequence (x_r/2g)_r = 1^∞ is equidistributed on [-1,1] with respect to an explicit measure. The Serre–Frobenius group enables us to remove the assumption of maximal angle rank.
Let A be a g-dimensional abelian variety defined over
_q. Then, the sequence (x_r)_r=1^∞ of normalized traces of
Frobenius is equidistributed in [-2g,2g] with respect to the
pushforward of the Haar measure on (A) ⊆(1)^g via
(A) ⊆(1)^g → [-2g,2g], (z_1, …, z_g) ↦ z_1 + z̅_1 + ⋯ + z_g + z̅_g.
The classification of the Serre–Frobenius groups in our theorems can be used to distinguish between the different Frobenius trace distributions occurring in each dimension.
Let S be a simple abelian surface over _q with Frobenius eigenvalues R_S = {α_1, α_2, α̅_1, α̅_2} and suppose that S_(2)≔ S ×__q_q^2 is isogenous to E^2 for some ordinary elliptic curve E/_q^2. In this case, {α_1^2, α̅_1^2} = R_E = {α_2^2, α̅_2^2}.
Normalizing, and using the fact that S is simple, we see that either (1) u_2 = -u_1, or (2) u_2 = -u̅_1.
* When u_2 = -u_1, the vector of normalized eigenvalues 𝐮 = (u_1, u_2) = (u_1, -u_1) generates the group
(S) = the closure of { (u_1^m, -u_1^m) : m ∈ℤ}, namely { (u,-u) : u ∈(1) }⊂(1)^2.
Extending scalars to _q^2, we get:
(S_(2)) = the closure of { (u_1^2m, (-u_1)^2m) : m ∈ℤ}, namely { (u, u) : u ∈(1) }⊂(1)^2.
* When u_2 = -u̅_1, the vector of normalized eigenvalues 𝐮 = (u_1, u_2) = (u_1, -u̅_1) generates the group (S) = { (u,-u̅) : u ∈(1) }⊂(1)^2. Similar to the case above, (S_(2)) = { (u, u^-1) : u ∈(1) }.
In both cases, the sequence of normalized traces is given by
x_r = u_1^r + u̅_1^r + (-1)^r u_1^r + (-1)^r u̅_1^r ∈ [-4,4].
In particular, x_r = 0 when r is odd, and x_r = 2u_1^r + 2u̅_1^r when r is even.
In both cases (1) and (2), the normalized traces x_r(S) are equidistributed with respect to the pushforward of the Haar measure under the map (S) ⊆(1)^2 → [-4,4] given by (z_1, z_2) ↦ z_1 + z̅_1 + z_2 + z̅_2. This can be computed explicitly as
(1/2)δ_0 + dx/2π√(16-x^2) and dx/π√(16-x^2)
for S and S_(2) respectively, where dx is the restriction of the Haar measure to [-4,4], and δ_0 is the Dirac measure supported at 0.
For instance, choose the surface S to be in the isogeny class with LMFDB label[Recall the https://www.lmfdb.org/Variety/Abelian/Fq/Labelslabelling convention for isogeny classes of abelian varieties over finite fields in the LMFDB: where is the dimension, is the cardinality of the base field, and specifies the isogeny class by writing the coefficients of the Frobenius polynomial in base 26.] https://www.lmfdb.org/Variety/Abelian/Fq/2/5/a_ab and Weil polynomial P(T) = T^4 - T^2 + 25. This isogeny class is ordinary and simple, but not geometrically simple. Indeed, S_(2) is in the isogeny class ^2 = https://www.lmfdb.org/Variety/Abelian/Fq/2/25/ac_bz corresponding to the square of an ordinary elliptic curve. The corresponding a_1-histograms describing the frequency of the sequence (x_r)_r=1^∞ are depicted in Figure <ref>. Each graph represents a histogram of 16^6 = 16777216 samples placed into 4^6 = 4096 buckets partitioning the interval [-2g,2g]. The vertical axis has been suitably scaled, with the height of the uniform distribution, 1/4g, indicated by a gray line.
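The qualitative behavior described in this example can be checked with a short script; the snippet below (assuming only numpy) computes the normalized traces for P(T) = T^4 - T^2 + 25 and its quadratic base change, confirming that x_r(S) vanishes for odd r and that x_r(S_(2)) = x_2r(S).

import numpy as np

q = 5
roots = np.roots([1, 0, -1, 0, 25])               # Frobenius eigenvalues of S
u = roots / np.sqrt(q)                            # normalized eigenvalues
x = [np.sum(u ** r).real for r in range(1, 11)]   # x_r(S) for r = 1, ..., 10
x2 = [np.sum((u ** 2) ** r).real for r in range(1, 6)]   # x_r(S_(2)) = x_{2r}(S)
print(np.round(x, 6))
print(np.round(x2, 6))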
§.§ Relation to other work
The reason for adopting the name “Serre-Frobenius group” is that the Lie group (A) is closely related to Serre's Frobenius torus <cit.>, as explained in Remark <ref>.
§.§.§ Angle rank
In this article, we study multiplicative relations between Frobenius eigenvalues, a subject studied extensively by Zarhin <cit.>. Our classification relies heavily on being able to understand multiplicative relations in low dimension, and we use results of Zarhin in completing parts of it. The number of multiplicative relations is quantified by the angle rank, an invariant studied in <cit.>, <cit.> for absolutely simple abelian varieties by elucidating its interactions with the Galois group and Newton polygon of the Frobenius polynomial. We study the angle rank as a stepping stone to classifying the full Serre–Frobenius group. While our perspective differs from that in <cit.>, the same theme is continued here: the Serre–Frobenius groups depend heavily on the Galois group of the Frobenius polynomial. It is worth noting that here that the results about the angle rank in the non-absolutely simple case cannot be pieced together by knowing the results in the absolutely simple cases (see for instance, see Zywina's exposition of Shioda's example <cit.>).
§.§.§ Sato–Tate groups
The Sato–Tate group of an abelian variety defined over a number field controls the distribution of the Frobenius of the reduction modulo prime ideals, and it is defined via its ℓ-adic Galois representation (see <cit.>). The Serre–Frobenius group can also be defined via ℓ-adic representations in an analogous way: it is conjugate to a maximal compact subgroup of the image of Galois representation ρ_A,ℓ(_q/_q) →(V_ℓ A)⊗, where V_ℓA is the ℓ-adic Tate vector space. Therefore it is natural to expect that the Sato–Tate and the Serre–Frobenius group are related to each other. The following observations support this claim:
* Assuming standard conjectures, the connected component of the identity of the Sato–Tate group can be recovered from knowing the Frobenius polynomial at two suitably chosen primes (<cit.>).
* Several abelian Sato–Tate groups (see <cit.>) appear as Serre–Frobenius groups of abelian varieties over finite fields. The ones with maximal angle rank are:
* (1) is the Sato–Tate group of an elliptic curve with complex multiplication over any number field that contains the CM field (see https://www.lmfdb.org/SatoTateGroup/1.2.B.1.1a). It is also the Serre–Frobenius group of any ordinary elliptic curve (see Figure <ref>), and the a_1-moments coincide.
* (1)^2 is the Sato–Tate group of weight 1 and degree 4 (see https://www.lmfdb.org/SatoTateGroup/1.4.D.1.1a). It is also the Serre–Frobenius group of an abelian surface with maximal angle rank (see Figure <ref>), and the a_1-moments coincide.
* (1)^3 is the Sato–Tate group of weight 1 and degree 6 (see https://www.lmfdb.org/SatoTateGroup/1.6.H.1.1a). It is also the Serre-Frobenius group of abelian threefolds with maximal angle rank (see Figure <ref>), and the a_1-moments coincide.
This is not unexpected, since (1)^g embeds into USp_2g() and composition with the trace map gives the normalized traces (x_r)_r=1^∞.
§.§ Outline
In Section <ref>, we give some background on abelian varieties over finite fields, expand on the definition of the Serre–Frobenius group, and describe how it controls the distribution of traces of high powers of Frobenius.
In Section <ref>, we prove some preliminary results on the geometric isogeny types of abelian varieties of dimension g ≤ 3 and g odd prime. We also recall some results about Weil polynomials of supersingular abelian varieties, and Zarhin's notion of neatness. In Sections <ref>, <ref>, and <ref>, we give a complete classification of the Serre–Frobenius group for dimensions 1, 2, and 3 respectively. In Section <ref>, we discuss the case of simple ordinary abelian varieties of odd prime dimension. A list of tables containing different pieces of the classification follows this section.
§.§ Notation
Throughout this paper, A will denote a g-dimensional abelian variety over _q. The polynomial P_A(T) = ∑_i=0^2g a_iT^2g - i will denote the characteristic polynomial of Frobenius acting on the Tate module of A, and h_A(T) its minimal polynomial. The set of roots of P_A(T) is denoted by R_A. We usually write α_1, α_2, …, α_g, α̅_1, α̅_2, …, α̅_g ∈ R_A for the Frobenius eigenvalues. In the case that P_A(T) is a power of h_A(T), we will denote by e_A this power (See <ref>). The subscript (·)_(r) will denote the base change of any object or map to _q^r. The group U_A will denote the multiplicative group generated by the normalized eigenvalues of Frobenius, δ_A its rank and m_A the order of its torsion subgroup. The group Γ_A will denote the multiplicative group generated by {α_1, α_2, …, α_g, q}. In Section <ref>, S will be used to denote an abelian surface, while in Section <ref>, X will be used to denote a threefold.
§.§ Acknowledgements
We would like to thank David Zureick-Brown, Kiran Kedlaya, Francesc Fité, Brandon Alberts, Edgar Costa and Andrew Sutherland for useful conversations about this paper. We thank Yuri Zarhin for providing us with useful references. We would also like to thank Everett Howe for helping us with a missing piece of the puzzle in Theorem <ref>. This project started as part of the Rethinking Number Theory workshop in 2021. We would like to thank the organizers of the workshop for giving us the opportunity and space to collaborate, and the funding sources for the workshop, AIM, the Number Theory Foundation, and the University of Wisconsin-Eau Claire Department of Mathematics. We would also like to thank Rachel Pries for her guidance at the beginning of the workshop, which helped launch this project.
§ FROBENIUS MULTIPLICATIVE GROUPS
In this section we introduce the Serre–Frobenius group of A and explain how
it is related to Serre's theory of Frobenius tori
<cit.>. We do this from the perspective of the theory of
algebraic groups of multiplicative type, as in <cit.>. We start by recalling some facts about abelian varieties over finite fields.
§.§ Background on Abelian varieties over finite fields
Fix A, a g-dimensional abelian variety over _q. A q-Weil number is an algebraic integer α such that |ϕ(α)| = √(q) for every embedding ϕ of (α) into the complex numbers. Let P_A(T) denote the characteristic polynomial of the Frobenius endomorphism acting on the ℓ-adic Tate module of A. The polynomial P_A(T) is monic of degree 2g, and Weil <cit.> showed that its roots are q-Weil numbers; we denote the set of roots of P_A(T) by R_A ≔{α_1, α_2, …, α_g, α_g+1, …, α_2g} with α_g+j = q/α_j for j ∈{1, …, g}. We index the first g roots according to non-decreasing angles; that is, arg(α_j) ≤arg(α_i) if j < i. The seminal work of Honda
<cit.> and Tate <cit.>
<cit.> classifies the isogeny decomposition
type of A in terms of the factorization of P_A(T). In particular, if A is simple, we have that P_A(T) = h_A(T)^e_A where h_A(T) is the minimal polynomial of the Frobenius endomorphism and e_A is the degree, i.e., the square root of the dimension, of the central simple algebra End^0(A) End(A)⊗ over its center. The Honda–Tate theorem gives a bijective correspondence between isogeny classes of simple abelian varieties over _q and conjugacy classes of q-Weil numbers, sending the isogeny class determined by A to the set of roots R_A. Further, if A ∼ A_1 × A_2 …× A_k, then P_A(T) = ∏_i=1^k P_A_i(T).
Writing P_A(T) = ∑_i=0^2g a_i T^2g-i, the q-Newton polygon of A is the lower convex hull of the set of points {(i, ν(a_i)) ∈^2 : a_i ≠ 0} where ν is the p-adic valuation normalized so that ν(q)=1. The Newton polygon is isogeny invariant. Define the p-rank of A as the number of slope 0 segments of the Newton polygon. An abelian variety is called ordinary if it has maximal p-rank, i.e. its p-rank is equal to g. It is called supersingular if all the slopes of the Newton polygon are equal to 1/2.
The field L = L_A ≔(α_1, …, α_g) is the splitting field of the Frobenius polynomial. By definition, the Galois group (L/) acts on the roots R_A by permuting them.
Whenever A is fixed or clear from context, we will omit the subscript corresponding to it from the notation described above. In particular, we will use P(T), h(T) and e instead of P_A(T), h_A(T) and e_A.
§.§ Angle groups
Denote by Γ≔Γ_A the multiplicative subgroup of the nonzero complex numbers generated by the set of Frobenius eigenvalues R_A, and let Γ_(r)≔Γ_A_(r) for every r ≥ 1. Since α↦ q/α is a permutation of R_A, the set {α_1, …, α_g, q} is a set of generators for Γ; that is, every γ∈Γ can be written as
γ = q^k ∏_j=1^g α_j^k_j
for some (k, k_1, …, k_g) ∈ℤ^g+1.
Since Γ is a subgroup of the multiplicative group of the algebraic numbers, it is naturally a Galois module. However, this perspective is not necessary for our applications. This group is denoted as Φ_A in <cit.>.
We define the angle group of A to be U ≔ U_A, the multiplicative subgroup of (1) generated by the unitarized eigenvalues { u_j ≔α_j/√(q) : j = 1, …, g}. When A is fixed, for every r ≥ 1 we abbreviate U_(r)≔ U_A_(r).
The angle rank of an abelian variety A/_q is the rank of
the finitely generated abelian group U_A. It is denoted by
δ_A ≔rank U_A. The angle torsion order m_A is the order of the torsion subgroup of U_A, so
that U_A ≅ℤ^δ_A⊕ℤ/m_Aℤ.
The angle rank δ is by definition an integer between 0 and g. When δ = g, there are no multiplicative relations among the normalized eigenvalues. In other words, there are no additional relations among the generators of Γ_A apart from the ones imposed by the Weil conjectures. If A is absolutely simple, the maximal angle rank condition also implies that the Tate conjecture holds for all powers of A (see Remark 1.3 in <cit.>). On the other extreme, δ = 0 if and only if A is supersingular (See Example 5.1 <cit.>).
The angle rank is invariant under base extension:
δ(A) = δ(A_(r)) for every r ≥ 1. Indeed, any multiplicative relation between u_1^r, …, u_g^r is a multiplicative relation between u_1, …, u_g. We have
that
U_A/Tors(U_A) ≅
U_A_(r)/Tors(U_A_(r)) for every positive integer
r. In particular, U_A/Tors(U_A) ≅ U_A_(m) where m = m_A is the angle torsion order of A.
[Extension and restriction of scalars]
Let A/_q be an abelian variety with Frobenius polynomial P_A(T) = ∏ (T-α) ∈[T] and circle group U_A = ⟨ u_1, …, u_g⟩. Then, the extension of scalars A_(r) has Frobenius polynomial P_(r)(T) = ∏(T-α^r) and circle group U_A_(r) = ⟨ u_1^r, …, u_g^r ⟩⊂ U_A. On the other hand, if B/_q^r is an abelian variety for some r ≥ 1, and A/_q is the Weil restriction of B to _q, then P_A(T) = P_B(T^r) and U_A = ⟨ U_B, ζ_r⟩⊃ U_B. See <cit.>.
§.§ The Serre–Frobenius group
For every locally compact abelian group G, denote by G its Pontryagin dual; this is the topological group of continuous group homomorphisms G →(1). It is well known that G ↦G gives an anti-equivalence of categories from the category of locally compact abelian groups to itself. Moreover, this equivalence preserves exact sequences, and every such G is canonically isomorphic to its double dual via the evaluation isomorphism. See <cit.> for the original reference and <cit.> for a gentle introduction.
Recall that we defined the Serre–Frobenius group of A as the topological group generated by the vector 𝐮 = (u_1, …, u_g) of normalized eigenvalues (see Definition <ref>). This explicit description of the group is practical for calculating examples, but the following equivalent definition is conceptually advantageous.
The Serre–Frobenius group of an abelian variety A has character group U_A. In particular, (A) ≅U_A canonically via the evaluation isomorphism.
We have an injection U_A →(A) given by mapping γ to the character ϕ_γ that maps 𝐮 to γ. To see that this map is surjective, observe that by the exactness of Pontryagin duality, the inclusion (A) ↪(1)^g induces a surjection ^g = (1)^g →(A). Explicitly, this tells us that every character of (A) is given by ϕ(z_1, …, z_g) = z_1^m_1… z_g^m_g for some (m_1, ⋯, m_g) ∈^g. By continuity, every character ϕ of (A) is completely determined by ϕ(𝐮). In particular, we have that ϕ(𝐮) = u_1^m_1… u_g^m_g∈ U_A.
The following theorem should be compared to <cit.>
Let A be an abelian variety defined over _q. Then
(A) ≅(1)^δ× C_m,
where δ = δ_A is the angle rank and m = m_A is the angle torsion order. Furthermore, the connected component of the identity is
(A)^∘ = (A_(m)).
Since every finite subgroup of (1) is cyclic, the torsion part
of the finitely generated group U_A is generated by some primitive
m-th root of unity ζ_m. The group U_(m) is torsion free by Remark <ref>. We thus have the split short exact sequence
1 →⟨ζ_m ⟩→ U_A → U_(m)→ 1, where the surjection is given by u ↦ u^m.
After dualizing, we get:
1 →(A_(m)) →(A) →⟨ζ_m ⟩→ 1.
We conclude that (A)^∘ = (A_(m)) and
(A)/(A)^∘≅⟨ζ_m ⟩.
By definition, U_A is the image of Γ_A under the radial projection ψ: ℂ^×→(1), z ↦ z/|z|. Thus, we have a short exact sequence
1 →Γ_A∩ℝ_>0→Γ_A → U_A → 1,
where the surjection is ψ|_Γ, which is split by the section u_j ↦α_j. The kernel Γ∩ℝ_>0 is free of rank 1 and contains the group q^ℤ. The relation between the Serre–Frobenius group (A) and Serre's Frobenius Torus (see <cit.>, <cit.>) can be understood via their character groups.
* The (Pontryagin) character group of (A) is U_A.
* The (algebraic) character group of the Frobenius torus of A is the torsion free part of Γ_A.
§.§ Equidistribution results
Let (Y,μ) be a measure space in the sense of Serre (see Appendix A.1 in <cit.>). Recall that a sequence (y_r)_r = 1^∞⊂ Y is μ-equidistributed if for every continuous function f Y → we have that
∫_Y f μ = lim_n →∞1/n∑_r=1^n f(y_r) .
In our setting, Y will be a compact abelian Lie group with probability Haar measure μ. We have the following lemma.
Let G be a compact group, and h ∈ G. Let H be the closure of
the group generated by h. Then, the sequence
(h^r)_r = 1^∞ is equidistributed in H with respect to the
Haar measure μ_H.
For a non-trivial character ϕ: H →(1), the image
of the generator ϕ(h) = u ∈(1) is not trivial. We see
that
lim_n→∞1/n∑_r=1^n ϕ(h^r) = lim_n→∞1/n∑_r=1^n u^r = 0,
both when u has finite or infinite order. The latter case follows from Weyl's equidistribution theorem in (1). The result follows from Lemma 1 in <cit.> and the Peter–Weyl theorem.
[Corollary <ref>]
Let A be a g-dimensional abelian variety defined over
_q. Then, the sequence (x_r)_r=1^∞ of normalized traces of
Frobenius is equidistributed in [-2g,2g] with respect to the
pushforward of the Haar measure on (A) ⊆(1)^g via
(A) ⊆(1)^g → [-2g,2g], (z_1, …, z_g) ↦ z_1 + z̅_1 + ⋯ + z_g + z̅_g.
By Lemma <ref>, the sequence
(u^r)_r=1^∞ is equidistributed in (A) with
respect to the Haar measure μ_(A). By definition, the
sequence (x_r)_r=1^∞ is equidistributed with respect
to the pushforward measure.
When A has maximal angle rank δ = g, the
Serre–Frobenius group is the full torus (1)^g, and the sequence
of normalized traces of Frobenius is equidistributed with respect to
the pushforward of the measure μ_(1)^g; which we denote by
λ_g(x) following the notation[Beware of the
different choice of normalization. We chose to use the interval
[-2g,2g] instead of [-1,1] to be able to compare our
distributions with the Sato–Tate distributions of abelian
varieties defined over number fields.] in
<cit.>.
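As a quick numerical illustration of this maximal-rank case, the snippet below (numpy assumed) samples the pushforward of the Haar measure on (1)^g under the trace map; the choice g = 3 and the bin count are arbitrary.

import numpy as np

g, n = 3, 200_000
rng = np.random.default_rng(1)
angles = rng.uniform(0.0, 2.0 * np.pi, size=(n, g))
samples = 2.0 * np.cos(angles).sum(axis=1)        # z_1 + conj(z_1) + ... + z_g + conj(z_g)
hist, edges = np.histogram(samples, bins=50, range=(-2 * g, 2 * g), density=True)
print(hist[:5])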
§ PRELIMINARY RESULTS
For this entire section, we let A be an abelian variety over _q, where q=p^d for some prime p.
§.§ Splitting of simple ordinary abelian varieties of odd
prime dimension
Recall from Section <ref> that an abelian variety A splits over a field extension _q^m if A ∼_(m) A_1 × A_2 and A_1, A_2 < A, i.e., if A obtains at least one isogeny factor when base-changed to _q^m. We say that A splits completely over _q^m if A_(m)∼ A_1 × A_2 ×…× A_k, where each A_i is an absolutely simple abelian variety defined over _q^m. In other words, A acquires its geometric isogeny decomposition over _q^m.
In this section, we analyze the splitting behavior of simple ordinary abelian varieties of
prime dimension g > 2. Our first result is analogous to <cit.> for odd primes.
Let A be a simple ordinary abelian variety defined over _q of
prime dimension g > 2. Then, exactly one of the following
conditions holds.
* A is absolutely simple.
* A splits over a degree g extension of _q as a power of an elliptic curve, and (A) ≅(1)× C_g.
* 2g + 1 is prime (i.e., g is a Sophie Germain prime) and A splits over a degree 2g + 1 extension of _q as a power of an elliptic curve, and (A) ≅(1)× C_2g+1.
Let α = α_1 be a Frobenius eigenvalue of A, and denote by
K = (α)≅[T]/P(T) the number field generated by
α. Since A is ordinary, (α^n) ≠ is a
CM-field over for every positive
integer n, and P(T) is irreducible and therefore
[(α):] = 2g. Suppose that A is not absolutely
simple, and let m be the smallest positive integer such that
A_(m) splits; by <cit.> this is also
the smallest m such that
(α^m) ⊊(α). Since (α^m) is
also a CM field, it is necessarily a quadratic imaginary number
field.
Observe first that m must be odd. Indeed, if
m were even, then (α^m/2) = (α) and
[(α^m/2):(α^m)] = 2. This contradicts the fact
that [(α):] = 2 g, since g is an odd prime. By <cit.>, there are two possibilities:
* P(T) ∈[T^m],
* K = (α^m, ζ_m).
If <ref> holds and P(T) = T^2m + bT^m + q^g, we
conclude that m = g and b=a_g. In this case, the minimal polynomial of α^g has degree 2 and is of the form h_(g)(T) = (T-α^g)(T-α^g). Note that α^g and α^g are distinct, since A is ordinary. Thus, P_g(T) = h_(g)(T)^g and A must split over a degree g extension.
If <ref> holds, we have that φ(m) | 2g. Since m>1
is odd and φ(m) takes even values, we have two possible
options: either φ(m) = 2 or φ(m) = 2g. If
φ(m) = 2, then [K:(α^m)] ≤ 2 which contradicts
the fact that (α) is a degree 2g extension of
. Therefore, necessarily, φ(m) = 2g, and (α) = (ζ_m).
Recall from
elementary number theory that the solutions to this equation are
(m, g) = (9,3) or (m, g) = (2g + 1,g) for g a
Sophie Germain prime.
* (g > 3) In this case, <ref> only occurs when
2g+1 is prime.
* (g = 3) In this case, either m = 7 or m = 9. To
conclude the proof, we show that m = 9 does not occur. More
precisely, we will show that if A splits over a degree 9
extension, it splits over a degree 3 extension as well. In fact,
suppose that K = (ζ) = (α) for some primitive
9th root of unity. The subfield F = (ζ^3) is the only
quadratic imaginary subfield of K, so if a power of α
does not generate K, it must lie in F. Suppose α^9 lies
in F. Let σ be the generator of (K/F) sending
ζ to ζ^4. The minimal polynomial of α over F
divides T^9 - α^9, so σ(α) = α·ζ^j
for some j, and σ^2(α) = αζ^5j. Since the
product of three conjugates of α over F must lie in F,
we have that
α^3·ζ^6j =
(α)(α·ζ^j)(α·ζ^5j) ∈ F, which
implies that α^3 ∈ F and we conclude that A splits over
a degree-3 extension of the base field.
We thank Everett Howe for explaining to us why the case m=9 above does not occur.
§.§ Zarhin's notion of neatness
In this section we discuss Zarhin's notion of neatness, a
useful technical definition closely related to the angle rank.
Define
R_A' ≔{ u_j^2 : α_j ∈ R_A }.
Note that
according to our numbering convention, we have that
u̅_j = u_j^-1 = u_j+g for every j ∈{1,…, g}.
Let A be an abelian variety defined over _q. We say that A
is neat if it satisfies the following conditions:
* Γ_A is torsion free.
* For every function e: R_A' →ℤ
satisfying
∏_β∈ R_A'β^e(β)= 1,
we have e(β) = e(β̅) for every β∈ R_A'.
* If A is supersingular and Γ_A is
torsion free, then A is neat. Indeed, in this case we have that
R_A' = {1} and condition <ref> is trivially
satisfied.
* Suppose that the Frobenius eigenvalues of
A are distinct and not supersingular. Some base extension of A
is neat if and only if A has maximal angle rank.
* In general, maximal angle rank always implies neatness.
§.§ Behavior of Serre–Frobenius groups in products
We begin by stating an important lemma, attributed to Bjorn Poonen in <cit.>.
If E_1, …, E_n are n pairwise
absolutely non-isogenous elliptic curves over _q, then their
eigenvalues of Frobenius α_1, …, α_n are
multiplicatively independent.
In fact, for abelian varieties that split completely as products of elliptic curves, we can give an explicit description of the Serre–Frobenius group.
Let A be a g-dimensional abelian variety over _q that splits completely as a product of elliptic curves. Let r be the degree of the smallest extension such that A ∼_(r) A_1 × B_1 × B_2 …× B_s, satisfying
* A_1 is supersingular or trivial,
* each B_j splits over _q^rm_j as the power of an ordinary elliptic curve E_j/_q^rm_j, and
* E_j is not geometrically isogenous to E_i for i ≠ j.
Let n_1≥ 1 be the smallest integer such that A_1 is isogenous to a power of an elliptic curve E over _q^rn_1. Then, (A) = (1)^s × C_m_A, where
m_A = r ·lcm( n_1 m_E, m_1, m_2, …, m_s ).
The proof of this proposition follows from the following lemmas.
Let B/_q be an abelian variety such that B splits completely over _q^m as a power of an ordinary elliptic curve, for some m≥ 1. Then, (B) = (1)× C_m.
Angle rank is invariant under base change, so δ_B = δ_E^g = 1. It remains to show that the angle torsion order m_B is equal to m. Since B_(m)∼ E^g, we have that P_B,(m)(T) = P_E(T)^g. If we denote by γ_1, γ_1, …γ_g, γ_g and π_1, π_1 the Frobenius eigenvalues of B and E respectively, we have that γ_1^m, γ_1^m, …, γ_g^m, γ_g^m = π_1, π_1. Possibly after relabelling, we have that γ_j = ζ_m^ν_jγ_1 for j = 1, …, g and at least one ζ_m^ν_j is a primitive m-th root. This shows that C_m ⊂ U_B, so that m | m_B. On the other hand, we have that (B_(m)) = (E^g) ≅(1) is connected. This implies that m_B | m and the result follows.
Let A = A_1 × B be an abelian variety over _q such that A_1 is supersingular with angle torsion order m_A_1 = m_1 and B is simple and splits completely over _q^m as the power of an ordinary elliptic curve. Then, (A)^∘≅(1) and m_A = lcm(m_1, m).
From the discussion above, we see that U_A = ⟨ζ_m_1, ζ_m, v_1 ⟩, where v_1 = γ_1/√(q) and all the other roots γ_j can be written as ζ_m^ν_jγ_1 with at least one ζ_m^ν_j primitive. It follows that U_A = C_lcm(m_1,m)⊕⟨ v_1 ⟩ so that δ_A =1 and m_A = lcm(m_1,m).
If B/_q is an ordinary abelian variety such that B ∼_(r) B_1×⋯× B_s and satisfying
* each B_j splits over _q^m_j as the power of an ordinary elliptic curve E_j/_q^m_j, and
* E_j is not geometrically isogenous to E_i for i ≠ j.
then (B) ≅(1)^s× C_m_B with m_B = r·lcm(m_1,…, m_s).
This follows from combining Lemma <ref> with the fact that the Serre–Frobenius group of B is connected over an extension of degree lcm(m_1, m_2, …, m_s). The proof then proceeds as in Lemma <ref>.
§.§ Supersingular Serre–Frobenius groups
Recall that a q-Weil number α is called supersingular
if α/√(q) is a root of unity. In <cit.>, Zhu classified the minimal polynomials
h(T) of supersingular q-Weil numbers. Let Φ_r(T) denote the
rth cyclotomic polynomial, φ(r) ≔deg Φ_r(T) the Euler totient function, and (a/b) the
Jacobi symbol. Then the possibilities for the minimal polynomials of supersingular q-Weil numbers are given in Table <ref>.
[Table <ref>]
In case (Z-1), m is any positive integer. In cases (Z-2) and
(Z-3), m additionally satisfies m ≢ 2 (mod 4), and
n ≔ m/(2,m).
The symbol ζ_m denotes the primitive m-th root of unity given by e^2 π i/m, and ζ_m^ν is also primitive.
Note that in this case,
φ(n) = φ(m)/(2,m). Following the notation in <cit.>, given a
polynomial f(T)∈ K[T] for some field K, and a constant
a ∈ K, let
f^[a](T) ≔ a^deg f f(T/a).
Given any supersingular abelian variety A defined over _q, the
Frobenius polynomial P_A(T) is a power of the minimal polynomial
h_A(T), and this minimal polynomial is of type (Z-1), (Z-2), or
(Z-3) as above. We say that A is of type Z-i if
the minimal polynomial h_A(T) is of type (Z-i) for i = 1,2,3.
Since U_A is finite in the supersingular case, we
have that (A) ≅ U_A. In
particular, we can read off the character group U_A from the fourth
column in Table <ref>. For instance, if m=3 and d is even, then we have a polynomial of type Z-1, and the Serre–Frobenius group is isomorphic to C_3. On the other hand, if m=3 and we have a polynomial of type Z-2, then the Serre–Frobenius group is isomorphic to C_6. Given a
q-Weil polynomial f(T) ∈[T] with roots α_1, ⋯, α_2n, the associated normalized polynomial
f̃(T) ∈[T] is the monic polynomial with roots
u_1 = α_1/√(q), …, u_2n = α_2n/√(q). Table <ref> allows us to go back and forth between
q-Weil polynomials f(T) and the normalized polynomials
f̃(T).
* If h(T) is the minimal
polynomial of a supersingular q-Weil number of type Z-1, the
normalized polynomial h̃(T) is the cyclotomic polynomial
Φ_m(T). Conversely, we have that
h(T) = h̃^[√(q)](T).
* If h(T) is the
minimal polynomial of a supersingular q-Weil number of type Z-2,
the normalized polynomial h̃(T) is the polynomial
Φ_n(T^2). Conversely,
h(T) = h̃^[q](T).
§ ELLIPTIC CURVES
The goal of this section is to prove Theorem <ref>. Furthermore, we give a thorough description of the set of
possible orders m for the supersingular Serre–Frobenius groups
(E) = C_m in terms of p and q = p^d.
The isogeny classes of elliptic curves over _q were classified
by Deuring <cit.> and Waterhouse <cit.>. Writing the characteristic polynomial of
Frobenius as P(T) = T^2 + a_1T + q, the Weil bounds give
|a_1| ≤ 2√(q). Conversely, the integers a in the interval
|a| ≤ 2√(q) corresponding to the isogeny class of an elliptic
curve are the following.
Let p be a prime and q = p^d. Let a ∈ satisfy
|a|≤ 2√(q).
* If p ∤ a, then a is the
trace of Frobenius of an elliptic curve over _q. This is the
ordinary case.
* If p | a, then a is the trace of Frobenius of an
elliptic curve over _q if and only if one of the following
holds:
* d is even and a = ± 2√(q),
* d is even and a = √(q) with p ≢ 1 (mod 3),
* d is even and a = -√(q) with p ≢ 1 (mod 3),
* d is even and a = 0 with p ≢ 1 (mod 4),
* d is odd and a=0,
* d is odd, a = ±√(2q) with
p = 2.
* d is odd, a = ±√(3q) with
p = 3.
This is the supersingular case.
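The case analysis above can be transcribed directly into a small enumeration. The following Python sketch is ours (the function name admissible_traces is hypothetical); it lists the integers a that occur as traces of Frobenius of elliptic curves over _q under the conditions of the theorem.

from math import isqrt

def admissible_traces(p, d):
    q = p ** d
    bound = isqrt(4 * q)                      # floor(2*sqrt(q))
    traces = {a for a in range(-bound, bound + 1) if a % p != 0}   # ordinary case
    def add_sqrt(n):                          # add +-sqrt(n) when it is an integer
        r = isqrt(n)
        if r * r == n:
            traces.update({r, -r})
    if d % 2 == 0:                            # supersingular case, d even
        add_sqrt(4 * q)                       # a = +-2*sqrt(q)
        if p % 3 != 1:
            add_sqrt(q)                       # a = +-sqrt(q)
        if p % 4 != 1:
            traces.add(0)                     # a = 0
    else:                                     # supersingular case, d odd
        traces.add(0)
        if p == 2:
            add_sqrt(2 * q)                   # a = +-sqrt(2q)
        if p == 3:
            add_sqrt(3 * q)                   # a = +-sqrt(3q)
    return sorted(traces)

print(admissible_traces(3, 1))                # [-3, -2, -1, 0, 1, 2, 3] for q = 3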
[Figure: a right triangle in the complex plane with vertices 0, -a/2 (on the real axis), and α_1; the hypotenuse from 0 to α_1 has length √(q), the base is labeled -a/2, and the angle ϑ_1 sits at the origin.]
In the ordinary case, the normalized Frobenius eigenvalue u_1 is not
a root of unity, and thus (E) = (1). In the supersingular
case, the normalized Frobenius eigenvalue u_1 is a root of unity,
and thus (E) = C_m is cyclic, with m equal to the order of
u_1. For each value of q and a in Theorem <ref>
part (2), we get a right triangle of hypotenuse of length √(q)
and base -a/2, from which we can deduce the angle ϑ_1 and thus the order m of the corresponding root of unity u_1. We thus obtain the following restatement of Theorem <ref> in terms of the classification of Serre–Frobenius groups for elliptic curves.
There are seven Serre–Frobenius groups for elliptic curves, and they correspond to seven possible
Frobenius distributions of elliptic curves over finite fields. For
ordinary elliptic curves (as explained in Section
<ref>), the sequence of normalized traces (x_r)_r=1^∞ is
equidistributed in the interval [-2,2] with respect to the measure λ_1(x) (Equation <ref>) obtained as the
pushforward of the Haar measure μ_(1) under z ↦ z + z̄. See Figure <ref>.
The remaining six Serre–Frobenius groups are finite and cyclic; they
correspond to supersingular elliptic curves. For a given
C_m = ⟨ζ_m⟩⊂(1), denote by δ_m the measure obtained by pushforward along z ↦ z + z of the normalized counting
measure,
μ_C_m(f)
∫ f μ_C_m1/m∑_j=1^m f(ζ_m^j).
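Both behaviours are easy to illustrate computationally. The sketch below is ours and purely illustrative (the helper names are hypothetical): it recovers the order m of u_1 from the angle ϑ_1 = arccos(-a/(2√(q))) of the triangle above, and lists the atoms 2cos(2πj/m) supporting the pushforward measure δ_m.

import math

def order_of_u1(q, a, max_order=36):
    # angle of u_1 read off from the triangle: cos(theta_1) = -a / (2*sqrt(q))
    theta = math.acos(-a / (2 * math.sqrt(q)))
    for m in range(1, max_order + 1):
        if math.isclose(math.sin(m * theta), 0.0, abs_tol=1e-9) and round(math.cos(m * theta)) == 1:
            return m                           # smallest m with u_1**m = 1
    return None                                # not a root of unity up to max_order (ordinary case)

def atoms_of_delta_m(m):
    # support of the pushforward of the counting measure on C_m under z -> z + conj(z)
    return sorted({round(2 * math.cos(2 * math.pi * j / m), 10) for j in range(1, m + 1)})

print(order_of_u1(9, 3))      # 3   (a = sqrt(q), d even)
print(order_of_u1(2, -2))     # 8   (a = -sqrt(2q), p = 2)
print(atoms_of_delta_m(4))    # [-2.0, 0.0, 2.0], with weights 1/4, 1/2, 1/4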
§ ABELIAN SURFACES
The goal of this section is to classify the possible Serre–Frobenius
groups of abelian surfaces (Theorem <ref>). The proof is
a careful case-by-case analysis, described by Flowchart <ref>.
We separate our cases first according to p-rank, and then
according to simplicity. In the supersingular and almost ordinary
cases this stratification is enough. In the ordinary case, we have to
further consider the geometric isogeny type of the surface.
§.§ Simple ordinary surfaces
We restate a theorem of Howe and Zhu in our notation. Suppose that
P(T) = T^4 + a_1T^3 + a_2T^2 + qa_1T + q^2 is the Frobenius
polynomial of a simple ordinary abelian surface S defined over
_q. Then, exactly one of the following conditions holds:
* S is absolutely
simple.
* a_1 = 0 and S
splits over a quadratic extension.
* a_1^2 = q + a_2 and S splits
over a cubic extension.
*
a_1^2 = 2a_2 and S splits over a quartic extension.
* a_1^2 = 3a_2 -3q and S splits
over a sextic extension.
Let S be a simple ordinary abelian surface over _q. Then,
exactly one of the following conditions holds:
* S is
absolutely simple and (S) ≅(1)^2.
* S splits over a quadratic
extension and (S) ≅(1)× C_2.
* S splits over a cubic
extension and (S) ≅(1)× C_3.
* S splits over a quartic
extension and (S) ≅(1)× C_4.
* S splits over a sextic
extension and (S) ≅(1)× C_6.
(a) From <cit.>, we conclude that some finite base extension of an absolutely simple abelian surface is neat and therefore has maximal angle rank by Remark <ref>. Alternatively, this also follows from the proof of <cit.> for Jacobians of genus 2 curves, which generalizes to any abelian surface. Theorem <ref> then implies that (S) = (1)^2.
(b,c,d,e) Denote by m the smallest degree of the extension
_q^m⊃_q over which S splits. By Theorem
<ref> we know that m ∈{2,3,4,6}. Let α∈{α_1, ᾱ_1, α_2, ᾱ_2} be a Frobenius eigenvalue of S. From
<cit.> and since S is ordinary, we have that
[(α):(α^m)] = [(α^m):] = 2. In
particular, the minimal polynomial h_(m)(T) of α^m is
quadratic, and P_(m)(T) = h_(m)(T)^2. This implies that
{α_1^m, ᾱ_1^m} = {α_2^m, ᾱ_2^m}, so that there is a
primitive m-th root of unity ζ giving one of the following
multiplicative relations:
α_2 = ζα_1 or α_2 = ζᾱ_1.
We note here that ζ must be a primitive m-th root, since otherwise, P_n(T) would
split for some n ≤ m, contradicting the minimality of
m.
If α_2 = ζα_1, then
(S) = ⟨ (u_1, ζ u_1) ⟩ = {(u, ζ^k u) : u ∈(1), k ∈ℤ/m}≅(1)× C_m
and (S)^∘ embeds diagonally in (1)^2. Similarly, if α_2 = ζᾱ_1, then (S) ≅(1)× C_m with embedding
(S) = {(u, ζ^k ū) : u ∈(1), k ∈ℤ/m}⊂(1)^2.
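For concreteness, the Howe–Zhu case analysis and the lemma above can be combined into a small classifier. The following Python sketch is ours (the inputs are hypothetical coefficients, not claimed to come from actual isogeny classes): it returns the splitting degree m of a simple ordinary surface, so that (S) ≅ (1)^2 when m = 1 and (S) ≅ (1) × C_m otherwise.

def splitting_degree(a1, a2, q):
    # Howe-Zhu conditions for a simple ordinary abelian surface with
    # P(T) = T^4 + a1*T^3 + a2*T^2 + q*a1*T + q^2
    if a1 == 0:
        return 2
    if a1 ** 2 == q + a2:
        return 3
    if a1 ** 2 == 2 * a2:
        return 4
    if a1 ** 2 == 3 * a2 - 3 * q:
        return 6
    return 1      # none of the relations hold: S is absolutely simple

print(splitting_degree(-2, 1, 3))   # 3 for these hypothetical coefficients (a1^2 = q + a2)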
§.§ Non-simple ordinary surfaces
Let S be a non-simple ordinary abelian surface defined over
_q. Then, S is isogenous to a product of two ordinary elliptic
curves E_1× E_2. As depicted in Figure <ref>, we
consider two cases:
(S-B)
E_1 and E_2 are not isogenous over
_q.
(S-C) E_1 and E_2
become isogenous over some base extension
_q^m_1⊇_q, for m_1≥ 1.
Let S be an abelian surface defined over _q
such that S is isogenous to E_1 × E_2, for E_1 and E_2
absolutely non-isogenous ordinary elliptic curves. Then S has
maximal angle rank δ = 2 and (S) = (1)^2.
The proof is a straightforward application of Lemma <ref>.
Let S be an abelian surface defined over _q such that S is
isogenous to E_1 × E_2, for E_1 and E_2 absolutely
isogenous ordinary elliptic curves. Then S has angle rank
δ = 1 and (S) = (1)× C_m for
m ∈1,2,3,4,6. Furthermore, m is precisely the degree of
the extension of _q over which E_1 and E_2 become
isogenous.
Let α_1, ᾱ_1 and α_2, ᾱ_2 denote the Frobenius eigenvalues of E_1 and E_2 respectively. Let m_1 be the smallest
positive integer such that E_1 ∼_(m_1) E_2. From Proposition <ref>, we immediately have that (S) ≅(1) × C_m, where m=m_1.
In order to find the value of m, observe that
{α_1^m, ᾱ_1^m} = {α_2^m, ᾱ_2^m}, from which we get one of
the following multiplicative relations:
α_2 = ζα_1 or α_2 = ζᾱ_1,
for some primitive m-th root of unity ζ. Since the curves E_1 and
E_2 are
ordinary, the number fields (α_1) and (α_2) are imaginary quadratic and (α_1) = (α_1^m) = (α_2^m) = (α_2). Hence, ζ∈(α_1) and thus φ(m) = [(ζ):] ∈{1,2}; therefore m ∈{1,2,3,4,6}. Depending on whether α_2 = ζα_1 or α_2 = ζᾱ_1, the group
(S) = (1) × C_m embeds in (1)^2 as
(u,ζ^r) ↦ (u, ζ^r u) or (u,ζ^r) ↦ (u, ζ^r u^-1).
§.§ Simple almost ordinary surfaces
An abelian variety is called almost ordinary if the
set of slopes of the Newton polygon is {0, 1/2, 1} and the slope
1/2 has length 2. In <cit.> Lenstra and Zarhin carried out a careful study
of the multiplicative relations of Frobenius eigenvalues of simple
almost ordinary varieties, which was later generalized in <cit.>. In particular, they
prove that even-dimensional simple almost ordinary abelian varieties
have maximal angle rank (<cit.>). Since
every abelian surface of p-rank 1 is almost ordinary, their result
allows us to deduce the following:
Let S be a simple and almost ordinary abelian surface defined over
_q. Then, S has maximal angle rank δ = 2 and
(S) = (1)^2.
§.§ Non-simple almost ordinary surfaces
If S is almost ordinary and not simple, then S is isogenous to the
product of an ordinary elliptic curve E_1 and a supersingular
elliptic curve E_2.
Let S be a non-simple almost ordinary abelian surface defined over
_q. Then, S has angle rank δ =1 and
(S) ≅(1)× C_m for some m ∈{1,3,4,6,8,12}.
Let E_1 be an ordinary elliptic curve and E_2 a supersingular
elliptic curve such that
S ∼ E_1× E_2. By Proposition <ref>, (S) = (E_1)×(E_2) ≅(1)× C_m with m in
the list of possible orders of Serre–Frobenius groups of
supersingular elliptic curves.
§.§ Simple supersingular surfaces
Since every supersingular abelian variety is geometrically isogenous to a power of an elliptic curve, the Serre–Frobenius group only depends on the extension over which this occurs (Proposition <ref>). We separate our analysis into the simple and non-simple cases.
The classification of Frobenius polynomials of supersingular abelian
surfaces over finite fields was completed by Maisner and Nart
<cit.> building on work of Xing
<cit.> and Rück <cit.>. Denoting by
(a_1, a_2) the isogeny class of abelian surfaces over _q with
Frobenius polynomial P_S(T) = T^4 + a_1T^3+ a_2T^2 + qa_1T + q^2,
the following lemma gives the classification of Serre–Frobenius groups
of simple supersingular surfaces.
Let S be a simple supersingular abelian surface defined over
_q. The Serre–Frobenius group of S is classified according to
Table <ref>.
The notation for polynomials of type Z-3
is taken from <cit.>, where the authors
classify simple supersingular Frobenius polynomials for g≤ 7. We
have
Ψ_5,1(T) ≔∏_a∈(ℤ/5)^×(T - (a/5)ζ_5^a) = T^4 + √(5)T^3 + 3T^2 + √(5)T + 1,
and
Ψ_2,3(T) ≔∏_a∈(ℤ/3)^×(T-ζ_8ζ_3^a)(T-ζ̄_8ζ_3^a) = T^4 + √(2)T^3 + T^2 + √(2)T + 1.
We exhibit the proof of the second line in Table <ref> for exposition. The remaining cases can be checked similarly. If (a_1,a_2) = (0,0), p ≠ 2 and q is an odd power of p: then, P(T) = T^4 + q^2 = √(q)^4Φ_8(T/√(q)) = q^2Φ_4(T^2/q) and h̃(T) = Φ_8(T). Thus U_S is generated by a primitive 8th root of unity.
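The identity used in this case can be checked symbolically; a short sympy sketch (ours) is:

from sympy import symbols, sqrt, expand, cyclotomic_poly

T, q = symbols('T q', positive=True)
lhs = T**4 + q**2
via_phi8 = expand(sqrt(q)**4 * cyclotomic_poly(8, T).subs(T, T / sqrt(q)))
via_phi4 = expand(q**2 * cyclotomic_poly(4, T).subs(T, T**2 / q))
print(expand(lhs - via_phi8), expand(lhs - via_phi4))   # 0 0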
§.§ Non-simple supersingular surfaces
If S is a non-simple supersingular surface, then S is isogenous to
a product of two supersingular elliptic curves E_1 and E_2. If m_E_1 and m_E_2 denote the torsion orders of E_1 and E_2 respectively, then the degree of the extension over which E_1 and E_2 become isogenous is precisely lcm(m_E_1, m_E_2). Thus, by Proposition <ref>, we have the following result, depending on the values of q=p^d as in Table <ref>.
Let S be a non-simple supersingular abelian surface defined over
_q. Then, S has angle rank δ = 0 and (S) = C_m
for m in the set M = M(p, d) described in Figure
<ref>.
§ ABELIAN THREEFOLDS
In this section, we classify the Serre–Frobenius groups of abelian threefolds (see Figure <ref>). Let X be an abelian variety of
dimension 3 defined over _q. For our analysis, we will first
stratify the cases by p-rank and then by simplicity. Before we proceed, we make some observations about simple threefolds that will be useful later.
§.§ Simple abelian threefolds
If X is a simple abelian threefold, there are only two
possibilities for the Frobenius polynomial P_X(T) = h_X(T)^e:
P_X(T) = h_X(T)
P_X(T) = h_X(T)^3.
Indeed, if h_X(T) were a linear or cubic polynomial, it would have a real
root, ±√(q). By an argument of Waterhouse (<cit.>), the q-Weil numbers ±√(q) must come from simple abelian varieties of dimension 1 or 2. Further, Xing <cit.> showed
that <ref> can only happen in very special cases (see also <cit.>).
Let X be a simple abelian threefold over _q. Then,
P_X(T) = h_X(T)^3 if and only if 3 divides log_p(q) and
h_X(T) = T^2 + aq^1/3T+q with (a,p) = 1.
Note that in this case, X is non-supersingular and has Newton Polygon as in Figure <ref>.
Further, putting these observations together gives us that every simple abelian threefold is either absolutely simple or is isogenous over an extension to the cube of an elliptic curve. Thus, we have the following fact.
If X is an abelian threefold defined over _q that is not ordinary or supersingular, then X is simple if and only if it is absolutely simple.
§.§ Simple ordinary threefolds
In this section, X will denote a simple ordinary threefold defined
over _q. As a corollary to Theorem <ref>, we have the following.
Let X be a simple ordinary abelian threefold defined over
_q. Then, exactly one of the following conditions is satisfied.
* X is absolutely simple.
* X splits over a degree 3 extension and
P_X(T) = T^6+a_3T^3 + q^3.
* X splits over a degree 7 extension and the number field of
P_X(T) is (ζ_7).
Let X be an absolutely simple abelian threefold defined over
_q. Then X has maximal angle rank δ = 3 and
(X) = (1)^3.
Let m = m_X be the order of the torsion subgroup of Γ_X. By
<cit.>, we have that X_(m) is
neat. Since X_(m) is ordinary and simple, its Frobenius
eigenvalues are distinct and non-real. Remark <ref>
implies that X_(m) has maximal angle rank. Since angle rank is
invariant under base extension (Remark
<ref>) we have that
δ(X) = δ(X_(m)) = 3 as we wanted to show.
Let X be a simple ordinary abelian threefold over _q that is
not absolutely simple. Then X has angle rank 1 and
* (X) = (1)× C_3 if X splits over a degree 3
extension, or
* (X) = (1)× C_7 if X splits over a degree 7
extension.
From the proof of Theorem <ref>, we have that the
torsion free part of U_X is generated by a fixed normalized root
u_1 = α_1/√(q), and all other roots u_j for 1 < j ≤ g
are related to u_1 by a primitive root of unity of order 3 or
7 respectively.
The isogeny class
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ad_f_ah
is ordinary and absolutely simple. According to Lemma
<ref>, its Serre–Frobenius group is
the full torus (1)^3 and the following histogram approximates
the distribution corresponding to the measure λ_3.
The isogeny class
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/a_a_ad
is ordinary and simple, but it splits over a degree 3 extension as
E^3 for an elliptic curve E. According to Lemma
<ref>, its Serre–Frobenius group is
(1)× C_3, and the histogram corresponding to this group is
the following.
The isogeny class
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ae_j_ap
is ordinary and simple, but it splits over a degree 7 extension as
E^3 for an elliptic curve E. According to Lemma
<ref>, its Serre–Frobenius group is
(1)× C_7, and the histogram corresponding to this group is
the following.
§.§ Non-simple ordinary threefolds
Let X be a non-simple ordinary threefold defined over _q. Then
X is isogenous to a product S× E, for some ordinary surface
S and some ordinary elliptic curve E.
The Frobenius
polynomial of X is the product of the Frobenius polynomials of S
and E. Further, exactly one of the following is true for S: either it is absolutely simple, or it is simple and geometrically isogenous to the power of a single elliptic curve, or it is not simple (see observation after <ref>). The Serre–Frobenius group of X depends on its geometric isogeny decomposition, for which the possibilities are:
* X is geometrically isogenous to E^3.
* X is geometrically isogenous to
E_1^2× E, for some ordinary elliptic curve E_1, with E_1 ≁__q E.
* X is geometrically isogenous to
E_1 × E_2 × E, for ordinary and pairwise geometrically
non-isogenous elliptic curves E_1, E_2 and E.
* X is geometrically isogenous to S× E
for an absolutely simple ordinary surface S and an ordinary elliptic curve E.
Let X be a non-simple ordinary abelian threefold over _q. The
Serre–Frobenius group of X is given by Table
<ref>.
Recall that X ∼ S× E over _q.
<ref> If X is geometrically isogenous to E^3, then S is geometrically isogenous to E^2. By Proposition <ref> (X) = (1) × C_m, where m is the smallest extension over which S ∼_(m) E^2. By <cit.>, we have that m ∈{1, 2, 3, 4,6}.
<ref>
In this case, by Proposition <ref>, (X) = (1)^2 × C_m, where m is the smallest extension over which S ∼_(m) E_1^2. As in the previous case, m ∈{1,2,3,4,6}.
<ref> In this case S ∼ E_1× E_2 over the base field. By Lemma <ref> we conclude that δ_X = 3.
<ref> In this case, X ∼ S × E with S absolutely
simple. By <cit.>, we know that X
is neat. Since X is ordinary and S is simple, all Frobenius
eigenvalues are distinct and not
supersingular. By Remark <ref>, we conclude that
δ_X = 3.
[Non-simple ordinary threefolds of splitting type <ref>]
(m=1) The isogeny class
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ad_j_an
is isogenous over the field of definition to the cube of an elliptic curve.
(m=2) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ab_f_ad
over a quadratic extension is the cube of an elliptic curve.
(m=3) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/a_a_af
over a cubic extension is the cube of an elliptic curve.
(m=4) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/5/ak_bv_afc
over a quartic extension is the cube of an elliptic curve.
(m=6) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/7/ao_di_alk
over a degree 6 extension is the cube of an elliptic curve.
[Non-simple ordinary threefolds of splitting type <ref>]
(m=1) The isogeny class
https://www.lmfdb.org/Variety/Abelian/Fq/3/3/af_r_abi
is isogenous to a product of the form E_1^2 × E.
(m=2) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ab_b_b
over a quadratic extension is
a product of the form E_1^2 × E.
(m=3) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/3/ad_d_ac
over a cubic extension is
a product of the form E_1^2 × E.
(m=4) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/3/af_p_abg
over a quartic extension is
a product of the form E_1^2 × E.
(m=6) The base change of
https://www.lmfdb.org/Variety/Abelian/Fq/3/2/ae_k_ar
over a sextic extension is
a product of the form E_1^2 × E.
§.§ Simple almost ordinary threefolds
Let X be a simple and almost ordinary abelian threefold over _q. Recall that X is in fact absolutely simple, so that the Frobenius polynomial P_(r)(T) is irreducible for every positive integer r.
Let X be a simple almost ordinary abelian threefold over _q. The Serre–Frobenius group of X can be read from Table <ref>.
Let m m_X be the torsion order of U_X, and consider the base extension Y X_(m). By <cit.>, we know that δ_X = δ_Y ≥ 2. Furthermore, since Y is absolutely simple, by the discussion in Section <ref>, the roots of P_Y(T) = P_(m)(T) are distinct and non-supersingular. If Y is neat, Remark <ref> implies that δ_X = δ_Y = 3. Assume then that Y is not neat, so that δ_X = 2. Let α = α_1 be a Frobenius eigenvalue of X. By <cit.> and the discussion thereafter, we have that the sextic CM-field (α) = (α^m) contains a quadratic imaginary field B, and (u_1u_2u_3)^2m = Norm_(α)/B(u_1^2m) = 1. Since U_Y has no torsion, this implies that (u_1u_2u_3)^m = 1. Moreover, this means that u_1u_2u_3 = ζ for some primitive[The primitivity of ζ follows from the fact that m is the minimal positive integer such that U_(m) is torsion free.] m-th root of unity ζ. Therefore,
ζ^2 = Norm_(α)/B(u_1^2) ∈ B.
If m is odd, ζ^2 is also primitive, so that φ(m) ≤ 2 and m ∈{1,3}. If m is even, then we may distinguish between two cases. If √(q)∈(α), we know that u_1 ∈(α) so that in fact ±ζ = Norm_(α)/B(u_1) ∈ B and φ(m)≤ 2 implies that m ∈{2,4,6}. If √(q)∉(α), then ζ^2 is a primitive m/2-root of unity and m/2 ∈{1,2,3,4,6}.
§.§ Non-simple almost ordinary threefolds
Since X is not simple, we have that X∼ S× E for some surface
S and some elliptic curve E. For this section, we let
π_1, π_1, π_2, π_2 and
α, α be the Frobenius eigenvalues of S and
E respectively. The normalized eigenvalues will be denoted by
u_1 π_1/√(q), u_2 = π_2/√(q) and
u α/√(q).
Note that if X has a geometric supersingular factor, then by Honda–Tate theory it must have a supersingular factor over the base field; without loss of generality, we may assume that this factor is E.
Let X ∼ S× E be a non-simple almost ordinary abelian
threefold over _q. The Serre–Frobenius group of X can be read
from Flowchart <ref>. In particular, if X has no supersingular
factor, then δ_X = 3. If E is supersingular, then
δ_X ∈{1,2} and m_X = lcm(m_S, m_E).
The list of possible torsion orders m_X in this case is given by:
* δ_X = 1, d even: M(p,d) = {1,2,3,4,6,12}.
* δ_X=1, d odd: M(p,d) = {4, 12, 24}.
* δ_X=2: All possible orders in Table <ref>.
First, suppose that X has no supersingular factor. Thus E is
ordinary and S is almost ordinary and absolutely simple. This
implies that (π_1^r) and (α^r) are CM-fields of
degrees 4 and 2 respectively, for every positive integer r. In
particular,
#{π_1^r, π̄_1^r, π_2^r, π̄_2^r, α^r, ᾱ^r} = 6 for every r. Let m = m_X
and consider the base extension X_(m). Since X_(m) is not
simple, <cit.> implies that
X_(m) is neat. The eigenvalues of X_(m) are all distinct and
not supersingular, so that δ(X) = δ(X_(m)) = 3 by
Remark <ref>.
Now, suppose that X does have a supersingular factor, namely E. This implies that δ_X ≤ 2 since
u = α/√(q) = ζ_m_E is a root of unity. Since S is
ordinary in this case, we have that the sets {u_1, u} and {u_2, u} are multiplicatively independent, so that δ_X = 1 or 2 depending on the rank of the subgroup U_S ⊂ U_X. Similarly, we see that U_X[tors] = ⟨ζ_m_S, ζ_m_E⟩ and m_X = lcm(m_S, m_E). If S is simple, the result follows from
Lemma <ref>. If S is not simple, the result follows from
<ref>.
§.§ Abelian threefolds of K3-type
In this section X will be an abelian threefold defined over _q of p-rank 1. The q-Newton polygon of such a variety is given in Figure <ref>. This is the three-dimensional instance of abelian varieties of K3 type, which were studied by Zarhin in <cit.> and <cit.>.
An abelian variety A defined over _q is said to be of K3-type if the set of slopes is either {0,1} or {0,1/2,1}, and the segments of slope 0 and 1 have length one.
By <cit.>, simple abelian varieties of K3-type have maximal angle rank. As a corollary, we have another piece of the classification.
Let X be a simple abelian threefold over _q of p-rank 1. Then X has maximal angle rank and (X) ≅(1)^3.
Now assume that X is not simple, so that
X ∼ S × E for some surface S and elliptic curve E.
Let X ∼ S× E be a non-simple abelian threefold over _q of p-rank 1. The Serre–Frobenius group of X is given by Table <ref>.
As in Section
<ref>, we let
π_1, π̄_1, π_2, π̄_2 and α, ᾱ be the Frobenius eigenvalues of S and E respectively. Denote the normalized eigenvalues by u_1 ≔π_1/√(q), u_2 ≔π_2/√(q) and u ≔α/√(q). We consider three cases:
* S is simple and almost ordinary, and E is supersingular.
* S is non-simple and almost ordinary, and E is supersingular.
* S is supersingular and E is ordinary.
Suppose first that X is of type <ref>. By Lemma <ref>, the set {u_1,u_2} is multiplicatively independent. Since u is a root of unity, U_X = ⟨ u_1, u_2, u⟩ = U_S⊕ U_E ≅ℤ^2⊕ C_m for m ∈ M = {1,3,4,6,8,12}, the set of possible torsion orders for supersingular elliptic curves. Thus, (X) ≅(1)^2 × C_m in this case.
If X is of type <ref>, then S∼ E_1× E_2 with E_1 ordinary and E_2 supersingular. By Proposition <ref>, (X) ≅(1) × C_m, with m in the set of possible torsion orders of non-simple supersingular surfaces.
If X is of type <ref>, we have U_X = U_E ⊕ U_S ≅ℤ⊕ C_m for m in the set M = {1,2,3,4,5,6,8,10,12,24} of possible torsion orders of supersingular surfaces from Lemmas <ref> and <ref>.
§.§ Absolutely simple p-rank 0 threefolds
In this section, X will be a non-supersingular p-rank 0 abelian
threefold over _q. From the q-Newton polygon of the Frobenius polynomial P(T) = P_X(T) (see Figure
<ref>) we see that X is absolutely simple, since the
slope 1/3 does not occur for abelian varieties of smaller
dimension. Let e_r^2 denote the dimension of
End^0(X_(r)) over its center. We consider two cases:
* There exists r≥ 1 such that e_r = 3. In
this case we have P_(r)(T) = h_(r) (T)^3 and h_(r)(T) is
as in Theorem <ref>, so that 3 divides r·log_p(q).
* e_r = 1 for every positive integer r.
Let X be an absolutely simple abelian threefold of p-rank 0
defined over _q. Then, the Serre–Frobenius group of X is
classified according to Table <ref>. Furthermore, X
is of type <ref>, m_X is the smallest positive integer
r such that e_r = 3.
The techniques for proving the Generalized Lenstra–Zarhin result in <cit.> cannot be applied to this case. Thus, even the angle rank analysis in this case is particularly interesting.
Suppose first that X is of type <ref>, and let m be
the minimal positive integer such that e_m = 3. Maintaining previous notation,
P_(m)(T) = h_(m)(T)^3 implies that
α_2 = ζ·α_1 and α_3 = ξ·α_1
for primitive m-th roots of unity ζ and ξ. By Proposition
<ref>, this implies that
(X) ≅(1)× C_m. We conclude that δ_X = 1 and
m = m_X. To calculate the set M of possible torsion orders,
assume that m_X = m > 1. Then (α_1^m) is a quadratic
imaginary subextension of (α_1)⊃, and we can
argue as in the proof of Theorem <ref> (with
ℓ = 3) to conclude that m ∈{3,7}.
Assume now that X is of type <ref>. This implies
that (α_1^r) is a degree 6 CM-field for every positive
integer r. If m := m_X, the base extension X_(m) is neat and
the Frobenius eigenvalues are distinct and not supersingular. By
Remark <ref> we have that δ_X = 3 and m = 1.
[Histograms for X of type <ref>]
(m_X=1) The isogeny class https://www.lmfdb.org/Variety/Abelian/Fq/3/8/ag_bk_aea satisfies m_X=1. Note that 3 divides m_X·log_2(8).
(m_X=3) The isogeny class https://www.lmfdb.org/Variety/Abelian/Fq/3/2/a_a_ac has angle rank 1 and irreducible Frobenius polynomial P(T) = T^6 - 2T^3 + 8. The cubic base extension gives the isogeny class https://www.lmfdb.org/Variety/Abelian/Fq/3/8/ag_bk_aea with reducible Frobenius polynomial P_(3)(T) = (T^2 - 2T + 8)^3. Note that 3 divides m_X·log_2(2).
(m_X = 7) The isogeny class https://www.lmfdb.org/Variety/Abelian/Fq/3/8/ai_bk_aeq has angle rank 1 and irreducible Frobenius polynomial P(T) = T^6 - 8T^5 + 36T^4 - 120T^3 + 288T^2 - 512T + 512. Its base change over a degree m_X = 7 extension is the isogeny class with Frobenius polynomial
P_(7)(T) = (T^2 - 1664T + 2097152)^3.
In this example, q = 8, so that 3 divides m_X·log_2(8).
§.§ Simple supersingular threefolds
Nart and Ritzensthaler <cit.> showed that the only degree
6 supersingular q-Weil numbers are the conjugates of:
±√(q)ζ_7, ±√(q)ζ_9, when q is a square, and
7^d/2ζ_28, 3^d/2ζ_36, when q is not a square.
Building on their work, Haloui <cit.>
completed the classification of simple supersingular threefolds. This
classification is also discussed in <cit.>;
and we adapt their notation for the polynomials of Z-3 type. Denoting by
(a_1, a_2, a_3) the isogeny class of abelian threefolds over _q
with Frobenius polynomial
P_X(T) = T^6 + a_1T^5+ a_2T^4 + a_3T^3+ qa_2T^2 + q^2a_1T + q^3, the
following lemma gives the classification of Serre–Frobenius groups
of simple supersingular threefolds, which is a corollary of Haloui's result.
Let X be a simple supersingular abelian threefold defined over
_q. The Serre–Frobenius group of X is classified according to
Table <ref>.
By Xing's theorem <ref>, we know that the Frobenius
polynomial of all supersingular threefolds P_X(T) coincides with
the minimal polynomial h_X(T) and e=1 in every row of the table.
The first four rows of Table <ref>
correspond to isogeny classes of type (Z-1). By the discussion in
Section <ref>, the minimal polynomials are of the
form[Recall that f^[a](T) ≔ a^(deg f) f(T/a).]
Φ_m^[√(q)](T) and the normalized polynomials are just the
cyclotomic polynomials Φ_m(T).
The last four rows of Table <ref>
correspond to isogeny classes of type (Z-3). The normalized
Frobenius polynomials are
h_7,1(± T) = T^6 ±√(7)T^5 + 3T^4 ±√(7)T^3
+3T^2 ±√(7)T + 1, and
h_3,3(± T) = T^6 ±√(3)T^3 + 1. Noting that
h_7,1(T)h_7,1(-T) = Φ_28(T) and
h_3,3(T)h_3,3(-T) = Φ_36(T) we conclude that the unit
groups U_X are generated by ζ_28 and ζ_36
respectively.
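Both product identities at the end of the proof can be verified symbolically; a short sympy sketch (ours) is:

from sympy import symbols, sqrt, expand, cyclotomic_poly

T = symbols('T')
h71 = T**6 + sqrt(7)*T**5 + 3*T**4 + sqrt(7)*T**3 + 3*T**2 + sqrt(7)*T + 1
h33 = T**6 + sqrt(3)*T**3 + 1
print(expand(h71 * h71.subs(T, -T) - cyclotomic_poly(28, T)))   # 0
print(expand(h33 * h33.subs(T, -T) - cyclotomic_poly(36, T)))   # 0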
§.§ Non-simple supersingular threefolds
If X is a non-simple supersingular threefold over _q, then there are two cases:
* X ∼ S × E, with S a simple supersingular surface over _q and E a supersingular elliptic curve.
* X ∼ E_1 × E_2 × E_3, where each E_i is a supersingular elliptic curve.
The classification of the Serre–Frobenius group in these cases can be summarized in the following lemma.
If X is a non-simple supersingular threefold as in Case <ref>, then (X) ≅ C_m, for m ∈ M(p,d), where
* If d is even, M(p,d) = {3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30 },
* If d is odd, M(p,d) = {4, 8, 12, 20, 24}.
In this case, m = lcm(m_S, m_E), since this is the degree of the smallest extension over which the Serre–Frobenius group becomes connected. The list of values for m_E and m_S comes from Tables <ref> and <ref>.
If X is a non-simple supersingular threefold as in Case <ref>, then (X) ≅ C_m, for m ∈ M(p,d), where
* If d is even, M(p,d) = { 1,3,4,6,12},
* If d is odd, M(p,d) = { 4,8,12}.
By Proposition <ref>, m is the degree of the extension over which all the elliptic curve factors E_i become isogenous. This is precisely the least common multiple of the m_E_i's. From Table <ref>, we can calculate the various possibilities for the m_E_i's, and hence for m, depending on the parity of d.
§ SIMPLE ORDINARY ABELIAN VARIETIES OF ODD DIMENSION
We conclude this article with a corollary of Theorem <ref>.
Let g>2 be prime, and let A be a simple ordinary abelian variety of dimension g over _q that is not absolutely simple. Then A has angle rank 1 and
* A splits over a degree g extension and (A)/(A)^∘≅ C_g, or
* 2g+1 is prime, A splits over a degree 2g+1 extension and (A)/(A)^∘≅ C_2g+1.
The proof of this lemma is the same as the proof of Lemma <ref>, so we do not repeat it here. However, it would be interesting to have a more complete result for simple ordinary abelian varieties of prime dimension; that is, whether every ordinary absolutely simple abelian variety of prime dimension g > 3 has maximal angle rank. Tankeev <cit.> showed that the angle rank of any absolutely simple abelian variety of prime dimension lies in {1, g-1, g}. We also know from <cit.> that a necessary condition for δ_A = g is that the code is trivial. Furthermore, the answer is negative when the dimension is not prime (see <cit.>).
|
http://arxiv.org/abs/2306.07669v1
|
20230613102553
|
Rate-Splitting with Hybrid Messages: DoF Analysis of the Two-User MIMO Broadcast Channel with Imperfect CSIT
|
[
"Tong Zhang",
"Yufan Zhuang",
"Gaojie Chen",
"Shuai Wang",
"Bojie Lv",
"Rui Wang",
"Pei Xiao"
] |
cs.IT
|
[
"cs.IT",
"math.IT"
] |
Rate-Splitting with Hybrid Messages: DoF Analysis of the Two-User MIMO Broadcast Channel with Imperfect CSIT
Tong Zhang, Member, IEEE, Yufan Zhuang,
Gaojie Chen, Senior Member, IEEE,
Shuai Wang, Member, IEEE, Bojie Lv, Member, IEEE,
Rui Wang, Member, IEEE, and Pei Xiao, Senior Member, IEEE,
T. Zhang is with Department of Electronic Engineering, Jinan University, Guangzhou 510632, China ([email protected]).
Y. Zhuang, B. Lv, and R. Wang are with Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China ([email protected], {zhuangyf2019,lyubj}@mail.sustech.edu.cn).
G. Chen and P. Xiao are with 5GIC & 6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford GU2 7XH, UK ({gaojie.chen, p. xiao}@surrey.ac.uk).
Shuai Wang is with the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China (e-mail: [email protected]).
Corresponding author: G. Chen.
July 31, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Most of the existing research on degrees-of-freedom (DoF) with imperfect channel state information at the transmitter (CSIT) assumes that the messages are private, which may not reflect reality as the two receivers can request the same content. To overcome this limitation, we consider hybrid private and common messages. We characterize the optimal DoF region for the two-user multiple-input multiple-output (MIMO) broadcast channel with hybrid messages and imperfect CSIT. We establish a three-step procedure for the DoF converse to exploit the utmost possible relaxation. For the DoF achievability, since the DoF region has a specific three-dimensional structure w.r.t. antenna configurations and CSIT qualities, by dividing CSIT qualities into cases, we check the existence of corner point solutions, and then design a hybrid message-aware rate-splitting scheme to achieve them. Besides, we show that to achieve the strictly positive corner points, it is unnecessary to split the private messages into unicast and multicast parts because the allocated power for the multicast part should be zero. This implies that adding a common message can mitigate the rate-splitting complexity of private messages.
DoF region, hybrid messages, rate-splitting, imperfect CSIT, two-user MIMO broadcast channel
§ INTRODUCTION
The emergence of the upcoming sixth generation mobile communications (6G) will provide extremely reliable, ultra-fast, and ubiquitous wireless connectivity with significantly
elevated performance, as opposed to
those in existing communication standards and systems <cit.>. It is expected that 6G can achieve 50 times
higher peak data rate, 10 times reduced latency, and 100
times higher reliability than that of existing mobile communications systems. Typical 6G services are upgraded versions of enhanced mobile broadband (eMBB), ultra reliable low latency communications (URLLC), and massive machine type communications (mMTC). One of the main challenges of 6G is the technique of multiple-access, which should support massive receivers with high data rates and be resilient to errors of channel state information at the transmitter (CSIT). Conventional multiple-access techniques, e.g., orthogonal multiple-access (OMA) and non-orthogonal multiple-access (NOMA), cannot address this challenge for the following reasons: 1) OMA is very resource-consuming for supporting massive receivers; 2) both OMA and NOMA cannot be adaptive and robust to errors of CSIT, leading to degraded performance. In this regard, the rate-splitting multiple-access (RSMA) stands out as a viable solution, which not only can support massive receivers but also is adaptive and robust against errors of CSIT <cit.>. Specifically, it was found in <cit.> that NOMA is in fact a special case of RSMA. The foundations of RSMA stem from information-theoretic research, and the concept of RSMA then further moves to engineering practice with advanced wireless communication applications. Next, we will review the literature related to our problem.
§.§ Related Literature
The historical trajectory of RSMA from information-theoretic studies can be found in <cit.>. Dating back as early as 1981, the rate-splitting strategy was initially proposed for the Gaussian interference channel to establish a new class of capacity regions <cit.>, where the degrees-of-freedom (DoF) are elevated
by decoding part or all of the interference. In <cit.>, a rate-splitting-based strategy was shown to achieve the capacity region of
the deterministic interference channel within one bit. It was shown in <cit.> that any point in the capacity region of a Gaussian multiple-access channel is achievable by rate-splitting, i.e., each receiver “splits" data and signal into two parts. Further, the authors in <cit.> showed that enabling rate-splitting, the capacity region of slotted ALOHA multi-access systems is the same as the capacity region of multi-access with continuous transmission. Considering the fast time-varying channel, the transmitter can only have perfectly delayed and imperfectly current CSIT. Despite this challenging setting, rate-splitting strategies were shown to be DoF-optimal <cit.>. In particular, a rate-splitting scheme with delayed CSIT was first proposed in <cit.> for the two-user multi-input single-output (MISO) broadcast channel. Then, an improved scheme with DoF-optimality was designed in <cit.>. For the two-user MIMO broadcast channel, the DoF region was characterized by rate-splitting in <cit.>. It was shown in <cit.> that the rate-splitting can achieve the capacity region within a constant gap in the two-user case. The rate splitting design for secure communications was first proposed in <cit.> for the K-receiver MISO broadcast channel with imperfect CSIT. Then, the authors in <cit.> extended the idea in <cit.> to the two-user MIMO broadcast channel with imperfect CSIT. In the case when messages are not confidential, the achievable DoF regions was derived via rate splitting for the two receiver MIMO broadcast channel <cit.>. Recently, it was found in <cit.> that the achievable DoF region is the DoF region, showing the superiority of rate-splitting.
The applications of RSMA in advanced wireless communications can be found in <cit.>. In <cit.>, the integrated sensing and communication system was incorporated with rate-splitting, where the system performance
was enhanced and the system architecture was simplified since there was no need to use an additional radar sequence. The authors in <cit.> studied a joint design of intelligent reflecting surface (IRS) and rate-splitting, where the proposed framework outperformed the corresponding decode-and-forward rate-splitting, rate-splitting without
IRS and IRS-assisted conventional NOMA schemes. In <cit.>, the authors showed that in contrast
to conventional multi-receiver and massive MIMO systems, for which performance
collapses under mobility, rate-splitting can maintain reliable multi-receiver
connectivity with mobility. The authors in <cit.> studied the rate-splitting in satellite systems, where they revealed the superiority of rate-splitting-based multigroup multicast beamforming in both terrestrial and multibeam satellite systems. In addition, the rate-splitting was investigated in <cit.>, with the aim of mitigating the inevitable hardware impairments in realistic massive MISO broadcast channel. In <cit.>, the energy efficiency optimizations for both single-cell and multi-cell RSMA-based visible light communications were investigated. In <cit.>, the integration of RSMA with unmanned aerial vehicle base station was investigated. In <cit.>, a rate-splitting framework for multi-hop device-to-device (D2D) was proposed which enables two D2D links to share certain orthogonal radio resource blocks (RRBs) by forming device-clusters. The authors in <cit.> proposed a full-duplex cooperative rate-splitting (FD-CRS) scheme in a downlink two-group multicast system, in which the cell-center-receivers (CCUs) decode the multicast stream and their own private stream successively, then cooperatively form a distributed beamformer to assist the cell-edge-receivers (CEUs) in the multicast stream transmission. In the presence of untrusted receivers, the performance of rate-splitting considering outage probability and secrecy outage probability was analyzed in <cit.>. In <cit.>, a RSMA-assisted downlink transmission framework for cell-free massive MIMO was proposed to ameliorate the effect of pilot contamination in the downlink and achieve a performance gain over a conventional cell-free massive MIMO network. The authors of <cit.> designed rate-splitting precoders for an overloaded multicarrier multigroup multicast downlink system.
However, all the above works only considered private transmissions and overlooked the possibility that two receivers can request the same content, i.e., the impact of common messages. In <cit.> and <cit.>, the rate optimization problem for rate-splitting with hybrid messages, i.e., private and common messages, was considered. To date, the DoF region with imperfect CSIT has not yet been investigated with hybrid messages, even for the two-user MIMO broadcast channel. It is worth mentioning that the application scenarios or use cases of hybrid messages are diverse. For one example, when a popular immersive video is wirelessly streamed to several receivers simultaneously, in addition to private messages there can be a common message desired by more than one user <cit.>. For another example,
in multi-group multi-beam satellite systems, a common message (not split from private messages) can be available for different terrestrial users <cit.>.
§.§ Contributions
In this paper, to overcome the above limitation, we consider the hybrid messages for the two-user (M,N_1,N_2) MIMO broadcast channel with imperfect CSIT, where the transmitter has M antennas, the receiver Rx_k, k=1,2, has N_k antennas. Tight converse and achievability proofs for the DoF region are given. Compared with <cit.>, the impact of common message on rate-splitting of private messages is considered. Our main contributions are summarized as follows:
* We reveal all corner points for the DoF region of the two-user MIMO broadcast channel with hybrid messages and imperfect CSIT. Since the DoF region has a specific three-dimensional structure w.r.t. antenna configurations and CSIT qualities, existence verification of corner point solutions, especially the strictly positive corner points, is non-trivial. Despite this, by dividing the CSIT qualities into cases, we check the existence of corner point solutions by means of the characteristics of antenna configurations and CSIT qualities.
* We establish the converse for the DoF region of the two-user MIMO broadcast channel with hybrid messages and imperfect CSIT. In particular, we complete the converse proof by relaxing the decodability of receivers and thus enhancing the original channel. The converse proof has three steps, i.e., relaxation of receiver Rx_1, relaxation of receiver Rx_2, and union of them. We then show that this converse indeed exploits the utmost possible relaxation, since it matches the proposed achievability.
* We derive the achievability for the DoF region of the two-user MIMO broadcast channel with hybrid messages and imperfect CSIT. This achievability is given by showing corner points are achievable. To this end, we design a hybrid message-aware rate-splitting scheme with power allocation. Furthermore, we show that to achieve the strictly positive corner points, splitting the private messages into unicast and multicast parts is unnecessary, because the allocated power for the multicast part should be zero. This implies that adding a common message can mitigate the rate-splitting complexity of private messages.
§.§ Organizations & Notations
The remainder of this paper is organized as follows: We introduce our system model in Section-II. Then, we summarize and discuss our main results in Section-III. The achievability is given in Section-IV. The converse is provided in Section-V. Finally, we draw our conclusions in Section-VI.
The notation of this paper is given as follows: a, a, A denote a scalar, a vector, and a matrix, respectively. (·)^H , (·)^T, and (·)^⊥ respectively denote the Hermitian, transpose and the null space of a matrix or vector. (a)^+ stands for max(a,0). 𝔼{·} denotes the long-term expectation operator. The identity matrix with M dimensions is denoted by I_M. Furthermore, the definitions of specific symbols are summarized in Table I.
§ SYSTEM MODEL
We consider a two-user (M,N_1,N_2) MIMO broadcast channel with an M-antenna transmitter, denoted by Tx, a receiver with N_1 antennas denoted by Rx_1 and a receiver with N_2 antennas denoted by Rx_2, which is illustrated in Fig. 1. The transmitter has private messages W_1 and W_2 for receiver Rx_1 and Rx_2, respectively, and a common message W_0 for both receivers. Mathematically, the signal received at Rx_k, k=1,2 can be written as
𝐲_k=𝐇_k^H𝐬+𝐧_k,
where 𝐬 denotes the transmitted symbol, 𝐧_k ∼𝒞𝒩(0,𝐈_N_k) denotes the AWGN vector at Rx_k, 𝐇_k ∈ℂ^M × N_k denotes the channel matrix between Tx and Rx_k.
In this paper, we consider the case with perfect channel state information at receivers (CSIR) and imperfect CSIT. To be specific, at the receiver side, the channel gains can be easily obtained by sending a pilot sequence for channel estimation. That is, receiver Rx_k knows the precoders and channel matrices to decode the desired signal. At the transmitter, there exists imperfect CSIT resulting from channel estimation errors, quantization errors, prediction errors, etc. Let Ĥ_k denote the imperfect CSIT for the channel between Tx and Rx_k. Furthermore, α_k≥ 0 denotes the CSIT quality. Henceforth, we focus on 0≤α_k≤1, because α_k≥1 is equivalent to perfect CSIT where the interference will be dissolved in noise, and α_k=0 implies no CSIT where the interference has the same power level as the desired signal <cit.>.
Furthermore, we focus on the case with hybrid message, i.e., (W_1,W_2,W_0). The common message intended for both receivers is denoted by W_0. Generally, the encoding function for the transmitter is expressed as
𝐬=f(W_0,W_1,W_2,Ĥ_1,Ĥ_2)
where f is designed specifically in Section-IV for our problem. As for delayed CSIT, which we will discuss later on, assuming that the channel is time-varying, it is defined as CSIT that contains only the channel state information (CSI) of past time slots but not the CSI of the current time slot.
The decoding function at Rx_k, denoted by g(·), decodes (W_k,W_0) = g(y_k,H_1,H_2). Let R_0 denote the rate of the common message, and let R_k, k=1,2, denote the entire rate of the private message. A rate tuple is said to be achievable if there are a sequence of codebook pairs {ℬ_1,t,ℬ_2,t}_t=1^n and decoding functions {g_1,n,g_2,n} such that the error probabilities 𝒫_e^[n](W_i ≠Ŵ_i),∀ i go to zero when n goes to infinity. The capacity region, denoted by 𝒞(ρ), where ρ is defined as the signal-to-noise ratio (SNR), is the region of all such achievable rate tuples. The DoF region is defined as the pre-log factor of the capacity region as ρ→∞,
𝒟≜{
(d_1, d_2, d_0) ∈ℝ_+^3 |
(R_1(ρ), R_2(ρ),R_0(ρ)) ∈𝒞(ρ),
d_i = lim_ρ→∞R_i(ρ)/logρ, i = 0,1,2
},
where d_0, d_1, d_2 denote the DoF of the common message and of the private messages for receiver Rx_1 and receiver Rx_2, respectively.
§ MAIN RESULTS AND DISCUSSION
In this section, we summarize the mains results of this paper, including the DoF region and sum-DoF of the two-user (M,N_1,N_2) MIMO broadcast channel with hybrid messages and imperfect CSIT. We also discuss the implications of the main results.
For the two-user (M,N_1,N_2) MIMO broadcast channel with hybrid messages and imperfect CSIT, the DoF region, denoted by 𝒟, is given below.
𝒟 =
{
(d_1, d_2,
d_0)∈ℝ_+^3
| .
d_1+d_0 ≤min{M,N_1},
d_2+d_0 ≤min{M,N_2},
d_1+d_2+d_0 ≤min{M,N_2} + [min{M,N_1+N_2} - min{M,N_2}] α_0,
d_1+d_0/min{M,N_1} + d_2/min{M,N_2}≤
1 + min{M,N_1+N_2} - min{M,N_1}/min{M,N_2}α_1.
},
where α_0 is given in <cit.>.
The achievability proof is presented in Section-IV. Subsequently, the converse proof is given in Section-V.
The DoF region characterizes the interplay of private messages, common message, antenna configurations, and CSIT qualities, under limited signal space and shared spatial domain. As such, it can be seen that strictly positive corner points of the DoF region mostly capture the feature of hybrid private and common messages under imperfect CSIT. Interestingly, multiple strictly positive corner points can exist if some conditions hold. Furthermore, it can be seen from our proposed hybrid message-aware rate-splitting scheme that to achieve the strictly positive corner point of the DoF region, there is no need to split the private messages into private and common parts, as we will see later on.
According to Theorem 1, the sum-DoF of this (M,N_1,N_2) MIMO system, defined in Section-II, is given below
∑_i=0^2 d_i =
N_2+(M-N_2)α_2, 𝒜,
max{N_1 + (M-N_1)α_1, N_2+(M-N_2)(-N_2-N_1/M-N_1+N_2-N_1/M-N_1α_2+α_1) }, ℬ,
max{(M-N_2)^2α_1α_2/(M-N_2)α_2+(N_2-N_1)(1-α_1)+N_2, (M-N_1)α_1 + N_1}, 𝒞,
where 𝒜 = {α_1,α_2| N_2-N_1+(M-N_2)α_2/M-N_1≤α_1}, ℬ = {α_1,α_2| 1 - α_2 ≤α_1 < N_2-N_1+(M-N_2)α_2/M-N_1}, and 𝒞 = {α_1,α_2|α_1≤ 1 - α_2}.
The sum-DoF with N_1=N_2=N and different CSIT qualities is illustrated in Fig. <ref>, and compared with rate-splitting with unicast messages <cit.> and conventional ZF scheme with no rate-splitting <cit.>. It can be seen that the sum-DoF of hybrid messages is the same as that with private messages only, as opposed to the dimension elevation for the DoF region when common message is further considered. This is because, adding common message does not create extra spatial and signal space. Furthermore, it shows that except perfect CSIT (i.e., α = 1) and M ≥ 2N, rate-splitting achieves a higher sum-DoF than that by conventional zero-forcing (ZF) scheme with no rate-splitting <cit.>.
For the two-user (M,N_1,N_2) MIMO broadcast channel with delayed[The CSIT does not reflect the current CSI but does match with the past CSI. Please refer to <cit.> for more information.] and imperfect CSIT, and hybrid messages, the DoF region, denoted by 𝒢, is given below
𝒢 =
{
(d_1,d_2,d_0)
∈ℝ_+^3
|
d_1/min{N_1+α_2N_2,M}
+ d_2+d_0/min{N_2,M}≤ 1
d_1+d_0/min{N_1,M} + d_2/min{N_2+α_1N_1,M}≤ 1
. }.
The converse proof of Theorem 2 is similar to that in proving Theorem 1 via leveraging the converse proof in <cit.>. For the achievability proof of Theorem 2, it can be seen that only one corner point is off-coordinate, given by,
𝒫_12 = (min{N_1+α_2N_2,M}min{N_1,M}(min{N_2+α_1N_1,M}-min{N_2,M})/min{N_1+α_2N_2,M}min{N_2+α_1N_1,M}-min{N_2,M}min{N_1,M}.,
.min{N_2+α_1N_1,M}min{N_2,M}(min{N_1+α_2N_2,M}-min{N_1,M})/min{N_1+α_2N_2,M}min{N_2+α_1N_1,M}-min{N_2,M}min{N_1,M}, 0).
The achievability of this corner point is given in <cit.>. For corner points on the coordinate, i.e., 𝒫_1 = (min{M,N_1},0,0),
𝒫_2 = (0,min{M,N_2},0),
𝒫_0 = (0,0,min{N_1,N_2}), they can be achieved by interference-free transmission, i.e., time division multiple access (TDMA).
It is worth mentioning that delayed and imperfect CSIT may occur when the CSI feedback is lagging behind the variations of the channel, and the feedback CSI has errors. We compare our DoF region with imperfect CSIT in (<ref>) to the DoF region with delayed and imperfect CSIT in (<ref>) in Fig. 3 for four parameter settings. Fig. 3 shows that the DoF region of delayed and imperfect CSIT is contained in the DoF region of imperfect CSIT.
§ ACHIEVABILIY PROOF OF THEOREM 1: PROPOSED SYSTEM AND SCHEME DESIGN
In this section, we first present the hybrid message-aware rate-splitting system, and then a unified hybrid message-aware rate-splitting scheme. Finally, we provide an example of the scheme to help readers better grasp the design principle and insights.
§.§ Proposed Hybrid Message-Aware Rate-Splitting System
In this subsection, to achieve the target DoF region, we therefore propose a hybrid message-aware rate-splitting system as a precondition.
This system adopts ZF and rate-splitting to construct the transmission procedure. In particular, 𝐰_k ∈ℂ^M × 1 denotes the ZF precoder that is a unit norm vector in the null space of Ĥ_k. Then, if the transmitted signal is ZF-precoded, the strength of the residual interference at the unintended receiver can be written as |𝐡_k,i^H𝐰_k|^2, where 𝐡_k,i denotes the i^th column of 𝐇_k. Note that 𝔼{|𝐡_k,i^H𝐰_k|^2}∼ P^-α_k.
Also, the rate-splitting technique is adopted. Specifically, the private messages for a particular receiver can be split into a unicast part and a multicast part. To be specific, the message W_k intended for Rx_k is split into a unicast part W_p,k decoded by Rx_k, k=1,2, and a multicast part W_c drawn from a shared codebook and decoded by both receivers. Then, the multicast part W_c is combined with the common message W_0 recasting as a composite multicast symbol.
The decoding procedure is given as follows. The composite multicast symbol is first decoded. Thereafter, the successive interference cancellation (SIC) is used to cancel the interference aroused by composite multicast symbol. Finally, the unicast symbol is decoded separately at each receiver.
§.§ A Unified Hybrid Message-Aware Rate-Splitting Scheme
In this subsection, based on the proposed system, we present a unified hybrid message-aware rate-splitting scheme. Without loss of generality, we consider M ≥ N_2 ≥ N_1. Furthermore, we assume M≤ N_1+N_2 since for other cases the DoF region can be achieved by turning off redundant transmit or receive antennas. In this case, the DoF region is simplified to
𝒟 =
{
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ N_1,
ℓ_2: d_2+d_0 ≤ N_2,
ℓ_3: d_1+d_2+d_0 ≤ N_2+(M-N_2)α_0,
ℓ_4: (d_1+d_0)/N_1 + d_2/N_2≤ 1+(M-N_1)α_1/N_2,
ℓ_5: d_1/N_1 + (d_2+d_0)/N_2≤ 1 + (M-N_1)α_1/N_2
},
where the value of α_0 is critical and given by
α_0=
α_2, if N_2-N_1+(M-N_2)α_2≤ (M-N_1)α_1,
α_2-(N_2-N_1+(M-N_2)α_2-(M-N_1)α_1)/(M-N_1), if α_1 ≥ 1-α_2,
α_1α_2(M-N_2)/((N_2-N_1)(1-α_1)+(M-N_2)α_2), if α_1 ≤ 1-α_2.
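For concreteness, the case analysis defining α_0 above can be transcribed into a few lines of Python. The sketch below is ours; it assumes N_1 ≤ N_2 ≤ M ≤ N_1+N_2 with M > N_1 and 0 ≤α_1,α_2 ≤ 1 as in the text, and the sample numbers are arbitrary.

def alpha0(M, N1, N2, a1, a2):
    # the three cases above, checked in order
    if N2 - N1 + (M - N2) * a2 <= (M - N1) * a1:
        return a2
    if a1 >= 1 - a2:
        return a2 - (N2 - N1 + (M - N2) * a2 - (M - N1) * a1) / (M - N1)
    return a1 * a2 * (M - N2) / ((N2 - N1) * (1 - a1) + (M - N2) * a2)

# the weighted-sum constraint l_3 then reads d1 + d2 + d0 <= N2 + (M - N2) * alpha0(...)
print(alpha0(M=4, N1=2, N2=3, a1=0.5, a2=0.5))   # 0.25 for this arbitrary configuration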
Henceforth, we constitute the rate-splitting transmission block for this two-user (M,N_1,N_2) MIMO broadcast channel with hybrid messages and imperfect CSIT as follows:
* M-N_2 unicast symbols, denoted by 𝐮_1 ∈ℂ^(M-N_2) × 1, are sent to Rx_1 along a ZF-precoder 𝐕_1=Ĥ_2^⊥∈ℂ^M × (M-N_2) with power exponent A_1;
* M-N_1 unicast symbols, denoted by 𝐮_2^(1)∈ℂ^(M-N_1) × 1, are sent to Rx_2 along a ZF-precoder 𝐕_2^(1)=Ĥ_1^⊥∈ℂ^M × (M-N_1) with power exponent A_2;
* N_1+N_2-M unicast symbols, denoted by 𝐮_2^(2), are sent to Rx_2 along a precoder 𝐕_2^(2)∈ℂ^M × (N_1+N_2-M) in the subspace spanned by Ĥ_2 with power exponent (A_2-α_1)^+;
* A composite multicast symbol, denoted by (𝐜+𝐮_0) ∈ℂ^M × 1, is multicast using the remaining power.
Moreover, the power exponents A_1 and A_2 are defined as A_1∈[0,α_2] and A_2∈[0,1]. Mathematically, the transmitted and received signals are written as
𝐬=𝐜+𝐮_0_P - P^A_1 - P^A_2 - P^(A_2-α_1)^++𝐯_1𝐮_1_P^A_1+𝐕_2^(1)𝐮_2^(1)_P^A_2+𝐯_2^(2)𝐮_2^(2)_P^(A_2-α_1)^+,
𝐲_1=𝐇_1^H(𝐜+𝐮_0)_P + 𝐇_1^H𝐯_1𝐮_1_P^A_1+𝐇_1^H(𝐕_2^(1)𝐮_2^(1)+𝐯_2^(2)𝐮_2^(2))_P^(A_2-α_1)^+,
𝐲_2=𝐇_2^H(𝐜+𝐮_0)_P + 𝐇_2^H𝐯_1𝐮_1_P^A_1-α_2+𝐇_2^H𝐕_2^(1)𝐮_2^(1)_P^A_2+𝐇_2^H𝐯_2^(2)𝐮_2^(2)_P^(A_2-α_1)^+.
As we can see from the received signals, if A_2 ≤α_1, the undesired unicast symbols are dissolved into the noise. If A_2 > α_1, the designed power allocation policy will ensure that all the three unicast symbols intended for receiver Rx_2 are received by receiver Rx_1 with the same power level. Similar to <cit.>, for (8b) and (8c), using the proof in <cit.> and considering that each receiver decodes the multicast part split from private messages and the common message successively, the following DoF tuple is achievable.
At receiver Rx_1, we have
d_0+d_c≤ d_c^(1)≜ N_1 - (M-N_2)max{A_1, A_2-α_1} - (N_1+N_2-M)(A_2-α_1)^+,
d_p1=(M-N_2)(A_1-(A_2-α_1)^+)^+,
where d_c denotes the DoF for W_c, and d_p1 denotes the DoF for W_p,1.
At receiver Rx_2, we have
d_0+d_c≤ d_c^(2)≜ N_2 - (M-N_1)A_2 - (N_1+N_2-M) (A_2
-α_1)^+,
d_p 2=(M-N_1)A_2 + (N_1+N_2-M)(A_2-α_1)^+,
where d_p2 denotes the DoF for W_p,2.
Accordingly, the achievable sum-DoF is defined as
d_s(A_1,A_2)≜min{d_s^(1)(A_2),d_s^(2)(A_1,A_2)},
where
d_s^(1)(A_2)=N_1 + (M-N_1)A_2 - (M-N_2)(A_2-α_1)^+,
d_s^(2)(A_1, A_2) = N_2 + (M-N_2)(A_1-(A_2-α_1)^+)^+,
which are obtained by summing up (<ref>), (<ref>), (<ref>) and (<ref>), (<ref>), (<ref>). In what follows, we analyze corner points of the DoF region and present the power allocation policy for A_1,A_2 case by case so that the DoF region in (7) is achieved.
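As a sanity check on the power allocation, the achievable sum-DoF d_s(A_1,A_2) above can be evaluated numerically. The following Python sketch is ours (the function names and the sample configuration are arbitrary); it searches a grid over A_1 ∈ [0,α_2] and A_2 ∈ [0,1] for the maximizing exponents.

def sum_dof(M, N1, N2, a1, a2, A1, A2):
    pos = lambda x: max(x, 0.0)
    ds1 = N1 + (M - N1) * A2 - (M - N2) * pos(A2 - a1)        # d_s^(1)(A2)
    ds2 = N2 + (M - N2) * pos(A1 - pos(A2 - a1))              # d_s^(2)(A1, A2)
    return min(ds1, ds2)

def best_exponents(M, N1, N2, a1, a2, steps=200):
    best = (-1.0, None, None)
    for i in range(steps + 1):
        A1 = a2 * i / steps                                   # A1 in [0, alpha_2]
        for j in range(steps + 1):
            A2 = j / steps                                    # A2 in [0, 1]
            d = sum_dof(M, N1, N2, a1, a2, A1, A2)
            if d > best[0]:
                best = (d, A1, A2)
    return best                                               # (sum-DoF, A1*, A2*)

print(best_exponents(M=4, N1=2, N2=3, a1=0.6, a2=0.5))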
§.§.§ GENERAL CASE-1 (If N_2-N_1+(M-N_2)α_2/M-N_1≤α_1)
The DoF region in (7) is given below
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ N_1,
ℓ_2: d_2+d_0 ≤ N_2,
ℓ_3: d_1+d_2+d_0 ≤ N_2+(M-N_2)α_2,
ℓ_4: (d_1+d_0)/N_1 + d_2/N_2≤ 1+(M-N_1)α_1/N_2.
}, .
which is illustrated in Fig. <ref>. The corner points on the coordinate are trivial and given by 𝒫_1 = (N_1, 0, 0), 𝒫_2=(0,N_2,0), 𝒫_0 = (0,0,N_1). The below proposition reveals the off-coordinate corner points of the DoF region.
The off-coordinate corner points of the DoF region in GENERAL CASE-1 are given in the following. The strictly positive corner points are
* 𝒫_123 = ((M-N_2)α_2,N_2-N_1+(M-N_2)α_2,N_1-(M-N_2)α_2), 𝒫_13 = (N_1,N_2-N_1+(M-N_2)α_2,0), 𝒫_23 = ((M-N_2)α_2,N_2,0).
The other corner points are 𝒫_12 = (0,N_2-N_1,N_1),
where 𝒫_123 denotes the intersection of ℓ_1, ℓ_2, and ℓ_3; 𝒫_13 denotes the intersection of ℓ_1 and ℓ_3; 𝒫_23 denotes the intersection of ℓ_2 and ℓ_3; and 𝒫_12 denotes the intersection of ℓ_1 and ℓ_2.
Please refer to Appendix A.
It can be seen that 𝒫_123 is a strictly positive corner point and does not appear in the DoF region with private messages only.
Achieving the DoF region is equivalent to achieving corner points. We then show that corner points are achievable.
To achieve corner points 𝒫_123,𝒫_23 and 𝒫_13, we need to derive the optimal power exponents A_1^* and A_2^*. As the sum-DoF is not changed by adding the common message, according to <cit.>, (A_1^*, A_2^*) ≜ arg max_A_1,A_2 d_s(A_1, A_2) are given by
A_1^*=α_2,
A_2^*= max{N_2-N_1+(M-N_2)α_2/M-N_1,1-M-N_2/N_2-N_1α_1}.
The solutions to achieve corner points 𝒫_13 and 𝒫_23 are given in <cit.> with an additional dimension d_0 set to zero. To achieve the strictly positive corner point 𝒫_123, we consider no multicast part split from private messages, i.e., d_c=0, leading to d_c+d_0=d_0. Furthermore, each of corner points 𝒫_123, 𝒫_23 and 𝒫_13 achieves the sum-DoF N_2+(M-N_2)α_2.
To achieve corner point 𝒫_12, substituting A_1=0 and A_2=(N_2-N_1)/(M-N_1) into (<ref>), (<ref>), (<ref>), and (<ref>) yields d_p1=0, d_p2=N_2-N_1, d_c+d_0=N_1. If there is no multicast part split from private messages, i.e., d_c=0, the corner point 𝒫_12=(0,N_2-N_1,N_1), with sum-DoF N_2, is achieved.
§.§.§ GENERAL CASE-2 (If max{1-α_2,N_2-N_1/M-N_1}≤α_1≤N_2-N_1+(M-N_2)α_2/M-N_1)
The DoF region in (7) is given below
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ N_1,
ℓ_2: d_2+d_0 ≤ N_2,
ℓ_3: d_1+d_2+d_0 ≤ N_2+(M-N_2)(-N_2-N_1/M-N_1+N_2-N_1/M-N_1α_2+α_1),
ℓ_4: (d_1+d_0)/N_1 + d_2/N_2≤ 1+(M-N_1)/N_2α_1
},
which is illustrated in Fig. <ref>. Corner points on the coordinate axes are trivial and given by 𝒫_1 = (N_1, 0, 0), 𝒫_2=(0,N_2,0), 𝒫_0 = (0,0,N_1). The proposition below gives the off-coordinate corner points of the DoF region.
The off-coordinate corner points of the DoF region in GENERAL CASE-2 are given in the following. The strictly positive corner points are
* 𝒫_234=
((M-N_2)α_1+ (M-N_2)(N_2-N_1)/M-N_1α_2-(M-N_2)(N_2-N_1)/M-N_1, -(N_1+N_2-M)α_1+ N_2(M-N_2)/M-N_1α_2 +N_2(N_2-N_1)/M-N_1, (N_1+N_2-M)α_1-(M-N_2)(N_2-N_1)/M-N_1α_2+N_2(N_2-N_1)/M-N_1)
* 𝒫_124=((M-N_1)α_1-(N_2-N_1),(M-N_1)α_1,N_2-(M-N_1)α_1)
Other corner points are 𝒫_34=(N_1α_1-N_1(M-N_2)/M-N_1α_2+N_1(M-N_2)/M-N_1, -(N_1 +N_2-M)α_1 + N_2(M-N_2)/M-N_1α_2 +N_2(N_2-N_1)/M-N_1,0), 𝒫_23=((M-N_2)α_1+(M-N_2)(N_2-N_1)/M-N_1α_2-(M-N_2)(N_2-N_1)/M-N_1 ,N_2,0),
𝒫_14=(N_1,(M-N_1)α_1,0),
𝒫_12=(0,N_2-N_1,N_1),
where 𝒫_234 denotes the intersection of ℓ_2, ℓ_3, and ℓ_4; 𝒫_124 denotes the intersection of ℓ_1, ℓ_2, and ℓ_4; 𝒫_34 denotes the intersection of ℓ_3 and ℓ_4; 𝒫_23 denotes the intersection of ℓ_2 and ℓ_3; 𝒫_14 denotes the intersection of ℓ_1 and ℓ_4; and 𝒫_12 denotes the intersection of ℓ_1 and ℓ_2.
Please refer to Appendix A.
It can be seen that 𝒫_234 and 𝒫_124 are strictly positive corner points and do not appear in the DoF region with private messages only.
Achieving the DoF region is equivalent to achieving corner points. We then show that corner points are achievable.
To achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23, the optimal power exponents are given by (A_1^*, A_2^*) ≜ arg max d_s(A_1, A_2) as in (<ref>). The solutions to achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23 are the same as the solutions to achieve corner points 𝒫_123, 𝒫_13 and 𝒫_23 in GENERAL CASE-1. Note that, in this case, each of the corner points 𝒫_234, 𝒫_34 and 𝒫_23 achieves the sum-DoF (M-N_2)α_1+(M-N_2)(N_2-N_1)/M-N_1(α_2-1)+N_2.
To achieve corner points 𝒫_124 and 𝒫_14, we substitute A_1=(M-N_1)α_1-(N_2-N_1)/M-N_2 and A_2=α_1 into (<ref>), (<ref>), (<ref>), and (<ref>), yielding d_p1=(M-N_1)α_1-(N_2-N_1), d_p2=(M-N_1)α_1 and d_c+d_0=N_2-(M-N_1)α_1. If there is no multicast part split from private messages, i.e., d_c=0, the strictly positive corner point 𝒫_124 is achieved. If there is no common message, i.e., d_0=0, and the multicast part split from private messages is only for receiver Rx_1, corner point 𝒫_14=(N_1,(M-N_1)α_1,0) is achieved. Note that both corner points 𝒫_124 and 𝒫_14 achieve the sum-DoF N_1+(M-N_1)α_1.
Achieving corner point 𝒫_12 follows the same design as that in achieving corner point 𝒫_12 in GENERAL CASE-1.
§.§.§ GENERAL CASE-3 (If 1-α_2≤α_1≤N_2-N_1/M-N_1)
The DoF region of this case is the same as that in GENERAL CASE-2 except a different shape due to different CSIT qualities, where this DoF region is illustrated in Fig. <ref>.
Thereby, corner points on the coordinate axes are trivial and given by 𝒫_1 = (N_1, 0, 0), 𝒫_2=(0,N_2,0), 𝒫_0 = (0,0,N_1). The proposition below gives the off-coordinate corner points of the DoF region.
The off-coordinate corner points of the DoF region in GENERAL CASE-3 are given by two sets. One set: corner points 𝒫_14'=(0,(M-N_1)α_1,N_1), and
𝒫_24=(0,N_2-(M-N_1)N_1/N_2-N_1α_1,(M-N_1)N_1/N_2-N_1α_1). The other set: corner points 𝒫_234, 𝒫_34, 𝒫_23 and 𝒫_14, which are the same as that in GENERAL CASE-2. In the two sets above, 𝒫_234 denotes the intersection of ℓ_2, ℓ_3, and ℓ_4;
𝒫_34 denotes the intersection of ℓ_3 and ℓ_4; 𝒫_23 denotes the intersection of ℓ_2 and ℓ_3; 𝒫_14 and 𝒫_14' denote the intersections of ℓ_1 and ℓ_4; 𝒫_24 denotes the intersection of ℓ_2 and ℓ_4.
For d_1 = 0, it turns out that 𝒟 = {(d_2,d_0)∈ℝ_+^2|d_0≤ N_1, d_2+d_0 ≤ N_2, d_0/N_1 + d_2/N_2≤ 1 + M-N_1/N_2α_1 }. The off-coordinate corner points are given by 𝒫_14'= (0,(M-N_1)α_1,N_1) and
𝒫_24=(0,N_2-(M-N_1)N_1/N_2-N_1α_1,(M-N_1)N_1/N_2-N_1α_1). The derivations of remaining corner points are the same as those in Proposition 2.
Achieving the DoF region is equivalent to achieving corner points. We then show that the above corner points are achievable.
To achieve corner point 𝒫_14', we substitute A_1=0 and A_2=1-α_1 into (<ref>), (<ref>), (<ref>), and (<ref>), which yields d_p1=0, d_p2=(M-N_1)α_1 and d_c+d_0=N_1. If there is no multicast part split from private messages, i.e., d_c=0, corner point 𝒫_14' with sum-DoF N_1+(M-N_1)α_1 is achieved.
To achieve corner point 𝒫_24, we substitute A_1=0 and A_2=1-M-N_2/N_2-N_1α_1 into (<ref>), (<ref>), (<ref>), and (<ref>), which yields d_p1=0, d_p2=N_2-(M-N_1)N_1/N_2-N_1α_1, and d_c+d_0=(M-N_1)N_1/N_2-N_1α_1. If there is no multicast part split from private messages, i.e., d_c=0, corner point 𝒫_24 with sum-DoF N_2 is achieved.
§.§.§ GENERAL CASE-4 (If α_1≤ 1-α_2)
The DoF region in (7) is given below
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ N_1,
ℓ_2: d_2+d_0 ≤ N_2,
ℓ_3: d_1+d_2+d_0 ≤ N_2+α_1α_2(M-N_2)^2/((N_2-N_1)(1-α_1)+(M-N_2)α_2),
ℓ_4: (d_1+d_0)/N_1 + d_2/N_2≤ 1+(M-N_1)/N_2α_1
},
which is illustrated in Fig. <ref>. The corner points on the coordinate axes are trivial and given by 𝒫_1 = (N_1, 0, 0), 𝒫_2=(0,N_2,0), 𝒫_0 = (0,0,N_1). The proposition below gives the off-coordinate corner points of the DoF region.
The off-coordinate corner points of the DoF region in GENERAL CASE-4 are given by two sets. In particular, one set includes the following strictly positive corner points
* 𝒫_234=1/δ((M-N_2)^2α_1α_2, (M-N_1)N_1α_1^2+( (M-N_2)(M-N_1-N_2)α_2 + N_1^2 - N_2^2 - MN_1 + N_1N_2)α_1 + (M-N_2)N_2α_2 + N_2^2 - N_1N_2, ((M-N_2)(N_1+N_2-M)α_2 + (M-N_1)N_1(1-α_1))α_1) exists if α_1 ≤N_2-N_1/M-N_1 + M-N_2/M-N_1α_2.
* 𝒫_123 = ((M-N_2)^2α_1α_2/δ,N_2-N_1 + (M-N_2)^2α_1α_2/δ, N_1 - (M-N_2)^2α_1α_2/δ) exists if (M-N_2)^2α_1α_2/δ≤min{(M-N_1)α_1 - (N_2-N_1), N_1}.
* 𝒫_124 = ((M-N_1)α_1 - (N_2 - N_1), (M-N_1)α_1, N_2 - (M-N_1)α_1) exists if (M-N_1)α_1 - (N_2-N_1) ≤(M-N_2)^2α_1α_2/δ and N_2-N_1/M-N_1≤α_1 ≤N_2/M-N_1.
The other corner points are
𝒫_34= 1/δ(N_1α_1(M-N_1-(M-N_1)α_1 + (M-N_2)α_2), (M-N_1)N_1α_1^2+( (M-N_2)(M-N_1-N_2)α_2 + N_1^2 - N_2^2 - MN_1 + N_1N_2)α_1 + (M-N_2)N_2α_2 + N_2^2 - N_1N_2, 0),
𝒫_23=((M-N_2)^2α_1α_2/δ, N_2, 0),
where δ = (M-N_2)α_2 + (N_2 - N_1)(1-α_1).
The other set includes 𝒫_14, 𝒫_14', 𝒫_24, which are the same as that in GENERAL CASE-3. 𝒫_234 denotes the intersection of ℓ_2, ℓ_3, and ℓ_4; 𝒫_123 denotes the intersection of ℓ_1, ℓ_2, and ℓ_3; 𝒫_124 denotes the intersection of ℓ_1, ℓ_2, and ℓ_4;
𝒫_34 denotes the intersection of ℓ_3 and ℓ_4; 𝒫_23 denotes the intersection of ℓ_2 and ℓ_3; 𝒫_14 and 𝒫_14' denote the intersections of ℓ_1 and ℓ_4; 𝒫_24 denotes the intersection of ℓ_2 and ℓ_4.
Please refer to Appendix A.
It can be seen that 𝒫_234, 𝒫_123, and 𝒫_124 are strictly positive corner points under the stated conditions and do not appear in the DoF region with private messages only.
Achieving the DoF region is equivalent to achieving corner points. We then show that corner points are achievable.
In this case, the space-time transmission scheme provided by <cit.> is able to further increase the sum-DoF. Suppose there are in total T time slots during the transmission, and T is sufficiently large. The power exponents in time slot l, denoted by (A_1,l,A_2,l), are given by
(A_1,l,A_2,l)=
(α_2,1), l = 1,...,ρ T,
(α_2,α_1), l = ρ T + 1,...,T,
where ρ∈ [0,1]. The optimal ρ is given by <cit.> as
ρ^*=(N_2-N_1-(M-N_1)α_1+(M-N_2)α_2)/((N_2-N_1)(1-α_1)+(M-N_2)α_2).
To achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23, we apply the space-time transmission scheme. The solutions to achieve corner points 𝒫_34 and 𝒫_23 are given in <cit.> with an additional dimension d_0 set to zero. To achieve the strictly positive corner point 𝒫_234, we consider there is no multicast part split from private messages, i.e., d_c=0, then 𝒫_234 can be achieved. Note that each of these three corner points 𝒫_234, 𝒫_34, and 𝒫_23 achieves the sum-DoF (M-N_2)^2α_1α_2/((M-N_2)α_2+(N_2-N_1)(1-α_1))+N_2.
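As a quick cross-check (a sketch of my own, not part of the derivation), the general ρ^* above can be verified to reduce to the expression (1-2α_1+α_2)/(1+α_2-α_1) used for the (4,2,3) example channel considered next.

```python
# Verify numerically that the general rho* specializes to the (4,2,3) expression.
def rho_star(M, N1, N2, a1, a2):
    num = N2 - N1 - (M - N1) * a1 + (M - N2) * a2
    den = (N2 - N1) * (1 - a1) + (M - N2) * a2
    return num / den

a1, a2 = 0.3, 0.6                          # hypothetical CASE-4 qualities (a1 <= 1 - a2)
general = rho_star(4, 2, 3, a1, a2)
special = (1 - 2 * a1 + a2) / (1 + a2 - a1)
print(general, special)                    # both 0.769..., and rho* lies in [0, 1]
```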
§.§ An Example in the Two-User (4, 2, 3) MIMO Broadcast Channel
In this subsection, to facilitate better understanding, we consider a special case of a MIMO system with a 4-antenna transmitter and two receivers with 2 and 3 antennas, respectively. The block diagram of this (4,2,3) hybrid message MIMO system is illustrated in Fig. <ref>, where the main difference from conventional rate-splitting system with private messages only is highlighted.
We constitute the rate-splitting transmission block for the two-user (4, 2, 3) MIMO broadcast channel with hybrid messages and imperfect CSIT as follows:
* One unicast symbol, denoted by u_1, is sent to Rx_1 along a ZF-precoder 𝐯_1=Ĥ^⊥_2 ∈ℂ^4 × 1 with power exponent A_1;
* Two unicast symbols, denoted by 𝐮_2^(1)∈ℂ^2 × 1, are sent to Rx_2 along a ZF-precoder 𝐕_2^(1)=Ĥ_1^⊥∈ℂ^4 × 2 with power exponent A_2;
* One unicast symbol, denoted by u_2^(2), is sent to Rx_2 along a precoder 𝐯_2^(2)∈ℂ^4 × 1 in the subspace spanned by Ĥ_2. Its power exponent is (A_2-α_1)^+;
* A composite multicast symbol, denoted by (𝐜+𝐮_0)∈ℂ^4 × 1, is multicast using the remaining power, where 𝐜 denotes the multicast symbols split from private messages and 𝐮_0 denotes the common message symbol to be transmitted.
Recall that the power exponents A_1 and A_2 are restricted to A_1∈[0,α_2] and A_2∈[0,1]. Mathematically, the transmitted and received signals can be written as
𝐬=𝐜+𝐮_0_P - P^A_1 - P^A_2 - P^(A_2-α_1)^++𝐯_1 u_1_P^A_1+𝐕_2^(1)𝐮_2^(1)_P^A_2+𝐯_2^(2) u_2^(2)_P^(A_2-α_1)^+,
𝐲_1=𝐇_1^H(𝐜+𝐮_0)_P + 𝐇_1^H𝐯_1 u_1_P^A_1+𝐇_1^H(𝐕_2^(1)𝐮_2^(1)+𝐯_2^(2) u_2^(2))_P^(A_2-α_1)^+,
𝐲_2=𝐇_2^H(𝐜+𝐮_0)_P + 𝐇_2^H𝐯_1 u_1_P^A_1-α_2+𝐇_2^H𝐕_2^(1)𝐮_2^(1)_P^A_2+𝐇_2^H𝐯_2^(2) u_2^(2)_P^(A_2-α_1)^+ .
Similar to <cit.>, for (18b) and (18c), using the proof in <cit.> and considering that each receiver decodes the multicast part split from private messages and the common message successively, the following DoF tuple is achievable.
At receiver Rx_1, we have
d_0+d_c≤ d_c^(1)≜ 2-max{A_1, A_2-α_1}-(A_2-α_1)^+ ,
d_p1=(A_1-(A_2-α_1)^+)^+.
At receiver Rx_2, we have
d_0+d_c≤ d_c^(2)≜ 3-2 A_2-(A_2-α_1)^+,
d_p 2=2 A_2+(A_2-α_1)^+.
Accordingly, the achievable sum-DoF is defined as
d_s(A_1,A_2)≜min{d_s^(1)(A_2),d_s^(2)(A_1,A_2)},
where
d_s^(1)(A_2)=2+2 A_2-(A_2-α_1)^+,
d_s^(2)(A_1, A_2)=3+(A_1-(A_2-α_1)^+)^+,
which are obtained by summing up (<ref>), (<ref>), (<ref>), and (<ref>). In the following, we present the power allocation policy case by case so that the DoF region can be achieved.
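Before going case by case, the following sketch (my own check; the grid and helper names are illustrative) confirms that the (4,2,3) expressions above are exactly the M=4, N_1=2, N_2=3 specialization of the general sum-DoF formulas.

```python
# Compare the general and (4,2,3) sum-DoF formulas on a grid of power exponents.
import itertools

def pos(x):
    return max(x, 0.0)

def ds_general(M, N1, N2, a1, A1, A2):
    ds1 = N1 + (M - N1) * A2 - (M - N2) * pos(A2 - a1)
    ds2 = N2 + (M - N2) * pos(A1 - pos(A2 - a1))
    return min(ds1, ds2)

def ds_423(a1, A1, A2):
    return min(2 + 2 * A2 - pos(A2 - a1), 3 + pos(A1 - pos(A2 - a1)))

grid = [i / 10 for i in range(11)]
assert all(abs(ds_general(4, 2, 3, a1, A1, A2) - ds_423(a1, A1, A2)) < 1e-12
           for a1, A1, A2 in itertools.product(grid, grid, grid))
print("(4,2,3) formulas match the general ones on the grid")
```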
§.§.§ CASE-1 (If 1+α_2/2≤α_1)
Via GENERAL CASE-1, the DoF region of this (4, 2, 3) MIMO broadcast channel with hybrid messages and imperfect CSIT is simplified to
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ 2,
ℓ_2: d_2+d_0 ≤ 3,
ℓ_3: d_1+d_2+d_0 ≤ 3 + α_2,
ℓ_4: d_1/2 + d_2/3 + d_0/2≤ 1 + (2/3)α_1
}.
According to Proposition 1, the corner points are given by
𝒫_123 = (α_2,1+α_2,2-α_2),
𝒫_13 = (2,1+α_2,0),
𝒫_23 = (α_2,3,0), and
𝒫_12 = (0,1,2).
We then show that the above corner points are achievable.
To achieve corner points 𝒫_123, 𝒫_13, and 𝒫_23, we need to derive the optimal power exponents A_1^* and A_2^*. As shown in <cit.>, (A_1^*, A_2^*) ≜ arg max d_s(A_1, A_2) are given by
A_1^*=α_2,
A_2^*=max{1+α_2/2,1-α_1}.
The solutions to achieve corner points 𝒫_13 and 𝒫_23 are given in <cit.> with an additional dimension d_0 set to zero. To achieve corner point 𝒫_123, we consider no multicast part split from private messages, i.e., d_c=0, then corner point 𝒫_123=(α_2,1+α_2,2-α_2) is achieved. Furthermore, each of these three corner points 𝒫_123, 𝒫_23 and 𝒫_13 achieves 3+α_2 sum-DoF.
To achieve corner point 𝒫_12, substituting A_1=0 and A_2=1/2 into (<ref>), (<ref>), (<ref>), and (<ref>) yields d_p1=0, d_p2=1, d_c+d_0=2. If no multicast part is split from private messages, i.e., d_c=0, corner point 𝒫_12=(0,1,2) with sum-DoF 3 is achieved.
§.§.§ CASE-2 (If max{1-α_2,1/2}≤α_1≤1+α_2/2)
Via GENERAL CASE-2, the DoF region of this (4, 2, 3) MIMO broadcast channel with imperfect CSIT and hybrid message is simplified to
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ 2,
ℓ_2: d_2+d_0 ≤ 3,
ℓ_3: d_1+d_2+d_0 ≤ 3 + (-1+2α_1+α_2)/2,
ℓ_4: d_1/2 + d_2/3 + d_0/2≤ 1 + (2/3)α_1,
ℓ_5: d_1/2 + d_2/3 + d_0/3≤ 1 + (2/3)α_1
}.
According to Proposition 2, the corner points are given by
𝒫_124=(2α_1-1, 2α_1, 3-2α_1), 𝒫_234=(α_1+α_2/2-1/2, 3α_2/2-α_1+3/2, α_1-3α_2/2+3/2),
𝒫_14=(2,2α_1,0), 𝒫_23=((-1+2α_1+α_2)/2,3,0),
𝒫_12=(0,1,2),
𝒫_34=(2α_1-α_2+1, 3/2α_2-α_1+3/2,0).
We thus show that the above corner points are achievable.
To achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23, the optimal power exponents are given by (A_1^*, A_2^*) ≜ arg max d_s(A_1, A_2) in (<ref>). The solutions to achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23 are the same as those used to achieve corner points 𝒫_123, 𝒫_13 and 𝒫_23 in CASE-1. Note that, in this case, corner points 𝒫_234, 𝒫_34 and 𝒫_23 achieve (5+α_2+2α_1)/2 sum-DoF.
To achieve both corner points 𝒫_124 and 𝒫_14, substituting A_1=2α_1-1 and A_2=α_1 into (<ref>), (<ref>), (<ref>), and (<ref>) yields d_p1=2α_1-1, d_p2=2α_1 and d_c+d_0=3-2α_1. If no multicast part is split from private messages, i.e., d_c=0, corner point 𝒫_124=(2α_1-1, 2α_1, 3-2α_1) is achieved. If there is no common message, i.e., d_0=0, and the multicast part split from private messages is for receiver Rx_1, corner point 𝒫_14=(2,2α_1,0) is achieved. Note that both corner points 𝒫_124 and 𝒫_14 achieve 2+2α_1 sum-DoF.
To achieve corner point 𝒫_12, the solution is the same as achieving corner point 𝒫_12 shown in CASE-1.
§.§.§ CASE-3 (If 1-α_2 ≤α_1≤1/2)
Note that this case only occurs when α_2≥1/2. The inequalities of the DoF region in this case are the same as those in CASE-2 in (<ref>), but with a different region shape due to different CSIT qualities. According to Proposition 3, the corner points are given by
𝒫_234=(α_1+α_2/2-1/2, 3α_2/2-α_1+3/2, α_1-3α_2/2+3/2),
𝒫_14=(2,2α_1,0),
𝒫_34=(2α_1-α_2+1,3/2α_2-α_1+3/2,0),
𝒫_23=(-1+2α_1+α_2/2,3,0), 𝒫_14'=(0,1,2),
𝒫_24=(0,3-4α_1,4α_1).
We then show that the above corner points are achievable.
The corner points 𝒫_234, 𝒫_34, 𝒫_23, and 𝒫_14 are the same as those in CASE-2, and are thus achieved by the same solutions.
To achieve corner point 𝒫_24, substituting A_1=0 and A_2=1-α_1 yields d_p1=0, d_p2=3-4α_1 and d_c+d_0=4α_1. If there is no multicast part split from private messages, i.e., d_c=0, corner point 𝒫_24=(0,3-4α_1,4α_1) with sum-DoF 3 is achieved.
To achieve corner point 𝒫_14', substituting A_1=0 while A_2=α_1 into (<ref>), (<ref>), (<ref>), and (<ref>) yields d_p1=0, d_p2=2α_1 and d_c+d_0=2. If there is no multicast part split from private messages, i.e., d_c=0, corner point 𝒫_14'=(0,2α_1,2) with sum-DoF 2+2α_1 is achieved.
§.§.§ CASE-4 (If α_1≤ 1-α_2)
Via GENERAL CASE-4, the DoF region of this (4, 2, 3) MIMO broadcast channel with imperfect CSIT and hybrid message is simplified to
𝒟 = {
(d_1,d_2,d_0)
∈ℝ_+^3
|
ℓ_1: d_1+d_0 ≤ 2,
ℓ_2: d_2+d_0 ≤ 3,
ℓ_3: d_1+d_2+d_0 ≤ 3 + α_1α_2/(1+α_2 - α_1),
ℓ_4: d_1/2 + d_2/3 + d_0/2≤ 1 + (2/3)α_1,
ℓ_5: d_1/2 + d_2/3 + d_0/3≤ 1 + (2/3)α_1
}.
According to Proposition 4, the corner points are given by
* 𝒫_234=(α_1α_2/1+α_2 - α_1, 3α_1α_2/1+α_2 - α_1-4α_1+3, 4α_1-3α_1α_2/1+α_2 - α_1) if α_1 ≤1/2 + 1/2α_2.
* 𝒫_123=(α_1α_2/1+α_2 - α_1, 1+α_1α_2/1+α_2 - α_1, 2-α_1α_2/1+α_2 - α_1) if α_1α_2/1+α_2 - α_1≤min{2α_1 - 1, 2}.
* 𝒫_124 = (2α_1 - 1, 2α_1, 3-2α_1) if 2α_1 - 1 ≤α_1α_2/1+α_2 - α_1 and 1/2≤α_1 ≤3/2.
𝒫_14=(2,2α_1,0),
𝒫_34=(2α_1(α_2 - 2α_1 +2)/1+α_2 - α_1,3α_1α_2/1+α_2 - α_1-4α_1+3,0),
𝒫_23=(α_1α_2/1+α_2 - α_1,3,0),
𝒫_14'=(0,2α_1,2),
𝒫_24=(0,3-4α_1,4α_1). In Fig. 6, we illustrate the DoF region with the existence of 𝒫_124 and 𝒫_234.
We then show that the above corner points are achievable.
In this case, the space-time rate-splitting transmission scheme provided by <cit.> is able to further increase the sum-DoF, compared with the schemes in the other cases. Suppose there are in total T time slots during the transmission, and T is sufficiently large. The power exponents in time slot l are denoted by (A_1,l, A_2,l) and given by
(A_1,l,A_2,l)=
(α_2,1), l = 1,...,ρ T,
(α_2,α_1), l = ρ T + 1,...,T,
where ρ∈ [0,1]. After substituting these power levels in the space-time transmission scheme, (<ref>) and (<ref>) become
d_s,ST^(1)(ρ)=ρ(3+α_1)+(1-ρ)(2+2α_1),
d_s,ST^(2)(ρ)=3ρ+(1-ρ)(3+α_2),
and the achievable sum-DoF is defined as d_s,ST(ρ)≜min{d_s,ST^(1)(ρ),d_s,ST^(2)(ρ)} as previously. As shown in <cit.>, the optimal ratio ρ^* = arg max_ρ d_s,ST(ρ) is given by
ρ^*=(1-2α_1+α_2)/(1+α_2 - α_1).
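Since d_s,ST^(1)(ρ) increases and d_s,ST^(2)(ρ) decreases in ρ, the max-min is attained where the two averages meet (when they cross inside [0,1]). The short symbolic sketch below (sympy is my tooling choice, not the paper's) confirms that equating them yields the stated ρ^*.

```python
# Solve d_s,ST^(1)(rho) = d_s,ST^(2)(rho) and compare against the stated rho*.
import sympy as sp

rho, a1, a2 = sp.symbols('rho alpha_1 alpha_2', positive=True)
ds1 = rho * (3 + a1) + (1 - rho) * (2 + 2 * a1)   # receiver-1 time average
ds2 = 3 * rho + (1 - rho) * (3 + a2)              # receiver-2 time average
rho_star = sp.solve(sp.Eq(ds1, ds2), rho)[0]
print(sp.simplify(rho_star - (1 - 2 * a1 + a2) / (1 + a2 - a1)))  # -> 0
```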
* To achieve corner points 𝒫_234, 𝒫_34 and 𝒫_23, we apply the space-time transmission scheme. The solutions to achieve corner points 𝒫_34 and 𝒫_23 are given in <cit.> with an additional dimension d_0 set to zero. To achieve the strictly positive corner point 𝒫_234, we consider there is no multicast part split from private messages, i.e., d_c=0, then corner point 𝒫_234=(α_1α_2/1+α_2 - α_1, 3α_1α_2/1+α_2 - α_1-4α_1+3, 4α_1-3α_1α_2/1+α_2 - α_1) is achieved.
* The corner points 𝒫_14, 𝒫_14' and 𝒫_24 are the same as those in CASE-3, and are thus achieved by the same solutions.
§ CONVERSE PROOF OF THEOREM 1
The DoF outer region can be derived based on the following three steps: In Step-I, let us relax the decoding requirement of the common message W_0 and only request receiver Rx_1 to decode it, such that W_0 degenerates into W_1. Since this relaxation cannot shrink the DoF region with private and common messages, the DoF region of this new channel is an outer bound of that of the original channel. Thus, according to <cit.>[It was shown in <cit.> that the achievable DoF region in <cit.> is in fact the DoF region.] we obtain the outer bound in (<ref>), denoted by 𝒟^outer_1.
𝒟^outer_1 =
{
(d_1, d_2,d_0)
∈ℝ_+^3 |
d_1+d_0 ≤min{M,N_1},
d_2 ≤min{M,N_2},
d_1+d_2+d_0 ≤min{M,N_2} + [min{M,N_1+N_2} - min{M,N_2}] α_0,
(d_1+d_0)/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1,
d_1/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1.
},
𝒟^outer_2 = { (d_1, d_2, d_0) ∈ℝ_+^3 |
d_1 ≤min{M,N_1},
d_2+d_0 ≤min{M,N_2},
d_1+d_2+d_0 ≤min{M,N_2} + [min{M,N_1+N_2} - min{M,N_2}] α_0,
d_1/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1,
d_1/min{M,N_1} + (d_2+d_0)/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1.
}
Likewise, in Step-II, we relax the decoding requirement of the common message W_0 and only require receiver Rx_2 to decode it, such that W_0 degenerates into W_2. Since this relaxation cannot shrink the DoF region with private and common messages, the DoF region of this new channel is an outer bound of that of the original channel. Thus, according to <cit.> we obtain the outer bound in (<ref>), denoted by 𝒟^outer_2. Finally, in Step-III, we intersect 𝒟^outer_1 and 𝒟^outer_2 and discard redundant constraints: d_1 + d_0 ≤min{M,N_1} is dominated by d_1 ≤min{M,N_1}; d_2 + d_0 ≤min{M,N_2} is dominated by d_2 ≤min{M,N_2}; d_1/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1 is dominated by (d_1+d_0)/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1 and by d_1/min{M,N_1} + (d_2+d_0)/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1; and d_1/min{M,N_1} + (d_2+d_0)/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1 is dominated by (d_1+d_0)/min{M,N_1} + d_2/min{M,N_2}≤ 1 + (min{M,N_1+N_2} - min{M,N_1})/min{M,N_2}α_1. It can thus be seen that 𝒟^outer is derived. This completes the proof.
§ CONCLUSION
In this paper, we characterized the DoF region of the two-user MIMO broadcast channel with hybrid messages and imperfect CSIT, which was established through tight converse and achievability proofs. Specifically, the proposed hybrid-message-aware rate-splitting design achieves the proposed DoF converse. Besides, we obtained the sum-DoF of the two-user MIMO broadcast channel with hybrid messages and imperfect CSIT. We showed that the DoF region with hybrid messages has a specific three-dimensional structure with respect to antenna configurations and CSIT qualities. To achieve the strictly positive corner points, we further showed that it is unnecessary to split the private messages into private and common parts, since the allocated power for the multicast part should be zero. We also derived the DoF region of the two-user MIMO broadcast channel with hybrid messages and delayed-imperfect CSIT.
In the future, there are many related problems worth investigating. For instance, it would be interesting to study the three-user MIMO broadcast channel with hybrid messages and imperfect CSIT, and the MIMO interference channel with hybrid messages and imperfect CSIT, where one of the difficulties is how to design the power allocation policy for multi-layer rate-splitting. Furthermore, it would be interesting to investigate the rate optimization in the presence of noise.
§ PROOF OF PROPOSITIONS 1, 2, AND 4
§.§ Proof of Proposition 1
First of all, we find out all corner points with one zero coordinate.
According to <cit.>, corner points without common messages (i.e., d_0 = 0) are given by 𝒫_13 = (N_1,N_2-N_1+(M-N_2)α_2,0) and 𝒫_23 = ((M-N_2)α_2,N_2,0). If d_1 = 0, it turns out that 𝒟 = {(d_2,d_0)∈ℝ_+^2|d_0 ≤ N_1,d_2+d_0≤ N_2, d_2/N_2+ d_0/N_1≤ 1 + M-N_1/N_2α_1 }, where the off-coordinate corner point is given by 𝒫_12 = (0,N_2-N_1,N_1). If d_2 = 0, it turns out that 𝒟 = {(d_1,d_0)∈ℝ_+^2|d_0 ≤ N_2, d_1+d_0 ≤ N_1}, where the off-coordinate corner point does not exist due to N_1 - N_2 ≤ 0.
Next, we derive corner points with strictly positive coordinates. Although there are candidates 𝒫_123, 𝒫_124, 𝒫_234, and 𝒫_134, only 𝒫_123 qualifies as a corner point, as shown in the following. By MATLAB symbolic calculation, the corner point at the intersection of ℓ_1, ℓ_2, and ℓ_3 is given by
𝒫_123 = ((M-N_2)α_2,N_2-N_1+(M-N_2)α_2,N_1-(M-N_2)α_2). It can be checked that 𝒫_123 satisfies ℓ_4, since
d_1+d_0/N_1 + d_2/N_2 = 1 + N_2- N_1+(M-N_2)α_2/N_2
then
d_1+d_0/N_1 + d_2/N_2 - (1 + (M-N_1)α_1/N_2) = N_2- N_1+(M-N_2)α_2 - (M-N_1)α_1/N_2(a)≤ 0,
where (a) is due to α_1≥N_2-N_1+(M-N_2)α_2/M-N_1.
The corner point at the intersection of ℓ_1, ℓ_2, and ℓ_4 is given by
𝒫_124 = (N_1-N_2+(M-N_1)α_1, (M-N_1)α_1, N_2-(M-N_1)α_1).
It can be checked that 𝒫_124 violates ℓ_3 and does not exist, since
d_1 + d_2 + d_0 = N_1-N_2+(M-N_1)α_1+ (M-N_1)α_1+ N_2-(M-N_1)α_1
then
d_1 + d_2 + d_0 - (N_2 + (M-N_2)α_2) = N_1 + (M-N_1)α_1 - N_2 - (M-N_2)α_2 (a)≥ 0,
where (a) is due to α_1≥N_2-N_1+(M-N_2)α_2/M-N_1.
The corner point at the intersection of ℓ_2, ℓ_3, and ℓ_4 is given by
𝒫_234 = ((M-N_2)α_2, N_2(M-N_2)α_2-N_1(M-N_1)α_1+N_2(N_2-N_1)/N_2 -N_1,
N_1(M-N_1)α_1-N_2(M-N_2)α_2/N_2-N_1).
It can be checked that 𝒫_234 violates ℓ_1 and does not exist, since
d_1+d_0 = N_1(M-N_1)α_1-N_2(M-N_2)α_2/N_2-N_1 + (M-N_2)α_2
then
d_1+d_0 - N_1
= N_1 (M-N_1)α_1 - (M-N_2)α_2/N_2 - N_1 - N_1 (a)≥ 0,
where (a) is due to α_1≥N_2-N_1+(M-N_2)α_2/M-N_1. The corner point at the intersection of ℓ_1, ℓ_3, and ℓ_4 does not exist, since the determinant of the coefficient matrix of the related equations is zero, i.e.,
|
1 0 1
1 1 1
1/N_1 1/N_2 1/N_1
| = 0.
This completes the proof.
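As an aside, the singularity of the coefficient matrix above can be verified symbolically; the short sketch below (sympy is my tooling choice, not part of the original proof) confirms that its determinant vanishes for any N_1, N_2, so ℓ_1, ℓ_3, and ℓ_4 cannot intersect in a single point.

```python
# Symbolic check that the coefficient matrix of l_1, l_3, l_4 is singular.
import sympy as sp

N1, N2 = sp.symbols('N_1 N_2', positive=True)
A = sp.Matrix([[1, 0, 1], [1, 1, 1], [1 / N1, 1 / N2, 1 / N1]])
print(sp.simplify(A.det()))   # -> 0
```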
§.§ Proof of Proposition 2
First of all, we find out all corner points with one zero coordinate.
According to <cit.>, corner points without common messages (i.e., d_0 = 0) are given by
𝒫_14=(N_1,(M-N_1)α_1,0), 𝒫_34=(N_1α_1-N_1(M-N_2)/M-N_1α_2+N_1(M-N_2)/M-N_1, -(N_1+N_2-M)α_1 + N_2(M-N_2)/M-N_1α_2 +N_2(N_2-N_1)/M-N_1,0), and
𝒫_23=((M-N_2)α_1+(M-N_2)(N_2-N_1)/M-N_1α_2-(M-N_2)(N_2-N_1)/M-N_1 ,N_2,0). If d_1 = 0, it turns out that
𝒟 = {(d_2,d_0)∈ℝ_+^2|
d_0 ≤ N_1,
d_2 + d_0 ≤ N_2,
d_0/N_1 + d_2/N_2≤ 1+M-N_1/N_2α_1
},
where the off-coordinate corner point is given by 𝒫_12=(0,N_2-N_1,N_1) due to N_2-N_1/M-N_1≤α_1. If d_2=0, it turns out that
𝒟 = {(d_1,d_0)∈ℝ_+^2|
d_1 + d_0 ≤ N_1,
d_0 ≤ N_2
},
where the off-coordinate corner point does not exist due to N_1 - N_2 ≤ 0.
Next, we derive corner points with strictly positive coordinate. However, although there are 𝒫_123, 𝒫_124, 𝒫_234, and 𝒫_134, only 𝒫_124 and 𝒫_234 are qualified as corner points. This point is shown in the following. The corner point with intersection of ℓ_1, ℓ_2, and ℓ_4 is given by 𝒫_124=((M-N_1)α_1-(N_2-N_1),(M-N_1)α_1,N_2-(M-N_1)α_1), and corner point with intersection of ℓ_2, ℓ_3, and ℓ_4 is given by 𝒫_234=
((M-N_2)α_1+ (M-N_2)(N_2-N_1)/M-N_1α_2-(M-N_2)(N_2-N_1)/M-N_1, -(N_1+N_2-M)α_1+ N_2(M-N_2)/M-N_1α_2 +N_2(N_2-N_1)/M-N_1, (N_1+N_2-M)α_1-(M-N_2)(N_2-N_1)/M-N_1α_2+N_2(N_2-N_1)/M-N_1).
It can be checked that 𝒫_124 satisfies ℓ_3, since
d_1 + d_2 + d_0 = (M-N_1)α_1 + N_1
then
d_1 + d_2 + d_0 -(N_2+(M-N_2)(-N_2-N_1/M-N_1+N_2-N_1/M-N_1α_2+α_1) )
=(M-N_1)α_1 + N_1 - (N_2+(M-N_2)(-N_2-N_1/M-N_1+N_2-N_1/M-N_1α_2+α_1))
=(N_2-N_1)(N_2-N_1+(N_1-M)α_1+(M-N_2)α_2)/M-N_1
(a)≤(N_2-N_1)(N_2-N_1+(N_1-M)(N_2-N_1+(M-N_2)α_2/M-N_1)+(M-N_2)α_2)/M-N_1
=
(N_2-N_1)(N_2-N_1- N_2+N_1-(M-N_2)α_2 +(M-N_2)α_2)/M-N_1
=0,
where (a) is due to α_1≤N_2-N_1+(M-N_2)α_2/M-N_1.
It can be checked that 𝒫_234 satisfies ℓ_1, since
d_1+d_0 = N_1α_1 + (N_2-N_1)(2N_2-M)/M-N_1
then
d_1+d_0 - N_1
=N_1(M - N_2 + (M-N_1)α_1 - (M-N_2)α_2)/M - N_1 - N_1
(a)≤N_1(M - N_2 + (M-N_1)(N_2-N_1+(M-N_2)α_2/M-N_1) - (M-N_2)α_2)/M - N_1 - N_1
= N_1(M - N_2 + N_2-N_1+(M-N_2)α_2 - (M-N_2)α_2)/M - N_1 -N_1
=0,
where (a) is due to α_1≤N_2-N_1+(M-N_2)α_2/M-N_1. The
corner point with intersection of ℓ_1, ℓ_2, and ℓ_3 is given by 𝒫_123= (
(M-N_2)(α_1+N_1-N_2/M-N_1-N_1-N_2/M-N_1α_2),
N_2-N_1 + (M - N_2)(α_1 + N_1 - N_2/M - N_1 - N_1-N_2/M-N_1α_2),
N_1 - (M - N_2)(α_1 + N_1-N_2/M-N_1 - N_1-N_2/M-N_1α_2)
).
𝒫_123 violates ℓ_4 and does not exist, since
d_1+d_0/N_1+d_2/N_2 = 1 + N_2-N_1 + (M - N_2)(α_1 + N_1 - N_2/M - N_1 - N_1-N_2/M-N_1α_2)/N_2 then
d_1+d_0/N_1+d_2/N_2 - (1+M-N_1/N_2α_1)
=N_1^2+N_2^2-3N_1N_2+(2N_2-N_1)(N_1-M)α_1+(N_2-N_1)(M-N_2)α_2/N_2(M-N_1)
(a)≥N_1^2+N_2^2-3N_1N_2+(2N_2-N_1)(N_1-M)(N_2-N_1+(M-N_2)α_2/M-N_1)+(N_2-N_1)(M-N_2)α_2/N_2(M-N_1)
=(M-N_2)(1-α_2)/M-N_1
(b)≥0,
where (a) is due to α_1≤N_2-N_1+(M-N_2)α_2/M-N_1, and (b) is due to M ≥ N_2, 1 ≥α_2, and M ≥ N_1. The corner point at the intersection of ℓ_1, ℓ_3, and ℓ_4 does not exist, since the determinant of the coefficient matrix of the related equations is zero.
This completes the proof.
§.§ Proof of Proposition 4
First of all, we find out all corner points with one zero coordinate.
According to <cit.>, corner points without common messages (i.e., d_0 = 0) are given by 𝒫_14 = (N_1, (M-N_1)α_1, 0), 𝒫_34=1/δ(N_1α_1(M-N_1-(M-N_1)α_1 + (M-N_2)α_2)
, (M-N_1)N_1α_1^2+( (M-N_2)(M-N_1-N_2)α_2 + N_1^2 - N_2^2 - MN_1 + N_1N_2)α_1 + (M-N_2)N_2α_2 + N_2^2 - N_1N_2, 0), and
𝒫_23=((M-N_2)^2α_1α_2/δ, N_2, 0), where δ = (M-N_2)α_2 + (N_2 - N_1)(1-α_1). If d_1 = 0, it turns out that
𝒟 = {(d_2,d_0)∈ℝ_+^2|
d_0 ≤ N_1,
d_2 + d_0 ≤ N_2,
d_0/N_1 + d_2/N_2≤ 1+M-N_1/N_2α_1
},
where the off-coordinate corner points are given by 𝒫_14'= (0,(M-N_1)α_1,N_1) and
𝒫_24=(0,N_2-(M-N_1)N_1/N_2-N_1α_1,(M-N_1)N_1/N_2-N_1α_1). If d_2=0, it turns out that
𝒟 = {(d_1,d_0)∈ℝ_+^2|
d_1 + d_0 ≤ N_1,
d_0 ≤ N_2
}, where the off-coordinate corner point does not exist due to N_1 - N_2 ≤ 0.
Next, we derive corner points with strictly positive coordinates. Among the candidates 𝒫_123, 𝒫_124, 𝒫_234, and 𝒫_134, 𝒫_234 qualifies as a corner point if α_1 ≤N_2-N_1/M-N_1 + M-N_2/M-N_1α_2, 𝒫_123 qualifies as a corner point if (M-N_2)^2α_1α_2/δ≤min{(M-N_1)α_1 - (N_2-N_1),N_1}, and 𝒫_124 qualifies as a corner point if (M-N_1)α_1 - (N_2-N_1) ≤(M-N_2)^2α_1α_2/δ and α_1 ≤N_2/M-N_1. This is shown in the following. The
corner point with intersection of ℓ_2, ℓ_3, and ℓ_4 is given by 𝒫_234=1/δ((M-N_2)^2α_1α_2, (M-N_1)N_1α_1^2+( (M-N_2)(M-N_1-N_2)α_2 + N_1^2 - N_2^2 - MN_1 + N_1N_2)α_1 + (M-N_2)N_2α_2 + N_2^2 - N_1N_2, ((M-N_2)(N_1+N_2-M)α_2 + (M-N_1)N_1(1-α_1))α_1). If α_1 ≤N_2-N_1/M-N_1 + M-N_2/M-N_1α_2, it can be checked that 𝒫_234 satisfies ℓ_1, since
d_1+d_0 = (M-N_2)N_1α_1α_2 + (M-N_1)N_1(1-α_1)α_1/δ
then
d_1+d_0 - N_1
= (M-N_1)N_1(1-α_1)α_1 - (M-N_2)N_1α_2(1 - α_1) - (N_2 - N_1)N_1(1-α_1)/δ
= N_1(1-α_1)(M-N_1)α_1 - (M-N_2)α_2 - (N_2-N_1) /δ
(a)≤ 0,
where (a) is due to α_1 ≤N_2-N_1/M-N_1 + M-N_2/M-N_1α_2. Otherwise, if N_2-N_1/M-N_1 + M-N_2/M-N_1α_2 < α_1, it can be checked by the same steps that 𝒫_234 violates ℓ_1 and does not exist. corner point with intersection of ℓ_1,ℓ_2, and ℓ_3 is given by 𝒫_123 = ((M-N_2)^2α_1α_2/δ,N_2-N_1 + (M-N_2)^2α_1α_2/δ, N_1 - (M-N_2)^2α_1α_2/δ), which requires (M-N_2)^2α_1α_2/δ≤ N_1 for non-negativity of d_0. If (M-N_2)^2α_1α_2/δ≤ (M-N_1)α_1 - (N_2-N_1), 𝒫_123 satisfies ℓ_4, since d_1 + d_0/N_1 + d_2/N_2 = 1 + N_2-N_1 + (M-N_2)^2α_1α_2/δ/N_2
then
d_1 + d_0/N_1 + d_2/N_2 - (1 + M-N_1/N_2α_1 )
= 1/N_2(N_2-N_1 + (M-N_2)^2α_1α_2/δ- (M-N_1)α_1 )
(a)≤ 0,
where (a) is due to (M-N_2)^2α_1α_2/δ≤ (M-N_1)α_1 - (N_2-N_1). Otherwise, if N_2-N_1 + (M-N_2)^2α_1α_2/δ> (M-N_1)α_1, it can be checked by the same steps that 𝒫_123 violates ℓ_4 and does not exist.
The corner point at the intersection of ℓ_1, ℓ_2, and ℓ_4 is given by 𝒫_124 = ((M-N_1)α_1 - (N_2 - N_1), (M-N_1)α_1, N_2 - (M-N_1)α_1), which requires N_2-N_1/M-N_1≤α_1 for non-negativity of d_1 and α_1 ≤N_2/M-N_1 for non-negativity of d_0. If (M-N_1)α_1 - (N_2-N_1) ≤(M-N_2)^2α_1α_2/δ, 𝒫_124 satisfies ℓ_3, since d_1 + d_2 + d_0 = N_1 + (M-N_1)α_1
then
d_1 + d_2 + d_0 - (N_2+α_1α_2(M-N_2)^2/(N_2-N_1)(1-α_1)+(M-N_2)α_2)
= N_1 - N_2 + (M-N_1)α_1 - α_1α_2(M-N_2)^2/(N_2-N_1)(1-α_1)+(M-N_2)α_2
(a)≤ 0,
where (a) is due to (M-N_1)α_1 - (N_2-N_1) ≤(M-N_2)^2α_1α_2/δ. If (M-N_2)^2α_1α_2/δ < (M-N_1)α_1 - (N_2-N_1), it can be checked by the same steps that 𝒫_124 violates ℓ_3 and does not exist. The corner point at the intersection of ℓ_1, ℓ_3, and ℓ_4 does not exist, since the determinant of the coefficient matrix of the related equations is zero.
|
http://arxiv.org/abs/2306.03307v2
|
20230605232739
|
Reef Elegy: An Auditory Display of Hawaii's 2019 Coral Bleaching Data
|
[
"Stefano Kalonaris"
] |
cs.SD
|
[
"cs.SD",
"eess.AS"
] |
Reef Elegy: An Auditory Display of Hawaii's 2019 Coral Bleaching Data
Stefano Kalonaris
======================================================================
This paper describes an auditory display of Hawaii's 2019 coral bleaching data via means of spatial audio and parameter mapping methods. Selected data fields spanning 78 days are mapped to sound surrogates of coral reefs' natural soundscapes, which are progressively altered in their constituent elements as the corresponding coral locations undergo bleaching. For some of these elements, this process outlines a trajectory from a dense to a sparser, reduced soundscape, while for others it translates moving away from harmonic tones and towards complex spectra.
This experiment is accompanied by a short evaluation study to contextualize it in an established aesthetic perspective space and to probe its potential for public engagement in the discourse around climate change.
§ INTRODUCTION
Coral bleaching is a characteristic whitening of the visible surface caused by the expulsion of a microscopic unicellular algae called zooxanthellae, the photosynthetic pigments in corals.
It happens in response to several factors such as low salinity, pollutants, and temperature stress, among others. While these have always been contributing causes, in the last few decades the role of humans in this process has been central. Recently, in fact, there has been an unprecedented increase in carbon dioxide and other significant greenhouse gases driven by fossil fuel combustion or agricultural and land-use sources (e.g., methane). The increase in greenhouse gases leads to an increase in air and sea temperatures (global warming). Manifestations of these trends are changing sea levels and changing weather patterns (e.g., storms), both deleterious for coral reef ecosystems. Other anthropogenic factors that alter the natural ecosystemic balance of coral reefs are related to overfishing. This is because some fish (but also crabs, sea urchins, etc.) help maintain coral health by eating reef macroalgae and preventing coral smothering and weakening. Therefore, overfishing for these species can result in a problematic increase in algae cover.
Without further inquiry into the complex weaving of interactions contributing to coral bleaching, this paper looks at this phenomenon with respect to Hawaii. The first mass bleaching event in this area was reported in 2002 <cit.> with subsequent notable events in 2004, 2005, 2014, and 2015. Although not as severe as the 2014 and 2015 events, 2019's coral bleaching in Hawaii was still reason of concern and was documented extensively in a multi-institution initiative, leading to many online blogs and some publications <cit.>.
Comparing pre- and post-bleaching via means of images (see Figure <ref>) offers a stark warning and unequivocal measure of the urgency needed to implement drastic and long-lasting changes, both at socio-political and economic levels, but also in one's personal sphere (e.g., daily consumption habits, etc.). There are also tools for the visual assessment of coral bleaching that have been developed for the wider community, such as the Hawaiian Ko`a Card <cit.>. The experiment presented in this paper sets out to investigate alternative representations of the same phenomenon for those who are less visually oriented and/or able, with the hope of contributing to raising awareness and calling for action to preserve and nurture marine ecosystems at large. To this end, some of the data available from a 2019 study (see Section <ref>) is used to auditorily display the process of coral bleaching by means of parameter mapping. Given the embryonic stage of this sonic experiment, the aims and goals are kept modest and exploratory rather than explanatory, as it does not try to uncover or discover correlations or causal relationships between data fields (e.g., pollutants, sea temperature, percentage of bleached coral populations, etc.) in the dataset. After a review of 1) relevant works that have treated similar topics and of 2) popular methods used for auditory display of data, this paper describes in detail the concept and the practical implementation of the author's experiment. Then, a small evaluation study is carried out and the results are discussed alongside future improvements in the context of public engagement on climate issues through sound.
§ RELATED WORK
Firstly, it is important to disambiguate the term sonification. Hereinafter, sonification will be used as a synonym of auditory display of data instead of the act of applying sound energy to agitate particles in a laboratory sample. It is necessary to clarify this point, because the alternate meaning is often found in marine biology literature and studies.
Several marine phenomena/topics have been the object of interest in sonification endeavors, dealing with ocean conditions <cit.>, sea-surface water temperature <cit.>, or wideband hydroacoustic data <cit.>.
As for coral reefs, NASA's Coral Sea Sonification[https://soundcloud.com/nasa/coral-sea-sonification] was created using spectral ocean reflectance data, and it is part of the Sounds of the Sea[https://www.nasa.gov/feature/goddard/2022/hear-sounds-of-the-sea-in-sonifications] initiative that focuses on auditory display.
In recent years, multidisciplinary scientists and artists have also turned to rendering ecological data into sound in the context of art installations or more arts-oriented environments. Gilmurray termed this domain Ecological Sound Art <cit.>. For example, maritime traffic has been addressed in sound art installations such as Baltic Sea Radio[https://var-mar.info/baltic-sea-radio/] by the artist duo Varvara & Mar or the nautical cycle works of David Berezan (particularly Sea Lantern, 2017, and Buoy, 2011, both incorporating real-time data from sea buoys).
Regarding coral reef data in ecological sound art, Johnstone's Coral Symphony <cit.> installation consisted of two separate sonification modules: one sonically displaying reef data from the Myrmidon Reef on the Great Barrier Reef, the other interactively mapping audience motion tracking data to seascapes obtained from undersea recordings, whereby the latter are injected, added and layered to a total resulting sound environment.
In 2018, Lauren Jones and Eunjeong Stella Ko realized and presented Hearing Seascapes, a virtual-reality installation combining audiovisual data to generate endangered coral reef location-dependent sound.
More recently, a dedicated concert titled Sound as ocean memory: ecoacoustics, audification and sonification [https://ccrma.stanford.edu/ brg/soniOM/april-9.html], organized and held at CCRMA, Stanford, featured many compositions realized from undersea data, and two relating to coral, specifically. These were Mike Cassidy's Coral Reef Sonification (2021) which sonified data measuring gene expression in response to temperature fluctuations, and choralCoral - 3 genomic études of climate (2021) by Tim Weaver, Steve Palumbi, and Jonathan Berger, that sonified Acropora coral species genomic sequences in relation to climate change and heat stress, in the Palau/South Pacific island region.
Finally, in <cit.>, Heather R. Spence and Mark Ballora presented five sonifications and soundscape compositions based on marine data. Among these, Reef Recall comprises one (of six) movements (i.e., Crustacean Chorus) that relates to the characteristic shrimp-made crackling sonic tapestry of coral reefs. Reef REM Ember, instead, is another of those works, and layers processed (via means of transient detection, quantization, etc.) undersea recordings with input from live musicians.
§ SONIFICATION APPROACHES
There are distinct methods for auditory display of data, and the choice of one (or several) over the others is project-specific. For geodesic data, for example, Audification <cit.> is often used, but Parameter Mapping Sonification <cit.>, hereinafter PMSon, remains arguably the indisputable default choice for most auditory display applications, allowing high customization in the sound design layers while maintaining (conditioned upon careful planning) intelligibility of the data. The ability to communicate or infer meaning in relation to the data or process of interest through sonification is a crucial factor in how successful an auditory display is judged. With this in mind, the authors of <cit.> argue for a method called Model Based Sonification, in which users can “explore” a given phenomenon by interacting with a data model representation of it, by means of movement, for example. A more recent approach to auditory display of data goes by the name of Wave Space Sonification <cit.> whereby a scalar field is scanned along a data-driven trajectory.
Finally, there exist other sonification methods, such as Earcons or Auditory Icons, that are used as auditory aids or sound placeholders for an event or action, for example, to alert users of a particular system's state.
It appears that none of the relevant work cited in the previous section exploited the more recent paradigms of sonification. As for the experiment described in this paper, it is also designed using a more conventional approach (PMSon). Details of the model are presented in the next section.
§ REEF ELEGY
Most coral reefs are situated within the boundary for 20^∘C isotherms. If one is in the habit of diving or snorkeling within this boundary, they probably have come across vast stretches of bleached or degraded reefs. It is not a sight for the faint of heart and, while one very much hopes for climate action and ecosystemic restoration, an elegy is an appropriate, if suggestive, term for a sound meditation on this topic. In fact, an elegy is a reflection or lament on death, normally intended in the form of poetry, but originally also included epitaphs (mournful songs).
§.§ Data
The data was obtained from a two-month collection during 2019, on behalf of the Hawaii Coral Bleaching Collective (HCBC) <cit.> and comprises 517 clustered observations of 22 variables. Of these, only the mean percentage of living coral cover that was partially or fully bleached was used in the sonification, along with the mean geographic coordinates (in decimal degrees), the mean depth for the observation (ranging from 0.6 to 29.8 m), and the photosynthetically active radiation (PAR) values reported for each cluster.
Figure <ref> shows location, depth and percentage of bleached coral population for these clusters.
However, the number of clustered observations proved to be CPU-intensive for the auditory display model (see Section <ref>). Therefore, the data was further clustered based on the location, using the ordering points to identify the clustering structure (OPTICS) <cit.> algorithm (unsupervised) with a minimum number of samples n=2. This resulted in 176 clusters of location, bleaching, PAR, and depth averages.
§.§ Spatialization
From this new dataset, a sonification dataframe was constructed as follows. The depth, PAR, and coral bleaching percentage were converted to the interval [0, 1], while the mean longitude and latitude of each clustered data point were first re-scaled to the range of the original observations' bounding box, then converted to radians.
The reason for these steps lies in the fundamental idea behind Reef Elegy, which is to scatter the sonic representation of the observations on a 3D sound sphere. Thus, had the global Earth coordinate boundaries been kept, this would have resulted, perceptually, in a near-single point origin for the sound processes (corresponding to Hawaii's location on the planet). Regarding the use of radians instead of degrees, this is due to the requirements and specifications of the sound spatialization technique used, ambisonics.
Ambisonics <cit.> is a full-sphere surround sound format that allows the encoding of a sound field independently from a given speaker configuration. The encoded sound field representation can then be decoded for playback on any speaker setup. The encoding can be done using variable spatial resolution, hereinafter referred to as order. For a given order n a full-sphere system requires
(n+1)^2 channels. An ambisonic encoder takes a source signal S and two parameters, the horizontal angle θ (or azimuth) and the elevation angle
ϕ. It positions the source at the desired angle by distributing the signal over the ambisonic components. The convention for the coordinate system is that used for two-dimensional polar coordinates and three-dimensional cylindrical coordinates, but often times in physics and in the geographical spherical coordinate system θ is used for elevation (inclination) and ϕ for azimuth. For clarity, the following equivalences are stipulated and kept throughout: longitude or azimuth or θ, in the interval [-π, π], and latitude or elevation or ϕ, in the interval [-π/2, π/2].
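The coordinate handling described above can be sketched as follows (my own reconstruction; the bounding-box values and helper names are illustrative, not taken from the paper): the data bounding box is stretched over the full sphere so the clusters do not collapse into a near-single point.

```python
# Map each cluster's (lat, lon) onto the ambisonic sphere as (theta, phi) in radians.
import numpy as np

def to_sphere(lat, lon, lat_bounds, lon_bounds):
    theta = np.interp(lon, lon_bounds, (-np.pi, np.pi))        # azimuth
    phi = np.interp(lat, lat_bounds, (-np.pi / 2, np.pi / 2))  # elevation
    return theta, phi

order = 3
n_channels = (order + 1) ** 2    # 16 ambisonic channels at order 3
# Illustrative Hawaii-like bounding box and sample point:
print(to_sphere(20.9, -156.7, (18.9, 22.3), (-160.3, -154.8)), n_channels)
```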
Figure <ref> shows how the data points are encoded in an ambisonic sound field.
For the spatialization as described above, the SC-HOA extension <cit.>, which provides wrapper classes of AmbiTools <cit.> for the SuperCollider[https://supercollider.github.io] audio programming language, was used with an ambisonic order of n=3. This meant that the computational load for modeling the sound process with the original 517 sound sources posed challenges to the CPU used in this experiment. There needs to be a minimum of 517×3 UGens (building blocks of synth definitions, used to generate or process both audio and control signals) in total, i.e., at least three per sound source (1 encoder, 1 decoder, and at least 1 oscillator, for a minimal rendition). The original data was thus re-clustered as described in Section <ref>.
§.§ Sonification
The sonification for Reef Elegy is inspired by the natural soundscape of coral reef environments. For the most part, these are dominated by the sounds of shrimp but there is also a component created by macroalgae's photosynthesis. The latter is normally difficult to detect due to the overwhelming shrimp snapping sound component, and because the two sound processes occupy approximately the same frequency band. However, the photosynthesis sound component has been hypothesized to be a sonic landmark of coral reefs under stress.
Before proceeding with further details about the sonification model, the main assumptions adopted must be stated.
First, that there exists some correlation between the presence of shrimp and the health of the coral reef, whereby the deterioration of the latter leads to a decreased population of the former.
While some <cit.> suggested that snapping shrimp frequency band sound level measurements can be useful as a metric to assess reef quality and biodiversity, others <cit.> have shown little consistency in this regard.
Nevertheless, the aforesaid relation is considered true for the sonification model, in this case. Moreover, it is assumed that the proportionality is real-time, so that there is a direct mapping between the percentage of bleached coral population and the density of the shrimp population, which can be maintained and dynamically (auditorily) displayed as time steps go by.
Secondly, the relationship between macroalgae and coral is complex and involves many other factors, for example the role of herbivorous fish and sea urchins which is in turn affected by overfishing and sea pollution (see Section <ref>). An even remotely feasible modeling of this complex ecosystem at a sound level is beyond the scope of this paper's experiment. Here, it is simplistically assumed that the photosynthesis induced sound element is related to the PAR alone, overlooking all the contributing factors.
Third, both shrimp snapping and algae sound have peaks during diurnal times and valleys during the dark period of the day. These inter-day patterns are not considered in the experiment as, to compress 78 days into a time length suitable for listening, one day ends up being relatively short (one second, in the experiment described in Section <ref>). Therefore, the subtle waves of activity within it would not only be lost, but also, perhaps, mask the intra-day sound dynamics, which remain the focus of Reef Elegy.
Two versions of the auditory display were produced, one done via sound synthesis, the other processing real undersea recordings. For simplicity, these will be referred to as AD1 and AD2, respectively.
With the 176 re-clustered data points, and according to the mapping strategies outlined so far, each sonification required an average of 1672 UGens.
In both versions, and working with the aforesaid assumptions, two layers of sound were implemented: crackles and bubbles.
§.§.§ Crackles
Shrimp snap “via the collapse of a cavitation bubble during rapid claw closure, generating broadband signals (up to 200 kHz), typically peaking between 2 and 20 kHz" <cit.>.
As a heuristic value, the minimum shrimp density estimations reported in <cit.> were chosen, for both degraded and healthy sites. These values originally referred to a single hydrophone coverage area, therefore they were scaled by three orders of magnitude considering that 1) the sonification represented many locations simultaneously, and 2) it would have been difficult to auditorily and perceptually appreciate the difference in density resulting from the bleaching data mapping. A shrimp density interval of [0.023, 0.471] was thus obtained and inversely mapped to the coral bleaching percentage using a linear to exponential mapping.
For each day (a time step t) in the dataset, each sound source's crackles synth is passed the value of the coral bleach percentage for that cluster for that day as the density parameter.
Therefore, each cluster gradually reaches its percentage of coral bleach over the course of the sonification.
The gain value for each cluster's crackles synth is multiplied by a coefficient that accounts for the mean depth of the cluster, so that observations that were taken at higher depths get slightly boosted.
For AD1, the shrimp snapping/crackling was obtained using a UGen which generates random impulses according to the density parameter, whereas for AD2, the density parameter controlled the trigger rate of a sample-based granular synthesis UGen.
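A sketch of the crackle-density mapping follows; the exact curve and the depth boost factor are not given in the text, so the exponential interpolation (mirroring SuperCollider's linexp) and the constants below are my own assumptions.

```python
# Map normalized bleaching [0,1] to snapping density, with a depth-based gain boost.
DENS_HEALTHY, DENS_DEGRADED = 0.471, 0.023   # scaled shrimp densities from the text

def crackle_density(bleach):
    """Higher bleaching -> sparser snapping (inverse, linear-to-exponential)."""
    return DENS_HEALTHY * (DENS_DEGRADED / DENS_HEALTHY) ** bleach

def depth_gain(depth, boost=0.5):            # depth normalized to [0, 1]; boost assumed
    """Clusters observed at greater depth get a slight amplitude boost."""
    return 1.0 + boost * depth

print(crackle_density(0.0), crackle_density(1.0), depth_gain(0.8))
```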
§.§.§ Bubbles
As a result of algae photosynthesis, oxygen-containing bubbles are formed and subsequently released. In detaching from the algae, bubbles create a short “ping” sound. This bubble release is not uniform and appears “as an irregular pulse-train-like time series” <cit.>.
Tank experiments <cit.> should be considered with caution when porting findings to real coral reef environments; however, “the bubble production mechanism […] may be used as a general indicator of photosynthetic activity" <cit.>.
To auditorily render this process, the PAR values from the coral bleaching dataset were mapped to different parameters in the two sonifications.
For AD2, the amplitude contribution from the bubbles element increases linearly according to the PAR daily value for each cluster. In AD1, the same is true except that, instead of using audio samples, each cluster's bubbles component is a frequency modulation (FM) unit, acting as a partial in a resulting additive synthesis (all clusters). The carrier frequency for each cluster's bubbles synth is an integer multiple of a given fundamental frequency (fixed), whereas the modulator frequency is affected by the PAR value and so is the index of modulation. Therefore, a relation between a complex and wide frequency spectrum and unhealthy coral reef is thus established. Consequently, towards the end of the sonification, the bubbles element is further from a harmonic tone than it was at the beginning.
In both cases (AD1 and AD2), and similarly to the crackles, the amplitude of a cluster's bubbles component is further multiplied by a depth compensation factor.
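To make the AD1 bubbles layer concrete, the sketch below reconstructs one FM partial; the specific scaling constants and the fundamental are hypothetical choices of mine, since the text only states that PAR drives the modulator frequency and the modulation index while carriers sit at integer multiples of a fixed fundamental.

```python
# One FM "bubbles" partial per cluster: carrier k*F0, modulator and index scaled by PAR.
import numpy as np

SR, F0 = 44100, 55.0                       # sample rate and fundamental (assumed)

def bubble_partial(k, par, dur=1.0):
    t = np.arange(int(SR * dur)) / SR
    fc = k * F0                            # harmonic carrier for cluster k
    fm = fc * (1.0 + 2.0 * par)            # PAR pushes the modulator off-harmonic (assumed scaling)
    index = 0.5 + 4.0 * par                # PAR widens the spectrum (assumed scaling)
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

mix = sum(bubble_partial(k, par=0.7) for k in range(1, 9)) / 8.0  # additive result
```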
§ EVALUATION
Sonification, independent of the method or approach chosen, is inevitably going to involve a series of arbitrary decisions to establish formal relations between entities or properties in the data domain and their counterparts in the sound/music domain. Whether a sonification is more geared towards conveying information or is instead concerned with artistic goals is a topic that has informed many of the discussions in the auditory data display community. In <cit.>, a useful cartesian representation of the conceptual sonification space, subtended by an axis spanning from Ars Informatica to Ars Musica (the Intentionality dimension) and another spanning from Abstract to Concrete (the Indexicality dimension), is put forward. This model, hereinafter the Aesthetic Perspective Space (APS), can help to contextualize a given project and to determine the conceptual position it occupies in the vast horizon of sonification.
More recently, the authors of <cit.> formalized the notion of APS via an extensive study of 32 sonification works relating to climate change. Inspired by the procedure outlined there and borrowing some of its metrics, a short evaluation of Reef Elegy was carried out.
§.§ Experiment
The study was anonymous and conducted online using a commercial but free survey tool part of a popular office suite offering. A total of 14 participants completed the study, with an average completion time of 10 min. Of the respondents, seven had variable degrees of familiarity with sonification (hereinafter, initiated) and seven had never heard of it before the study (referred to as uninitiated, from now on). A preliminary discriminative listening test asked everyone to evaluate two short (3 s) audio files. One was a recording of shrimp snapping/crackling, the other a synthesized abstraction of the natural process. Respondents simply had to state whether they thought the audio contained concrete or abstract sounds. Although, perhaps not surprisingly, the initiated did better in correctly classifying the two recordings (see Table <ref>), this was more of a tuning task, to get the participants' attention. Moreover, the different level of discernment did not seem to affect the statistical outcome of subsequent tests.
Following this task, a brief description of Reef Elegy was given, to provide context, motivation, and essential information about the sonification project. Then, two main tasks were presented to the respondents, one relating to Reef Elegy's position with respect to APS, and another about some qualitative characteristics of this sonification project. Both renditions of Reef Elegy were used for the experiment, with a time step t = 1 s and a fade out tail of 5 s, for a total of 83 s each. This choice was deemed appropriate for a listening task/study, with in mind to minimize the time load for the participants.
§.§.§ APS
The APS questionnaire comprising eight Likert-items used in <cit.> to span the APS was replicated here (for each sonification). Ratings used a seven-level response scale ranging from “Strongly disagree” to “Strongly agree” with “Neutral” in the middle. Variable name aliases for the items were also kept, to facilitate future comparisons of Reef Elegy with existing and already analyzed sonification works. A rapid visual inspection comparing results between the two groups suggested no significant differences.
Nevertheless, formal tests were carried out.
First, the items' validity was tested via inter-rater agreement using Cronbach's alpha coefficient, yielding good results (shown in Table <ref>).
Since the Likert items were carefully designed to investigate the same construct and were fully Likert scoring-compliant (e.g., response levels were anchored with verbal labels which connote more-or-less evenly spaced gradations, bivalent and symmetrical about a neutral middle, arranged horizontally with equal spacing, etc.), it was reasonable to treat the response levels, recoded to consecutive positive integers, as interval data. This allowed the Likert items to be combined into Likert scales and parametric tests to be applied. Using the mean as the aggregating operator, the resulting Likert scales for the two groups were compared using an independent two-sample T-test with equal sample sizes. For both sonifications this yielded p > 0.05, failing to reject the null hypothesis.
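The statistics reported above can be reproduced along the lines of the sketch below; the synthetic responses stand in for the survey data, which is not published here, and the implementation choices are my own.

```python
# Cronbach's alpha and an independent two-sample t-test on aggregated Likert scales.
import numpy as np
from scipy.stats import ttest_ind

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of recoded Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
initiated = rng.integers(1, 8, size=(7, 8))      # 7 respondents x 8 APS items (synthetic)
uninitiated = rng.integers(1, 8, size=(7, 8))
print(cronbach_alpha(initiated))
print(ttest_ind(initiated.mean(axis=1), uninitiated.mean(axis=1)))  # equal sample sizes
```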
To calculate the Intentionality and Indexicality values from the responses, the formulas used in <cit.> were applied. Figure <ref> shows the perceived locations of the two sonifications on the APS, for the two groups. This further confirms that, regardless of the level of familiarity with auditory data display, each version occupied a similar conceptual space. Moreover, the two sonifications scored similarly, despite one being obtained using real-world undersea recordings and despite the reasonable ability across both groups to discriminate between these sounds and their synthesized simulations/models. Thus, applying (granular) synthesis to natural soundscapes and naturalizing synthetic sounds (through modeling) had the effect of pulling the two sonic outputs closer to one another.
§.§.§ Qualitative Characteristics
As for the qualitative characteristics items, the original list in <cit.> was aimed at sonification experts who evaluated finished works that had been presented, performed, or published at the time of evaluation. Therefore, that study presupposed a considerable amount of information on the object of the evaluation.
Considering that the study presented here refers to the prepublication stage of a sonification endeavor, not necessarily aimed at domain experts, and under further anonymity constraints due to the review process, most of the aforementioned scales seemed problematic to this end. Notwithstanding, five out of the original 25 items were kept to gather insight. The corresponding original alias/convenience variable names were also kept, and items were evaluated on a seven-level rating scale ranging from “Extremely little” to “Extremely much” with “Average” in the middle.
The first item asked how convincing the sonifications were in communicating coral bleaching, while the second was related to the perceived potential of the sonifications in raising awareness or thoughts about climate change. These were followed by questions on the specificity of the information on source data, on the level of detail about the sonification methods and, lastly, on how well the sonifications matched the original phenomenon.
These questions were asked only once, for both sonifications to be considered as a standalone project.
Figure <ref> shows the responses per group for these items.
It would appear that the most contended item was the first (variable alias name: Convincing), with the initiated group leaning more towards a positive response (four people responded with levels above the neutral middle) when compared to the uninitiated (no positive leveled responses).
On the contrary, there seemed to be consensus on the modest if poor potential of the sonifications to raise awareness, on an appropriate level of information about the source data, and on the satisfactory representation of the original phenomenon. Finally, respondents familiar with sonification seemed to have found the details regarding the mapping methods somewhat lacking.
Since the items did not inquire about the same construct, further investigation did not follow the same procedure adopted earlier for the APS tasks. Instead, non-parametric tests (i.e., pairwise Mann-Whitney U) were conducted, which all failed to reject the null hypothesis.
The study did not provide participants with optional fields for commentary. However, one of the respondents reached out to share their thoughts on the sonifications, which are reported below verbatim.
Although it would have been much easier to convey and faster to consume the message of coral bleaching by just displaying a conventional visualization, sonification adds a lot of value emotionally and engages the listener more. The temporal aspect of how the bleaching process is unfolding is best conveyed by sonification which requires time to consume the information and learn about a phenomena. Also, the sound of unhealthy coral reef has this ominous darker quality to it, and when it slowly overtakes the rhythmic joyful cracking sound of the shrimp, listener can emotionally connect with the process of coral reefs suffering bleaching, shrimp being swept away by a wave of dull hissing sound signifying the crisis unfolding. (Participant, private email communication)
§ CONCLUSION
This paper presented Reef Elegy, a parameter mapping sonification of Hawaii's 2019 coral bleaching data, evaluated by means of an online survey with participants of varying sonification skill level.
The consensus that emerged seemingly positioned this work around the center of an established conceptual space for aesthetic perspective in sonification, and deemed the proposed auditory display effective in portraying the gradual loss of life and diversity in coral ecosystems. At the current prototypical stage of development, it did not prove to be sufficiently inspiring for the debate around climate change.
To this end, it is reasonable to envisage renditions of Reef Elegy where both the time span and the complexity of the mapping can be extended.
Adjusting and/or replacing some of the modular elements in the source code would make it possible to achieve results more akin to sound art, regarding both sonic content and time scales (one can think of some slowly evolving works by Eliane Radigue, such as Transamorem - Transmortem, for example). Furthermore, because of the ambisonic encoding of the sonification process, rendering to a multichannel speaker setup would be largely automatic. A shift from the current small experiment towards a more immersive future edition would arguably benefit the aims and goals of this sonic endeavor. Given the increasing occurrence and scale of mass bleaching events, it is hoped that continuing to offer alternative, improved, and more engaging representations of such critical phenomena might help sensitize audiences and inspire structural, sustained action and change in policies, behavior, and habits.
§ ACKNOWLEDGMENT
The author thanks Georgios Diapoulis for standing in for him during the conference presentation. To avoid an unnecessary carbon footprint (flying from Tokyo to Stockholm and back generates about 1,475 kg CO2) and in line with this paper's climate responsibility pledge, the author did not attend in person.
§ SUPPLEMENTARY MATERIAL
Supplementary material regarding Section <ref>, including the two sonifications, is available online at
<https://doi.org/10.5281/zenodo.7879979>
|
http://arxiv.org/abs/2306.06409v1
|
20230610110253
|
Functional Causal Bayesian Optimization
|
[
"Limor Gultchin",
"Virginia Aglietti",
"Alexis Bellot",
"Silvia Chiappa"
] |
stat.ML
|
[
"stat.ML",
"cs.LG"
] |
Limor Gultchin (University of Oxford, The Alan Turing Institute; work done at DeepMind, London, UK; equal contribution)
Virginia Aglietti (DeepMind, London, UK; equal contribution)
Alexis Bellot (DeepMind, London, UK)
Silvia Chiappa (DeepMind, London, UK)
Functional Causal Bayesian Optimization
=======================================
We propose functional causal Bayesian optimization (fCBO), a method for finding interventions that optimize a target variable in a known causal graph. fCBO extends the CBO family of methods to enable functional interventions, which set a variable to be a deterministic function of other variables in the graph. fCBO models the unknown objectives with Gaussian processes whose inputs are defined in a reproducing kernel Hilbert space, thus allowing distances among vector-valued functions to be computed. In turn, this enables functions to be selected sequentially for exploration by maximizing an expected improvement acquisition functional, while keeping the typical computational tractability of standard Bayesian optimization settings. We introduce graphical criteria that establish when considering functional interventions allows attaining better target effects, and conditions under which selected interventions are also optimal for conditional target effects. We demonstrate the benefits of the method in a synthetic and in a real-world causal graph.
§ INTRODUCTION
Finding interventions in a system that optimize a target variable is key to many scientific disciplines, including medicine, biology, and social sciences.
Causal graphs <cit.>, in which an intervention on a variable is represented as modifying the casual influence from its incoming edges, offer a powerful tool for dealing with the effects of interventions, and are therefore increasingly integrated into approaches to learning optimal policies such as bandits <cit.>, reinforcement learning <cit.>, and Bayesian optimization <cit.>.
Most works in causal Bayesian optimization (CBO) have focused on the hard intervention do(X=x), which consists in setting variable X to a constant value x. However, in many practical scenarios the investigator may be able to implement policies that also contain other types of interventions.
Consider, for example, the graph in fig:causalgraphs1(left) representing causal relationships between prostate specific antigen (PSA) and other variables. An investigator wishing to find a policy for prescribing Aspirin and Statin dosages, as well as Calories Intake (CI), that minimizes PSA might be able to consider, in addition to policies made of only hard interventions (as the one represented in fig:causalgraphs1(middle)), also policies where e.g. Statin dosage retains a dependence on Age and BMI (as the one represented in fig:causalgraphs1(right)).
Contextual interventions are achieved in <cit.> and in <cit.> by searching for different hard interventions in separate sub-groups defined by some contexts and by inducing changes in the parametrization of a node's conditional distribution via action variables, respectively. However, the first approach learns an implicit mapping between contexts and intervention values, and requires extrapolating to unseen or rarely explored areas of the context space; while the second approach can only induce some modifications of the parametrization and does not allow choice of context.
In this work, we introduce an extension of the family of methods that considers a more flexible and general type of contextual intervention, consisting in making variable X a deterministic function of other nodes in the graph. Such a functional intervention is implemented via new techniques for computing distances among functions of different variables.
Our contributions can be summarized as follows:
* We formalize the problem of finding policies made of hard and functional interventions optimizing the expectation of a target variable as the functional causal global optimization (fCGO) problem.
* We introduce two graphical criteria that establish when functional interventions could be necessary to solve the fCGO problem and when policies made of only hard interventions are sufficient, respectively.
* We introduce conditions in which a policy solving the fCGO problem also optimizes conditional expectations of the target variable.
* We propose functional causal Bayesian optimization (fCBO), a method for solving the fCGO problem that models the expectation of the target variable under each policy scope with a Gaussian process model whose inputs are defined in a reproducing kernel Hilbert space.
* We validate fCBO in a synthetic and in a real-world setting with respect to target effects, conditional target effects, and costs of interventions.
§ BACKGROUND AND SETTING
We consider a system of observable random variables with target variable Y∈ and intervenable variables ⊆\ Y, and the problem of finding a subset of and interventions on it that optimize the expectation of Y. Our goal is to introduce a method that allows two types of interventions on a variable X∈: (i) the hard intervention (X=x) consisting in setting X to value x; and (ii) the functional intervention[Functional interventions are also called conditional interventions in <cit.>.] X=π_X|_X(_X) that makes X a deterministic function of a set of variables _X ⊆\{X, Y}, called the context of X, where π_X|_X__X↦_X with __X indicating the range of _X. Both hard and functional interventions make X a deterministic function of a context _X (the hard intervention (X=x) can be viewed as a functional intervention with empty context _X=∅, setting X to value x=π_X|∅(∅) where π_X|∅∅↦ x is the empty function), and are therefore referred to as deterministic interventions <cit.>.
We specify the system's behavior using a structural causal model () M defined by the tuple ⟨, , , p() ⟩, where is a set of exogenous, mutually-independent, unobserved random variables with distribution p(), and ={f_V}_V∈ is a set of deterministic functions such that V=f_V((V), _V) with (V) ⊆\ V and _V ⊆, ∀ V∈. A deterministic intervention on X therefore replaces f_X with π_X|_X.
M has associated a directed graph, which we assume to be acyclic[A directed graph is acyclic if it has no directed paths starting and ending at the same node. A directed path is a sequence of linked nodes whose edges are directed and point from preceding towards following nodes in the sequence.], with nodes ∪ and with an edge from A to B if A∈(B) or A∈_B.
A node A with an edge into B is called a parent or direct cause of B (in this case B is called a child of A). A node A with a directed path ending at B is called an ancestor of B (in this case B is called a descendant of A).
We consider the projection of this graph into the graph that contains only nodes and that has a directed edge from V to W if V is a parent of W and a bi-directed edge between V and W if _V∩_W≠∅ (_V∩_W is an unobserved confounder between V and W), and refer to it as causal graph associated with M. Given a causal graph , we say that M is compatible with if all edges that are in the causal graph associated with M are also in .
We indicate the set of parents, ancestors, and descendants of V in with _(V), _(V) and _(V), respectively. We indicate the nodes connected to V by a bi-directed edge with _(V). We refer to the joint distribution of determined by p(), which we denote by p(), as observational distribution.
The space of deterministic interventions for a causal graph 𝒢 can be formalized using the concepts of mixed policy scope (MPS) and deterministic mixed policy (DMP) introduced in <cit.>.
A mixed policy scope for a causal graph
is a collection of pairs X_X such that (i) X ∈, _X ⊆\{X, Y};
and (ii) the graph _ obtained by removing from the incoming edges into X and by adding to directed edges from _X to X, for every X_X∈, is acyclic.
An specifies the variables in on which interventions are performed and their contexts. For example, ={∅,{, }}
for in fig:causalgraphs1(left) specifies that interventions are performed on and , and with context ∅ and {, } respectively, as graphically represented in fig:causalgraphs1(right).
A deterministic mixed policy compatible with is defined as = {π_X|_X}_⟨ X, _X ⟩∈\_⋃{π_X|∅(∅)}_⟨ X, _X ⟩∈_, where π_X|_X__X↦_X, π_X|∅(∅) denotes the value returned by the empty function, and _={X_X∈_X=∅}.
A specifies the function π_X|_X or the value π_X|∅(∅)
that replaces f_X∈ in M,
∀⟨ X, _X ⟩∈.
The replacements induce a variant M_ of M with joint distribution over denoted by p_().
We refer to p_() as interventional distribution induced by , and to an observation from p_() as an interventional data sample.
§ PROBLEM
Let μ^Y_π_𝒮 = 𝔼_p_π_𝒮[Y] denote the expectation of Y w.r.t. the interventional distribution induced by π_𝒮, which we refer to as the target effect. Our goal is to introduce a method for solving the problem of minimizing μ^Y_π_𝒮 over the space Σ of MPSs and the space of DMPs compatible with them, formally defined below.
(fCGO problem)
The functional causal global optimization (fCGO) problem is the problem of identifying a tuple (𝒮^⋆, π^⋆_𝒮^⋆) such that
𝒮^⋆, π^⋆_𝒮^⋆ = argmin_{𝒮 ∈ Σ, π_𝒮 compatible with 𝒮} μ^Y_π_𝒮 .
Importantly, Proposition 1 in <cit.> implies that the target effect μ^Y_π^_^ given by a solution of the problem (^*,π^_^) equals the one that would be obtained by also considering
stochastic interventions <cit.>.
The problem extends the causal global optimization () problem defined in <cit.> that only considers hard interventions. In sec:optimality we introduce graphical criteria that establish when only considering hard interventions might lead to a bigger target effect and when this is not the case. In addition, in sec:subgroup we introduce conditions under which a policy solving the problem is also optimal for conditional target effects.
Solving the problem requires computing distances between functions defined over different contexts. In sec:gpsurrogate we propose to model each target effect via a Gaussian process whose kernel allows computing such distances. We discuss how this approach enables us to keep the computational tractability of standard Bayesian optimization () methods while allowing to flexibly specify functional interventions.
§.§ Hard Interventions (Sub-)optimality
Let Σ_H denote the set of MPSs in Σ that contain only hard interventions.
In this section, we introduce graphical criteria that establish when restricting the search space in the fCGO problem from Σ to Σ_H might lead to a bigger target effect and when this is not the case, thereby informing the investigator about when functional interventions should be considered. The proofs are given in Appendix <ref>.
Let 𝒢 be a causal graph such that (i) ∃ C ∈ an_𝒢(Y) with C ∉ 𝐗; or (ii) ∃ C ∈ sp_𝒢(Y). If ∃ X ∈ an_𝒢(Y) ∩ 𝐗 such that {⟨X, C⟩} is an MPS, then there exists at least one SCM compatible with 𝒢 for which min_{𝒮 ∈ Σ_H, π_𝒮} μ^Y_π_𝒮 > min_{𝒮 ∈ Σ, π_𝒮} μ^Y_π_𝒮.
[Figure: two causal graphs referenced as (i) and (ii) below. Graph (i): edges C → X, X → Y, and C → Y. Graph (ii): edges C → X, X → Y, and a bi-directed (dashed) edge C ↔ Y indicating an unobserved confounder.]
In a causal graph 𝒢, if pa_𝒢(Y) ⊆ 𝐗 and sp_𝒢(Y) = ∅, there exists a DMP compatible with the MPS 𝒮 = {⟨X, ∅⟩ : X ∈ pa_𝒢(Y)} that solves the fCGO problem.
Proposition <ref> captures two conditions for sub-optimality of hard interventions: the existence of a non-intervenable variable C in _(Y) that can serve as context for a functional intervention on a variable X, as in the causal graph (i) on the right (for which ={X}); and the existence of a variable C with an unobserved confounder between it and Y that can serve as context for a functional intervention on a variable X, as in the casual graph (ii) on the right (for which ={X, C}). In both cases, a hard intervention on X would cut the paths from X to Y passing through C (X← C → Y and X← C ↔ Y respectively). Instead, a functional intervention on X with context C would keep such paths open and therefore could assign intervention values to X informed by values of C, potentially leading to a smaller target effect. Below, we provide two s and functional interventions for which this is the case.
Consider graph (i), with ℳ with = {U_C, U_X, U_Y} such that p(U_C) = p(U_X) = 𝒩(0,1) and p(U_Y) = 𝒩(1, 1), and functional assignments C = U_C, X = CU_X, Y = C X U_Y. Σ_= {^1={X∅}} with π_^1={x=π_X|∅(∅)} induces the modified M_π_^1 where Y = U_C xU_Y and μ^Y_π_^1 = 0.
In contrast, = {XC} with π_={π_X|C(C)=-1/C} induces M_ with Y = -U_Y, giving μ^Y_ = -1.0. Therefore, π_ achieves a smaller target effect than π_^1.
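The target effects in this example can be checked with a simple Monte Carlo simulation. The sketch below (hypothetical code, not part of the paper) samples the SCM for graph (i) and estimates the target effect under the hard and the functional intervention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
U_C = rng.normal(0.0, 1.0, n)
U_Y = rng.normal(1.0, 1.0, n)
C = U_C

# Hard intervention do(X = x): Y = C * x * U_Y, hence E[Y] = x * E[C] * E[U_Y] = 0.
x = 0.7
print((C * x * U_Y).mean())        # approximately 0 for any fixed x

# Functional intervention X = pi(C) = -1/C: Y = C * (-1/C) * U_Y = -U_Y.
X_func = -1.0 / C
print((C * X_func * U_Y).mean())   # approximately -1
```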
Consider graph (ii), with ℳ with = {U_CY, U_X, U_Y} such that p(U_CY) = p(U_X) = 𝒩(0, 1) and p(U_Y) = 𝒩(1, 1), and functional assignments C =U_CY, X = CU_X, Y = U_CY X U_Y. In this case, Σ_={^1={X∅},^2={C∅},^3={X∅,C∅}}
with s
π_^1={x=π_X|∅(∅)},
π_^2={c=π_C|∅(∅)}, and π_^3={x=π_X|∅(∅), c=π_C|∅(∅)}. In M_π_^1, Y = x U_CY U_Y thus μ^Y_π_^1 = 0. In M_π_^2, Y = cU_X U_CYU_Y thus μ^Y_π_^2 = 0. In M_π_^3, Y = xU_CYU_Y thus μ^Y_π_^3 = 0.
In contrast, = {XC} with π_={π_X|C(C) = -1/C} induces M_ with Y = -U_Y giving μ^Y_ = -1. Therefore π_ achieves a smaller target effect than any other containing only hard interventions.
§.§ Conditional Target Effects
In addition to potentially leading to a smaller target effect, considering functional interventions allows to deal with settings in which the investigator might wish to minimize the target effect conditioned on a set of variables. For instance, in the health example of fig:causalgraphs1(left), the investigator might want to find interventions minimizing the expectation of in a given population as well as in a specific sub-group made of individuals aged over 65, μ^_, > 65:=𝔼_p_[ > 65] – since a high percentage of prostate cancer cases are diagnosed within this sub-group <cit.> – while still not negatively affecting individuals of other ages. Such settings can be formalized as wishing to minimize the conditional target effect μ^Y_, = =𝔼_p_[Y=]̧ for ⊂\ Y and ∈̧_.
Let _ denote the intervention variables included in , _={X X_X∈}, and _X^ the context variables in for an intervention on X.
Unlike when considering only hard interventions, the following proposition shows that, under some conditions, a solution of the problem also minimizes μ^Y_, = in a restricted s space (the proof is given in Appendix <ref>).
If ^, π^_^=_∈Σ, ∈μ^Y_,
then ^, π^_^=_∈Σ^, ∈μ^Y_,= ∀⊂\ Y
such that ∩de_()=∅ and ∀∈̧_ with Σ^ = {∈Σ: _ = _^ and {X_X^^∪_X^∪: X ∈_^} is an }.
§ METHODOLOGY
We propose to solve the problem using the functional causal Bayesian optimization () method summarized in Algorithm <ref>, which assumes known casual graph and continuous variables .
first reduces the search space from Σ to a subset using the procedure described in sec:redundancy; and then solves the minimization problem in (<ref>) using a Gaussian process () g_() to model the unknown target effect μ^Y_, ∀∈, as described in sec:gpsurrogate, with the following sequential strategy. At each trial t=1,…,T: (1) _t and π^t__t are selected via the expected improvement acquisition functional () described in sec:ei; (2-3) a set of S interventional data samples is obtained and used to compute a sample mean estimate, μ̂^Y_π^t__t, of μ^Y_π^t__t; (4) (π^t__t, μ̂^Y_π^t__t) is added to the interventional dataset __t of the _t; (5) the posterior distribution of the g__t,
denoted by τ(g__t | 𝒟^I__t), is updated. Once the maximum number of trials is reached, a tuple (^, π^_^) giving the smallest estimated target effect in ={_}_∈Σ is returned.
Notice that Algorithm <ref> only requires realizations from p_π^t__t(Y) (and could also operate if given directly μ̂^Y_π^t__t instead). This is a considerable practical advantage compared to context-specific reward approaches such as the one in <cit.> that, similarly to non-causal contextual methods <cit.>, require values of the contexts and of the target variable resulting from the intervention at that specific context values.
Similarly to recent approaches in contextual <cit.>, can directly operate on aggregate rewards.
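A schematic sketch of this loop is given below. It is only meant to illustrate the flow of the algorithm; GPModel, candidate_dmps, and run_experiment are hypothetical placeholders rather than the authors' implementation.

```python
def fcbo(scopes, candidate_dmps, run_experiment, n_trials, n_samples):
    models = {S: GPModel(S) for S in scopes}      # one GP surrogate per policy scope
    data = {S: [] for S in scopes}
    for _ in range(n_trials):
        # (1) choose the scope and DMP maximizing expected improvement per unit of cost
        S_t, pi_t = max(((S, pi) for S in scopes for pi in candidate_dmps(S)),
                        key=lambda sp: models[sp[0]].ei_per_cost(sp[1]))
        # (2-3) intervene, draw interventional samples, estimate the target effect
        y_hat = sum(run_experiment(pi_t) for _ in range(n_samples)) / n_samples
        # (4) store the pair and (5) update the posterior of the selected surrogate
        data[S_t].append((pi_t, y_hat))
        models[S_t].update(data[S_t])
    # return the scope/DMP with the smallest estimated target effect
    return min(((S, p, y) for S in scopes for p, y in data[S]), key=lambda t: t[2])
```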
§.§ Search Space Reduction
The cardinality of Σ grows exponentially with the cardinality of and the number of possible context sets _X for each X. Therefore, solving the problem by exploring the entire set could be prohibitively expensive.
Even if Σ has small cardinality, reducing the search space would simplify the problem by reducing the number of target effects to be modelled. We propose to use the results in <cit.> to reduce the search to the subset of non-redundant s included in Σ, denoted by , which is guaranteed to contain a solution to the problem. For completeness and clarity, in this section we describe these results in the setting of s.
Let ^'⊆ indicate that ^'_X⊆_X, ∀X^'_X∈^' with _^'⊆_. Furthermore, let π_^'⊆ indicate that π_X|^'_X(_̧X^')=∫π_X|_X(^̧'_X ∪”̧_X) p_(”̧_X^̧'_X) d”̧_X, ∀ X ∈_^', ”̧_X∈__X\^'_X. Finally, let _ denote d-separation in , and _\ X the modification of obtained by removing node X and its incoming and outgoing edges.
An is said to be non-redundant if there exists an compatible with and ∈ such that μ^Y_π^'_≠μ^Y_ ∀^'⊂ and π^'_⊂.
The following proposition gives a graphical criterion for identifying .
An is non-redundant if and only if (1) _⊆__(Y) and (2) Y __\ X C _X\ C for every X ∈𝐗_ and C ∈_X.
§.§ Gaussian Process Surrogate Models
We model the unknown target effect μ^Y_ for each using a g_(). Differently from existing works on Bayesian functional optimization that focus on univariate functional inputs, can include scalar values as well as functions potentially defined on different input spaces.
[Figure: two graphs referenced below. Top graph: edges C_1 → X, C_2 → X, C_2 → Z, X → Y, Z → Y. Bottom graph: edges C_1 → X, C_2 → Z, X → Y, Z → Y.]
For instance, for = {X{C_1, C_2}, Z{C_2}} with _ given on the top right, π_X|{C_1, C_2} is defined over _C_1×_C_2, while π_Z|C_2 over _C_2.
Alternatively, for = {X{C_1}, Z{C_2}} with _ given on the bottom right, π_X|C_1 is defined over _C_1, while π_Z|C_2 over _C_2.
We address this complexity by introducing a kernel function for g_() that allows to compute distances among the mixed inputs while handling the different input dimensionality.
More specifically, g_(π) ∼𝒢𝒫(m_(π), K^θ_(π, π^')), where π, π^'∈ (we omit the subscript to simplify the notation[In this section, a π_ indicates a vector, rather than a set, of interventions.]), and m_ and K^θ_ denote the prior mean and covariance functional with hyperparameters θ. Notice that :=ℝ^|_|×ℬ(_) where[|| indicates the cardinality of the set .] ℝ^|_| is the space of scalar values for __ while ℬ(_) is the space of bounded vector-valued functions on _=⋃_X∈__X. Given an interventional dataset 𝒟^I_ for , for which we assume a Gaussian likelihood, the posterior distribution τ(g_ | 𝒟^I_) can be computed by standard updates <cit.>.
We initialize m_ to a zero mean functional and extend the kernel to consider mixed inputs as detailed below.
Kernels for Functional .
We define K^θ_ as the kernel K^θ_S(π, π')=σ^2_f exp(-||π - π'||^2/ 2ℓ^2), where θ = (σ^2_f, ℓ) and where
||π - π^'|| represents a distance between mixed inputs to the [While we discuss the kernel, this procedure can be used to compute any stationary kernel involving the distance between functional inputs similarly to <cit.>.]. Let π_ and π_ denote the vectors whose elements are the scalar values and the functions included in π, respectively.
We define ||π - π'||^2 as ||π - π'||^2 = ||π_ - π^'_||^2 + ||π_ - π^'_||^2_ℋ_κ_, with ||π_ - π^'_||^2 indicating the square of the Euclidean distance in ℝ^|_|, and ||π_ - π^'_||^2_ℋ_κ_ the distance between functions in the vector-valued reproducing kernel Hilbert space (, <cit.>) ℬ(_)=ℋ_κ_ described below.
Specifically, ℋ_κ_ is an with vector-valued reproducing kernel κ_^: __×__→ℝ^|_| × |_|
where denotes the hyper-parameters and _ = {X_X∈_X ≠∅}. We refer to κ_^ as the functional intervention kernel to distinguish it from K^θ_S.
We thus have ||π_ - π^'_||^2_ℋ_κ_ = π_-π^'_π_-π^'__ℋ_κ_, where ··_ℋ denotes the inner product in the space ℋ. Evaluating this quantity requires computing κ_^ at different input values for the variables in _, say _ and ^'_, for π_ and π^'_ respectively.
We write the vector of functions π_ included in the ℋ_κ_ as π_(·) = ∑_i = 1^N_ακ_^(_^i, ·)_i with _i ∈ℝ^|_| and _^i ∈__ and let π^'_(·) = ∑_i=1^N_βκ_^(_^i, ·)_i with _i ∈ℝ^|_|. This implies that the inner product π_-π^'_π_-π^'__ℋ_κ_ can be written as
∑_i=1^N_α∑_j=1^N_α_i^⊤κ_^(^i_, ^j_) _j
+∑_i=1^N_β∑_j=1^N_β_i^⊤κ_^(^i_, ^j_) _j
- 2 ∑_i=1^N_α∑_j=1^N_β_i^⊤κ_^(^i_, ^j_) _j.
To construct κ_^, we propose to augment the input space
by including a task index for each function π_X|_X in , we redefine κ_^ (__×𝒯) × (__×𝒯) →ℝ^|_| × |_| where 𝒯 is the space of integer values from 1 to |_|. For every realization of the context variables and the task index, say (_, t)^i, we can then evaluate κ_^((_, t)^i, (_, t)^j). We assume the covariance between functions defined on different input spaces, for which t^i ≠ t^j, to be 0[Alternative kernel constructions where this assumption is relaxed are discussed in Appendix <ref>.]. Instead, we let the covariance structure across function values associated with different inputs for t^i = t^j be determined by a task-specific kernel, which we denote by k^t^i. Denote by ^i_[t^i] the subset of values included in ^i_ for the contexts of the t^i task and by [t^i] the subset of hyper-parameters for t^i included in . We have that κ_^((_, t)^i, (_, t)^j) is equal to k^t^i(^i_[t^i], ^j_[t^j]) with hyper-parameters [t^i] if t^i=t^j and to 0 otherwise. The kernel k^t^i might differ across tasks both in terms of functional form and hyper-parameter values. This allows to impose different characteristics in terms of smoothness for each function π_X|_X included in π.
§.§ Acquisition Functional
We sequentially select interventions by numerically[Alternatively, the functional gradient w.r.t. functions in a
could be derived analytically (see <cit.>).] maximizing the expected improvement (ei) per unit of cost _(·) across the s in . Given an interventional dataset _, for each ∈ the functional ei () is given by:
EI_𝒮(π) = σ^2_𝒮(π | 𝒟^I_𝒮) [γ(π) Φ(γ(π)) + ϕ(γ(π))] / cost_𝒮(π),
where σ^2_𝒮(π | 𝒟^I_𝒮) = K^θ_𝒮(π, π | 𝒟^I_𝒮), Φ(·) and ϕ(·) are the cdf and pdf of a standard Gaussian random variable respectively, and γ(π) = (m_𝒮(π | 𝒟^I_𝒮) - g^⋆) / K^θ_𝒮(π, π | 𝒟^I_𝒮),
with g^⋆ denoting the optimum observed for g_𝒮 across the non-redundant MPSs.
m_𝒮(π | 𝒟^I_𝒮) and K^θ_𝒮(π, π | 𝒟^I_𝒮)
denote the posterior parameters of τ(g_𝒮 | 𝒟^I_𝒮). At every trial t of the optimization, the MPS and the DMP are chosen by numerically solving 𝒮_t, π^t_𝒮_t = argmax_{𝒮, π_𝒮 ∈ Π_𝒮} EI_𝒮(π_𝒮), with 𝒮 ranging over the non-redundant MPSs.
cost_𝒮(π) denotes the cost associated to π.
We consider two types of costs: (i) cost_𝒮(π) = |𝐗_𝒮|; and (ii) cost_𝒮(π) = ∑_{X ∈ 𝐗_𝒮} ∫ π_X|𝐂_X(𝐜_X) d𝐜_X, with each integral taken over the domain of 𝐂_X (i.e., the sum of the area under π_X|𝐂_X over all X ∈ 𝐗_𝒮), which can be seen as a measure of the units of intervention given to a population whose context values 𝐜_X are uniformly distributed over the domain of 𝐂_X. Notice that the second cost requires knowledge of the context domains at initialization.
We use the first cost in the experiments of sec:echain, and the second cost in the experiments of sec:health.
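For reference, a standard expected-improvement-per-cost computation is sketched below (the posterior mean and variance at a candidate DMP, the best observed value, and the candidate's cost are assumed to be given); the exact normalization in the formula above may differ slightly.

```python
import numpy as np
from scipy.stats import norm

def ei_per_cost(post_mean, post_var, best_observed, cost):
    # Expected improvement per unit of cost for one candidate DMP.
    sigma = np.sqrt(np.maximum(post_var, 1e-12))
    gamma = (post_mean - best_observed) / sigma
    ei = sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    return ei / cost
```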
§ RELATED WORK
There exist two other -type methods in the literature that can achieve contextual interventions, namely <cit.> and <cit.>. performs different hard interventions in separate sub-groups defined by some contexts after observing context values.
Interventional data samples, formed by context values, intervention values, and target effect, are used to fit a model over the potentially high-dimensional context-intervened variables space. Therefore, can only be used in settings in which the investigator observes the values of the context variables, say =$̧, selects an intervention and observes the resulting target effect across units with=$̧, rather than an aggregate target effect across all possible context values in a population. This is not feasible in many applied problems (in a/b testing platforms, in which outcomes are often measured as an aggregate across a large population that spans an entire distribution of contexts), and might lead to sup-optimal policies for unseen or rarely observed context values. In addition, this method defines the surrogate model for each on _ thus reducing the flexibility of the learned policy by not encoding the existence of different _X for each X in _.
considers systems described by in which X∈ is of the form X = f_X(_(X), _X) + U_X, where _X is a set of action variables that parametrize f_X whose values can be set by the investigator to induce a change in the parametrization.
Therefore, a contextual intervention in modifies a node's original functional assignment rather than replacing it as in . This might lead to more limited interventions and does not allow change of contexts. In addition, this method can achieve contextual interventions only in settings in which the system's contains action variables. When this is not the case, can only implement hard interventions (see the experiment of sec:health).
Finally, unlike , does not reduce the search space and cannot handle unobserved confounders.
Extensions of <cit.> to solve functional global optimization () problems have been studied by searching over the space of Bernstein polynomials <cit.>, by constructing a sequence of low-dimensional search spaces <cit.>, or by representing the functional inputs as elements in an () <cit.>. This work takes an approach similar to , but considers a varied search space and its causal reduction. More importantly, thanks to a simple kernel construction, it enables functional , which has generally focused on univariate functional inputs, to deal with settings where the inputs are multi-task functions.
§ EXPERIMENTS
We compare[We cannot compare to as: (i) in our settings the values of the contexts are not observed before intervening, and only an aggregate target effect across contexts is observed post intervention; (ii) this method does not allow considering s that do not share the same contexts.] with , , , and on the synthetic graph in sec:echain (), and on
the healthcare graph in fig:causalgraphs1(a) ().
The experiments aim at highlighting three main advantages of using to find optimal interventions. The first advantage is the ability to achieve smaller target effects compared to methods that use only hard interventions. We assess this by looking at the convergence to the optimum. The second advantage is the ability to perform well w.r.t. conditional target effects. We demonstrate this in the experiments, by computing the performance gain for π_ on sub-group =$̧, which is defined aspgain(π_,=)̧ = μ̂^Y_= -μ̂^Y_π_, =, whereμ̂^Y_=denotes an estimate of the conditional expectation ofYgiven=$̧ w.r.t. the observational distribution and μ̂^Y_π_, = an estimate of the conditional target effect.
The third advantage is the ability to craft flexible and more targeted s that can incur similar or lower cost, while still ensuring a smaller target effect than policies made of only hard interventions. We exemplify this in the experiments where we assume a cost function given by _(π)=∑_X ∈_∫___Xπ_X|_X(_̧X) d_̧X.
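The second cost can be approximated numerically; the sketch below estimates the area under each intervention function on a grid over its (assumed box-shaped) context domain and sums over the intervened variables. Function and variable names are illustrative.

```python
import numpy as np

def intervention_cost(policies, domains, grid=50):
    # policies: callables pi_X mapping an (n, d) array of context values to n values.
    # domains: matching list of (lower, upper) bounds for each context vector.
    total = 0.0
    for pi, (lo, hi) in zip(policies, domains):
        lo, hi = np.atleast_1d(lo).astype(float), np.atleast_1d(hi).astype(float)
        axes = [np.linspace(l, h, grid) for l, h in zip(lo, hi)]
        mesh = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, len(lo))
        total += pi(mesh).mean() * np.prod(hi - lo)   # mean value times domain volume
    return total
```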
[Table: summary of the search space and the optimization problem targeted by each compared method, as described in the following paragraph.]
The different search spaces of , , , , and are summarized in the table above. An intervention in and is performed on all variables or on a subset of variables in simultaneously: considers only hard interventions, thus its search space contains only _, _X=∅={X_X X∈, _X=∅}); while considers functional interventions with a fixed _X≠∅ over trials, its search space contains only one formed by tuples X_X with _⊆, denoted by _⊆, _X≠∅. and with hard interventions, denoted by -h, consider the space of s containing only hard interventions Σ_. Finally, performs interventions via actions variables A = {A_X}_X∈ thus exploring the power set 𝒫_A (with the convention that no intervention on X corresponds to removing _X from the ). While aims at solving the problem, and -h target the problem, and the problem. Finally, solves a global optimization problem (go), while a problem in the action variable space, denoted by ^*. In all experiments, we consider settings where the , , and problems have unique solutions, and the go optimum coincides with the optimum.
, , , and .
While does not impose restrictions in terms of context variables used for functional interventions beyond acyclicity of _, for ease of demonstration and for computational reasons, in the experiments we only consider keeping the original parents as contexts. In other words, we set _X=_(X) for each functional intervention. We make the same choice for . To demonstrate performance on different choices for , we consider linear and functional intervention kernels κ_^ in the and experiments, respectively. We use the same functional intervention representation for .
For each ∈ we numerically optimize the acquisition functions on a grid whose size is set to ^|_| + 1 where is a hyper-parameter. We initialize by randomly generating a single and associated target effect for each ∈Σ. We provide average results across the 20 different initializations.
.
In the experiments, we consider both restricted to hard interventions (-h) and with contextual interventions (by augmenting the with an action variable for each variable in ). In the experiments, the is given and does not contain action variables. Therefore, we follow <cit.> and consider only hard interventions on , , and . We run the algorithm[We used the code companion to <cit.> available at <https://github.com/ssethz/mcbo>.] by setting the random seed controlling both the initial interventional data and the optimization of the acquisition function to values 1,…,20. We report results across the 20 different seeds. Cross-validation with values 0.05, 0.5, and 5 on the hyper-parameter β for the ucb acquisition function, as done in <cit.>, does not give major differences in the performance (we report the results for β=5).
§.§ Experiments
[Figure: chain graph with edges X → Z, Z → Y, W → Y, and X → Y; the associated SCM is given below.]
X = U_X, W = U_W
Z = -0.5X + U_Z
Y = -W -3ZX + U_Y
We first experiment on the chain graph with the associated SCM given above (see Appendix <ref> for details). fig:aug_chain_full_results(left) shows how considering mixes of hard and functional interventions allows the smallest target effect to be reached.
fig:aug_chain_full_results(middle) shows how and differ in terms of conditional target effects defined for X<0 and X>0. Due to the existence of the interaction term -3ZX, minimizing Y would require setting Z to a negative value when X<0 and to a positive value when X>0. However, this cannot be achieved via hard interventions that set Z to a fixed value irrespective of X as in . As a consequence , which selects ^ = {⟨ Z, ∅⟩, ⟨ W, ∅⟩} and π^_^ = {-1, 1}, achieves a very low performance gain for X>0, pgain(π^_^⋆,X>0). Instead,
selects 𝒮^ = {⟨ Z, X⟩, ⟨ W, ∅⟩} and π^_^= {π^_Z|X, 1}, where the linear function π^_Z|X (shown as a dashed red line in fig:aug_chain_full_results(right)) has a slope that gives an optimal Z value for both sub-groups thus leading to an evenly distributed performance gain.
§.§ Experiments
-0.2cm
For the experiments, we use the by <cit.> (see Appendix <ref> for details).
fig:health_full_results shows the results obtained with =5.
In these experiments, achieves the smallest target effect by selecting ^ = {⟨, ∅⟩, ⟨, (, ) ⟩, ⟨, ∅⟩} and π^_^ = { 0.1, π^_|,, 1}. and select ^ = {⟨, ∅⟩, ⟨, ∅⟩, ⟨, ∅⟩}, and π^_^ = {0.1, 1, 1}. -h does not reach convergence.
fig:health_full_results(middle) displays π^_|, selected by (left) and π^_|∅(∅)=1 selected by as a constant function over and (right). These two plots show that, while methods that consider only hard interventions are forced to assign intervention values uniformly across the context space, methods that also allow functional interventions can concentrate on specific sub-groups, in this case characterized by lower values of Age and . Being able to differentiate among interventions assigned to different sub-groups has important implications in terms of cost _^(π^_^).
fig:health_full_results(right) shows that incurs almost the same cost as . This result demonstrates another key property of functional interventions: taking the context values into account allows the investigator to assign interventions to units in the population characterized by context values that lead to smaller target effects.
Similar results are observed with = 8 (fig:health_grid8_full_results). achieves the smallest target effect (fig:health_grid8_full_results, left), and incurs a lower cost compared to (fig:health_grid8_full_results, right). In this setting converges to ^ ={∅, (Age, )} with π^*_^* = {0.1,π^*_|Age, }. Due to the more complex π^*_|Age (fig:health_grid8_full_results(middle, left)), which allocates the highest dosages to mid-range value of and , the investigator can avoid intervening on thus lowering the overall cost of the intervention
while still achieving an overall smaller target effect.
§ CONCLUSION
We proposed the method for finding policies made of hard and functional interventions that optimize a target effect. We introduced graphical criteria that establish when functional interventions could be necessary to achieve optimal target effects and when hard interventions are sufficient. Furthermore, we showed that optimizing a target effect by considering functional interventions allows the investigator to identify policies that are also optimal w.r.t. conditional target effects. We demonstrated the benefit of the proposed approach on a synthetic and on a real-world causal graph. Future work will explore the use of gradient-based optimization methods for the acquisition functional, as well as the development of more flexible kernel construction for the functionals (see Appendix <ref>).
These extensions would enable the identification of more flexible functional interventions while speeding up the convergence of the algorithm.
The authors would like to thank Michalis Titsias, Alan Malek, and Eleni Sgouritsa for valuable discussions.
§ PROOFS
Proposition <ref>
Let be a causal graph such that (i) ∃ C∈_(Y) with C∉; or (ii) ∃ C∈sp_(Y). If ∃ X ∈_(Y) ∩ such that {⟨ X, C ⟩} is an , then there exists at least one compatible with for which min_∈Σ_, ∈μ^Y_>min_∈Σ, ∈μ^Y_.
Case (i):
Assume that there exists C∈_(Y) with C∉ and X ∈_(Y) ∩ such that {⟨ X, C ⟩} is an . As X ∈_(Y), there exists a directed path from X to Y, say X → X_i → X_i-1→⋯→ X_1 → Y without loss of generality. Let M = ⟨, , , p() ⟩ be an such that
C = U_C, U_C∼𝒩(0,1),
X_i = X, X_i-1 = X_i, …, X_1 = X_2,
Y = X_1 C U_Y, U_Y ∼𝒩(1,1).
M is compatible with .
In this , any π_ with ∈Σ_ would give μ_π_^Y=𝔼_π_[Y]=0.
In contrast, a π_ including the functional intervention π_X|C(C)= -1/C would result in Y = - U_Y and therefore μ_π_^Y = -1, giving min_∈Σ_, ∈μ^Y_=0>-1≥min_∈Σ, ∈μ^Y_.
Case (ii):
Assume that there exists C∈sp_(Y) and X ∈_(Y) ∩ such that {⟨ X, C ⟩} is an . As X ∈_(Y), there exists a directed path from X to Y, say X → X_i→ X_i-1→⋯→ X_1 → Y without loss of generality. Let M = ⟨, , , p() ⟩ be an such that
C = U_CY, U_CY∼𝒩(0,1),
X_i = X, X_i-1 = X_i, …, X_1 = X_2,
Y = X_1 U_CY U_Y, U_Y ∼𝒩(1,1).
M is compatible with . In this , any π_ with ∈Σ_ would give
μ_π_^Y=𝔼_π_[Y]=0. In contrast, a π_ containing the functional intervention π_X|C(C)= -1/C, would result in Y = - U_Y and therefore μ_π_^Y = -1, giving min_∈Σ_, ∈μ^Y_=0>-1≥min_∈Σ, ∈μ^Y_.
In the following proposition we use the notation _ to indicate the modification of obtained by removing the outgoing edges from .
Proposition <ref>
In a casual graph , if _(Y)⊆ and sp_(Y)=∅ there exists a compatible with ={X∅: X ∈_(Y)} that solves the problem.
Consider ∈Σ for and π_𝒮 compatible with . Let = _(Y) \ ((_∪_) ∩_(Y)). As _(Y)⊆,
we can define the _ = {X∅: ∀ X ∈_(Y)}. Denote by p_π^*__(Y) the distribution of Y induced by an optimal π^*__ compatible with _pa, such that
∫__Y Y p_π^__(Y)dY≤∫__Y Y p_π__pa(Y)dY, for every π__pa compatible with _, and let = _Y ×__∪_×_. Exploiting the rules of do-calculus <cit.> and σ-calculus <cit.> we obtain
μ_π_^Y
= ∫_ Y p_π_(Y _∪_∪) p_π_(_∪_∪)d_∪_ d dY _ A
= ∫_ Y p_π_(Y _(Y)) A2.8cm (rule 1 σ-calculus) Y __ (_∪_∪)\_(Y) _(Y)
= ∫_ Y p(Y _(Y)) A3.2cm (rule 2 σ-calculus) Y __, _, ___ (_(Y)\(_(Y) ∩_))
= ∫_ Y p(Y (_(Y))) A2.6cm (rule 2 do-calculus) Y ___(Y)_(Y)
= ∫_ Y p_π__(Y) A≥∫_ Y p_π^__(Y) A
= μ^Y_π^__pa,
where __, _, __ denotes d-separation in both _, _ and __.
Proposition <ref>
If ^, π^_^=_∈Σ, ∈μ^Y_,
then ^, π^_^=_∈Σ^, ∈μ^Y_,= ∀⊂\ Y
such that ∩de_()=∅ and ∀∈̧_ with Σ^ = {∈Σ: _ = _^ and {X_X^^∪_X^∪:X ∈_^} is an }.
Assume, by contradiction, that (^, π^_^), with π^_^={π_X|_X^^^^}_X∈_X^^, is a solution to the problem but there exist ⊂\ Y and a value ∈̧_ such that the tuple (^1, π_^1) with ^1 ∈Σ^ and π_^1={π_X|_X^^1^^1}_X∈_X^^1∈ satisfies μ^Y_π_^1, = < μ^Y_π^_^,=. As ^1 ∈Σ^, we can construct ^2 = {X_X^^∪_X^^1∪: X ∈_^} and the compatible π_^2={π^^2_X|_X^^∪_X^^1∪}_X ∈_^ with
π^^2_X|_X^^∪_X^^1∪=
π_X|_X^^1^^1 if ∈ [-̧δ,+̧δ]
π_X|_X^^^^ otherwise,
for a small enough δ>0. As ∩_() =∅, variables in are not affected by interventions on variables in _^, and therefore p_π^_^()= p_π_^1()=p(). Thus we obtain:
μ^Y_π_^2 = ∫__μ^Y_π_^2, ='̧ p_π_^2(='̧)d'̧
= ∫_[-̧δ,+̧δ]μ^Y_π_^2, ='̧ p_π_^2(='̧)d'̧ + ∫__\ [-̧δ,+̧δ]μ^Y_π_^2, ='̧ p_π_^2(='̧)d'̧
= ∫_[-̧δ,+̧δ]μ^Y_π_^1, ='̧ p_π_^1(='̧)d'̧ + ∫__\ [-̧δ,+̧δ]μ^Y_π^_^, ='̧ p_π^_^(='̧)d'̧
< ∫_[-̧δ,+̧δ]μ^Y_π^_^, ='̧ p_π^_^(='̧)d'̧ + ∫__\ [-̧δ,+̧δ]μ^Y_π^_^, ='̧ p_π^_^(='̧)d'̧
=μ^Y_π^_^,
with contradicts the assumption that (^, π^_^) is a solution to the problem.
§ ALTERNATIVE KERNEL CONSTRUCTION
The kernel function κ_^ introduced in sec:gpsurrogate sets the covariance between the elements in the vector π_ associated to a π_ to 0, thus restricting the type of functions that can be selected during optimization[Notice that, for hard interventions, this corresponds to limiting the range of values that can be set when intervening.].
[Figure: graph with edges C_1 → X, C_2 → X, C_2 → Z, X → Y, Z → Y, referenced below.]
For instance, consider the graph on the right with = {X(C_1, C_2), ZC_2} and = {π_X|{C_1, C_2}, π_Z|C_2}. The proposed kernel function would set Cov(π_X|{C_1, C_2}, π_Z|C_2) = 0. While a study of the effect of choosing different covariance structures on the optimal target effect goes beyond the scope of this paper, in this section we provide alternative kernel constructions that relax this constraint.
Given a π_, one can define the correlation between elements in π_ by introducing a |_|-dimensional vector of parameters for each function π_X|_X in π_ such that the j-th term ω_j=1 if the j-th term in _ is in _X and ω_j=0 otherwise. For instance, for = {π_X|{C_1, C_2}, π_Z|C_2}=π_, we have ω_1 = ω_2 = 1 for π_X|{C_1, C_2} as both variables in _ = {C_1, C_2} are in _X, while ω_1 = 0 and ω_2 = 1 for π_Z|C_2 as only C_2 is in _Z.
We can then redefine κ_^ to be an kernel on an input space given by product between the the context variables and the parameters. Denote by ^i, ^j two possible values for the vector, for instance we could have ^i=[1, 1]^⊤ and ^j=[0, 1]^⊤ in the example above; and by ^̧i = [c_1^i, …, c_|_|^i]^⊤ and ^̧j = [c_1^j, …, c_|_|^j]^⊤ two vector of values for _. We can define κ_^ : (__×Ω) × (__×Ω) →ℝ^|_| × |_| where Ω is the space of values for each vector and κ_^((,̧)^i, (,̧)^j) = κ_^((^̧i)^⊤^i, (^̧j)^⊤^j) = γexp(-0.5/l^2 ∑_n=1^|_|(c^i_nω^i_n - c^j_nω^j_n)^2) where = {γ, l}. For the example above, we can write κ_^((^̧i)^⊤^i, (^̧j)^⊤^j) = γexp(-0.5/l^2 [(c_1^iω_1^i - c_1^jω_1^j)^2 + (c_2^iω_2^i - c_2^jω_2^j)^2]). When γ≠0, ^i=[1, 1]^⊤ and ^j=[0, 1]^⊤, this kernel would return a covariance between π_X| C_1, C_2 and π_Z| C_2 equal to κ_^((^̧i)^⊤^i, (^̧j)^⊤^j) = γexp(-0.5/l^2 [(c_1^i)^2 + (c_2^i - c_2^j)^2]). The covariance would thus depend on the context values in the overlapping part of the context variables space and a correction term (c_1^i)^2. Instead of fixing the values in to either zero or one based on the graph structure, one could think about optimizing the values that are different from zero so as to achieve a higher flexibility in terms of allowed covariance while still imposing structure via the zero values.
As a more general kernel construction, given a , a vector of parameter values ^i and a vector of context values ^̧i = [c_1^i, …, c^i_|_|]^⊤, one could define the augmented input vector ^̧i_aug = [(^̧i)^⊤^i, (^̧i) ^i, t]^⊤ (and similarly for two alternative vector of values ^̧j and ^j) given by the concatenation of two |_|-dimensional vector obtained by (^̧i)^⊤^i and a task index t that gives the index of the function in π__, similarly to what was introduced in sec:gpsurrogate.
For an augmented vector of hyper-parameters = [γ, l, γ̃, l̃], one could then define the following kernel:
κ_^(_̧aug^i, _̧aug^j) = 𝕀_t = t'γ^2 exp(-0.5/l^2∑_n=1^|_| (^̧i_aug,n - ^̧j_aug, n)^2)
+ 𝕀_t≠ t'γ̃^2 exp(-0.5/l̃^2∑_n=|_|+1^2|_| (^̧i_aug,n - ^̧j_aug, n)^2)
=𝕀_t = t'γ^2 exp(-0.5/l^2∑_n=1^|_| (c^i_nω_n - c^j_nω'_n)^2)
+ 𝕀_t≠ t'γ̃^2 exp(-0.5/l̃^2∑_n=|_|+1^2|_| (c^i_nω_n - c^j_nω'_n)^2),
where c^i_n is the n-th term of the ^̧i vector (similarly for ^̧j and ^i), and 𝕀_t=t' is an indicator function equal to one if t=t' and zero otherwise. The first term in (<ref>) represents an rbf kernel capturing the covariance structure within the t-th function in π_ while the second term is again an rbf kernel that captures the covariance across functions in π_. Differently from the kernel described above we now have two sets of hyper-parameters: γ, l for the first kernel and γ̃, l̃ for the second. This gives higher flexibility in terms of the functional interventions we can learn and thus the target effect values we can achieve. As in the previous kernel we can let the parameters in , as well as in , change to capture different level of correlations or set them equal to one and zero depending on the structure of the graph. In the latter case and for the example introduced above, we would have ω_1 = ω_2 = 1 for π_X|C_1, C_2 which would lead to a standard kernel for the first term in (<ref>). We could then set γ̃=0 to have a zero covariance across functions or finally vary ω_3 and ω_4 for both π_X|C_1, C_2 and π_Z|C_2 to allow for increasing level of correlation.
§ CHAIN EXPERIMENTS
For the experiments we use the following :
X = U_X, W = U_W, Z = -0.5X + U_Z, Y = -W - 3ZX + U_Y, with U_X, U_W, U_Z, U_Y ∼ N(0,1).
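As an illustration (not the policies selected by the algorithm), the following sketch simulates this SCM and compares a hard intervention on Z with a simple context-dependent policy Z = clip(X, -1, 1), keeping do(W = 1) in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
U_X, U_Y = rng.normal(0, 1, n), rng.normal(0, 1, n)

def target_effect(z_policy, w_value=1.0):
    X = U_X
    Z = z_policy(X)                     # interventions replace the assignment of Z
    Y = -w_value - 3.0 * Z * X + U_Y
    return Y.mean()

print(target_effect(lambda X: np.full_like(X, -1.0)))   # hard do(Z=-1): ~ -1
print(target_effect(lambda X: np.clip(X, -1.0, 1.0)))   # functional Z=clip(X): < -1
```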
We set the range for hard interventions on both Z and W to [-1, 1]. The set of non-redundant s is = {{Z∅}, {W∅}, {Z∅, W∅}, {Z{X}}, {Z{X}, W∅}}.
We set = 10 and represent each functional intervention with N_α=N_β=10 samples for the context variables. We sample the coefficients _i (for i=1,…, N_α) and _j (for j=1, …, N_β) uniformly in the interval [-0.27, 0.27], in order to keep the range of values obtained for the intervened variables following a functional intervention similar to the ranges set for the hard interventions. For each ∈, we initialize the linear kernel κ^_ with = 1. Exploration is hard to achieve when the models for including functional interventions are initialized with K^θ_ and hyper-parameters θ = (ℓ, σ^2_f) = (1, 1). We thus perform hyper-parameters search exploring continuous values σ^2_f ∈ [1, 10000] and ℓ∈ [1, 30], which results in selecting σ_f^2 = 7000, and ℓ = 20 for both and .
For and , which consider only hard interventions and thus do not suffer from exploration issues, we initialize K^θ_ with θ = (1, 1). For we use the default setting (Matérn 5/2 kernel), as it is not possible to tune the kernel and corresponding hyper-parameters. In order to run with contextual interventions, we use the augmented with action variables X = U_X, W= U_W + A_W, Z = -0.5X + U_Z + A_Z, Y = -W -3ZX + U_Y.
In this setting, the average cpu execution time for a single run is ∼ 6 minutes, while for a single run is ∼ 14 minutes.
§ HEALTH EXPERIMENTS
For the health experiments, we use the SCM from <cit.>:
Age = U_Age, CI = U_CI, BMR = 1500 + 10 × U_BMR,
Height = 175 + 10 × U_Height,
Weight = (BMR + 6.8 × Age - 5 × Height)/13.7 + CI × 150/7716,
BMI = Weight / (Height/100)^2,
Aspirin = σ(-8 + 0.1 × Age + 0.03 × BMI),
Statin = σ(-13 + 0.1 × Age + 0.2 × BMI),
PSA = 6.8 + 0.04 × Age - 0.15 × BMI - 0.6 × Statin + 0.55 × Aspirin
      + σ(2.2 - 0.05 × Age + 0.01 × BMI - 0.04 × Statin + 0.02 × Aspirin) + U_PSA,
with U_Age ∼ 𝒰(55, 75), U_CI ∼ 𝒰(-100, 100), U_BMR ∼ t𝒩(-1, 2), U_Height ∼ t𝒩(-0.5, 0.5), U_PSA ∼ 𝒩(0, 0.4), where 𝒰(·, ·) denotes a uniform distribution, t𝒩(a, b) a standard Gaussian distribution truncated between a and b, and σ(·) the sigmoidal transformation defined as σ(x) = 1/(1 + exp(-x)).
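A simulation sketch of this SCM is given below (hypothetical helper code based on the assignments above); hard interventions are implemented by overriding the corresponding structural assignments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def trunc_std_normal(rng, lo, hi, n):
    # Standard normal truncated to [lo, hi] via simple rejection sampling.
    out = np.empty(0)
    while out.size < n:
        draw = rng.normal(0.0, 1.0, 2 * n)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:n]

def mean_psa(n, rng, aspirin=None, statin=None, ci=None):
    age = rng.uniform(55, 75, n)
    ci_v = np.full(n, ci) if ci is not None else rng.uniform(-100, 100, n)
    bmr = 1500 + 10 * trunc_std_normal(rng, -1, 2, n)
    height = 175 + 10 * trunc_std_normal(rng, -0.5, 0.5, n)
    weight = (bmr + 6.8 * age - 5 * height) / 13.7 + ci_v * 150 / 7716
    bmi = weight / (height / 100) ** 2
    asp = np.full(n, aspirin) if aspirin is not None else sigmoid(-8 + 0.1 * age + 0.03 * bmi)
    sta = np.full(n, statin) if statin is not None else sigmoid(-13 + 0.1 * age + 0.2 * bmi)
    psa = (6.8 + 0.04 * age - 0.15 * bmi - 0.6 * sta + 0.55 * asp
           + sigmoid(2.2 - 0.05 * age + 0.01 * bmi - 0.04 * sta + 0.02 * asp)
           + rng.normal(0, 0.4, n))
    return psa.mean()

rng = np.random.default_rng(0)
print(mean_psa(200_000, rng))                                   # observational mean PSA
print(mean_psa(200_000, rng, aspirin=0.1, statin=1.0, ci=1.0))  # under hard interventions
```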
We set the ranges for hard interventions on Aspirin, Statin, and CI to [0.1, 1].
The set of non-redundant s is = {{∅},
{∅}, {∅},
{∅, ∅}, {∅, ∅}, {∅, ∅}, {∅, ∅, ∅}, {{, }}, {{, }}, {{,}, {, }}, {{, }, ∅}, {∅,
{, }},{{, }, ∅},{{, }, ∅}, {{, }, {, }, ∅}, {∅, {, }, ∅}, {{, }, ∅, ∅}}.
We represent each functional intervention with N_α=N_β=10 samples for the context variables. We sample the coefficients _i (for i=1,…, N_α) and _j (for j=1, …, N_β) uniformly in the interval [0, 3.3], in order to keep the total cost of functional interventions and hard interventions comparable. The kernels K^θ_ and κ_^ are initialized with θ = (1, 1) and = (1, 1) for each ∈. In this setting, the average cpu execution time for a single run is ∼ 3 hours and 20 minutes, while for a single run is ∼ 10 hours.
|
http://arxiv.org/abs/2306.08076v1
|
20230613184628
|
Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization
|
[
"Xiner Li",
"Shurui Gui",
"Youzhi Luo",
"Shuiwang Ji"
] |
cs.LG
|
[
"cs.LG"
] |
Phase synchronization in a sparse network of randomly connected neurons under the effect of Poissonian spike inputs
Elbert E. N. Macau
===================================================================================================================
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where test distribution shifts from training distribution. With rising application demands and inherent complexity, graph OOD problems call for specialized solutions. While data-centric methods exhibit performance enhancements on many generic machine learning tasks, there is a notable absence of data augmentation methods tailored for graph OOD generalization. In this work, we propose to achieve graph OOD generalization with the novel design of non-Euclidean-space linear extrapolation. The proposed augmentation strategy extrapolates both structure and feature spaces to generate OOD graph data. Our design tailors OOD samples for specific shifts without corrupting underlying causal mechanisms.
Theoretical analysis and empirical results demonstrate the effectiveness of our method in solving target shifts, showing substantial and consistent improvements across various graph OOD tasks.
§ INTRODUCTION
Machine learning algorithms typically assume training and test data are independently and identically distributed (i.i.d.). However, distribution shift is a common problem in real-world applications, which substantially degrades model performance.
The out-of-distribution (OOD) generalization problem deals with learning scenarios where test distributions shift from training distributions and remain unknown during the training phase.
The area of OOD generalization has gained increasing interest over the years, and multiple OOD methods have been proposed <cit.>. Although both general OOD problems and graph analysis <cit.> have been intensively studied, graph OOD research is only in its early stage <cit.>. With various applications and unique complexity, graph OOD problems call for specific solutions. Data augmentation (DA) methods have brought significant boosts in generalization capability and performance across multiple fields <cit.>, creating a promising direction for graph OOD studies. Currently, there is a lack of DA methods designed for OOD generalization on graphs.
Conventional data augmentations increase the amount of data and act as regularizers to reduce over-fitting, which empirically enhance model performance in previous studies <cit.>. Many DA techniques <cit.>, including graph data augmentations (GDA), exclusively interpolate data samples to generate new ones. Since machine learning methods themselves also perform interpolation <cit.>,
interpolation-based DA boosts model performances by making overall progress in learning. Mixup <cit.> is a typical example, showing improvements in accuracy and also OOD generalization for computer vision tasks.
However, many practical tasks are out-of-distribution rather than in-distribution (ID). Thus, models are expected to extrapolate instead of interpolate.
Currently, few augmentation studies focus on extrapolation, especially for graphs, and the performance gain from DA therefore appears limited in OOD tasks.
The distribution regions to which models cannot generalize are also hardly reachable when generating augmentation samples with traditional techniques, which is a substantial obstacle for OOD generalization.
In this work, we propose to solve OOD generalization in graph classification tasks from a data-centric perspective.
To stimulate the potential improvement of DA in OOD tasks, we aim at data extrapolation, essentially, generating OOD data samples.
Graph data has the complex nature of topological irregularity and connectivity, with unique types of distribution shifts in both feature and structure.
Practically, models cannot be expected to solve unknown shifts. Thus, injecting environment information <cit.> in training to convey the types of shifts is a promising solution for OOD.
We propose an environment-aware framework with linear extrapolation designed in graph structural and feature space. Structural linear extrapolation is enabled with graph splicing and subgraph extraction techniques, while feature linear extrapolation performs space-spanning with selected variant features.
We theoretically justify that samples generated from linear extrapolation are both causally-valid and tailored for specific OOD shifts.
Theoretical and empirical analyses show that linear extrapolation can generalize over certain shifts. Extensive experiments show that our method substantially outperforms both OOD learning and data augmentation methods on graph tasks.
Comparison with prior methods. In previous studies, several graph OOD methods have been proposed <cit.>. They either focus on identifying causal subgraphs without using the environment information or establish regularizations, aiming to achieve invariant prediction.
Our work, in contrast, addresses OOD generalization from a data perspective, proposing to extrapolate with augmentation strategies, which can be combined with invariance regularizations in parallel. Moreover, we not only use environment information, but also construct new environments by design, injecting extra OOD information to guide the model in generalization.
As for GDA methods, some works empirically show improvements in generalization, but few target OOD problems or generate OOD samples to generalize over graph distribution shifts. In contrast, we offer a graph augmentation method that extrapolates in structure and feature for OOD generalization, supported by theoretical analyses.
Considering techniques, our design of graph splice serves the extrapolation of global features and avoids add-on nodes to preserve graph structures, divergent from linker design approaches for molecules <cit.>. In addition, we design subgraph extraction by label-environment-aware pair learning, a novel technique different from previous studies.
A more detailed discussion of related works is provided in Appendix <ref>.
§ PROBLEM SETTING
Graph notations. We denote a graph as G = (A,X,E), where A∈ℝ^n× n, X∈ℝ^p× n, and E∈ℝ^q× n are the adjacency, node feature, and edge feature matrices, respectively. We assume n, m, p, and q are the numbers of nodes, edges, node features, and edge features respectively. Additionally, we assume a set of latent variables {z_i∈ℝ^f}_i=1^n form a matrix
Z=[z_1,z_2,⋯,z_n]∈ℝ^f× n. For graph-level tasks, each graph has a target label Y∈𝒴, and for node-level tasks there is a label for each node.
OOD settings.
The environment formalism following invariant risk minimization (IRM) is a common setting for OOD studies <cit.>.
This framework assumes that training data form groups, known as environments. Data are similar within the same environment but dissimilar across different environments. Since many different shifts can exist between training and test data, models are usually not expected to solve all shifts. Instead, the target type of shift is conveyed using environments.
Specifically, the target shift between training and test data, though more significant, should be similarly reflected among different training environments. In this case, OOD methods can potentially grasp the shift by learning among different training environments.
In this work, we follow this formalism and use environment information in augmentation strategies to benefit OOD generalization. Environments are given as environment labels ε_i ∈ℰ for each data sample.
Graph structure and feature distribution shifts.
Graph data are complex in that it contains features as well as topological structures.
Therefore, graph distribution shifts can happen on both features and structures, which possesses different properties and should be handled separately.
Feature distribution shifts happen on node or edge features, and we consider node features in this work. In this case, shifts are solely on the node feature distribution as P^tr(X)≠ P^te(X), while P^tr(A) = P^te(A), where P^tr(·) and P^te(·) denote training and test distributions, respectively.
In contrast, structure distribution shift is the more distinctive and complex case in graph OOD. Structural shifts can happen in the distribution of A or the conditional distribution between X and A, resulting in P^tr(X,A)≠ P^te(X,A). In graph-level tasks, two common structural shift domains are graph size and graph base <cit.>, the latter also known as scaffold <cit.> in molecule data. Specifically, graph size refers to the number of nodes in a graph, and graph base refers to the non-functional backbone substructure irrelevant to targets.
§ LINEAR EXTRAPOLATION IN GRAPH SPACE
We propose the GDA strategy of input-space linear extrapolation for structure and feature, inspired by the philosophy of “linear interpolation” from Mixup <cit.>. Linear extrapolation of the data distribution extends beyond the training distribution in both structure and feature spaces of input graph data, essentially teaching the model how to behave outside the training distribution via reachable OOD samples.
§.§ Causal Analysis
We first establish causal analyses following prior invariant learning works <cit.>. As shown in Figure <ref>(b), C, S_1, S_2∈𝒵 are the latent variables in high-dimensional space that are causally associated with the target Y, non-causally associated with Y, and independent of Y, respectively.
The environment ℰ is target-irrelevant as well as observable.
In the non-Euclidean structure space, we posit that information from the latent space is wholly reflected in the graph structure, so that
C and ℰ determine respective subgraphs of a graph <cit.>.
Formally, we define subgraphs caused by C as causal subgraphs Ginv, and subgraphs caused by ℰ as environmental subgraphs Genv.
Since graphs with the same label should contain invariant causal subgraphs, causal subgraphs are potentially extractable from label-invariant graph pairs, and environmental subgraphs from environment-invariant graph pairs.
Similarly, considering distribution shifts in the feature space, we assume C and ℰ determine respective elements of feature vectors.
For single-node feature x∈ℝ^p, where x=X for node-level tasks and x⊆X for graph-level tasks, let p=i+v. We define node features determined by C as invariant node features xinv∈ℝ^i, while other node features are variant features xvar*∈ℝ^v. In practice, with environment information, it is realistic to assume we can learn to select a subset of the variant features xvar∈ℝ^j substantially determined by ℰ, where j ≤ v.
§.§ Linear Extrapolation Formulation
Linear extrapolation, which constructs samples beyond the known range while maintaining the same direction and magnitude of known sample differences, is a central concept in our approach.
Generally, for data points (x_1,y_1) and (x_2,y_2), linear extrapolation (x_3,y_3) is defined as x_3 = x_1 +a(x_2-x_1), y_3 = y_1 +a(y_2-y_1), where a∈ℝ with a>1 or a<0.
Therefore, given two feature vectors x_1 and x_2, we can define linear feature extrapolation as
x_3 = x_1 + a(x_2-x_1), s.t. a∈ℝ, a>1 or a<0.
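As a concrete illustration, the minimal NumPy sketch below contrasts interpolation (a ∈ (0,1)) with extrapolation (a > 1 or a < 0) for feature vectors; the function name and toy vectors are illustrative only.

```python
import numpy as np

def extrapolate_features(x1: np.ndarray, x2: np.ndarray, a: float) -> np.ndarray:
    """Return x3 = x1 + a * (x2 - x1)."""
    return x1 + a * (x2 - x1)

x1 = np.array([0.2, 0.5, 1.0])
x2 = np.array([0.4, 0.1, 0.8])
print(extrapolate_features(x1, x2, 0.5))   # a in (0, 1): interpolation, stays inside the training hull
print(extrapolate_features(x1, x2, 1.5))   # a > 1: extrapolation, moves outside the training hull
```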
The extension of linear extrapolation to graph structure requires the definition of structural linear calculations.
We define graph addition, G_1 + G_2, as the splicing of two graphs, resulting in unions of their vertex and edge sets. Graph subtraction, G_2 - G_1, is defined as subtracting the largest isomorphic subgraph of G_1 and G_2 from G_2.
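A rough networkx sketch of these two operations is given below, under the assumption that graph addition reduces to a disjoint union of vertex and edge sets (G-Splice later adds generated bridges on top of this); the subtraction operation is only commented, since computing the largest common isomorphic subgraph is intractable in general and is approximated in Section <ref> by learned causal/environmental subgraph extraction. All names are illustrative.

```python
import networkx as nx

def graph_add(g1: nx.Graph, g2: nx.Graph) -> nx.Graph:
    """Graph addition G1 + G2: splice two graphs by taking the union of their
    vertex and edge sets (no bridge edges in this toy version)."""
    return nx.disjoint_union(g1, g2)

# Graph subtraction G2 - G1 (removing the largest isomorphic subgraph of G1 and
# G2 from G2) is intractable in general; in practice it is approximated by
# extracting causal / environmental subgraphs from label- or environment-invariant pairs.

g1 = nx.cycle_graph(5)    # 5 nodes, 5 edges
g2 = nx.path_graph(4)     # 4 nodes, 3 edges
spliced = graph_add(g1, g2)
print(spliced.number_of_nodes(), spliced.number_of_edges())  # 9 8
```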
Let D{G_tr} = {(G_1, y_1), (G_2, y_2), …, (G_N, y_N)} be the N-sample graph training set.
Given the discrete nature of graph operations, we formulate the linear extrapolation of graphs below.
[Structural Linear Extrapolation]
Given graphs G_i,G_j∈ D{G_tr}, we define 1-dimension structural linear extrapolation on D{G_tr} as G_sle^1 = a_i · G_i + b_ij· (G_j - G_i), where a_i, b_ij∈{0, 1}.
We extend to define N-dimension structural linear extrapolation:
G_sle^N=∑_i=1^N a_i · G_i + ∑_i=1^N∑_j=1^N b_ij· (G_j - G_i) = 𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F,
where 𝐚=[a_1, …, a_N]^⊤, B={b_ij}^N× N, 𝐆=[G_1, …, G_N]^⊤, 1 is a N-element vector of ones, and ⟨·, ·⟩_F is the Frobenius inner product. Let c_ij∈{0, 1} indicate the existence of causal subgraphs in (G_j - G_i). Then the label y for G_sle^N is defined as y = (∑_i=1^N a_i · y_i+∑_i=1^N∑_j=1^N c_ijb_ij· y_j)/(∑_i=1^N a_i+∑_i=1^N∑_j=1^N c_ijb_ij)=(𝐚^⊤𝐲 + ⟨ C∘ B, 1𝐲^⊤⟩_F)/ (𝐚^⊤1 + ⟨ C, B ⟩_F), where ∘ denotes Hadamard product, 𝐲=[y_1, …, y_N]^⊤, and C={c_ij}^N× N.
Note that we do not need to avoid splicing multiple graphs to ensure linearity in Eq. <ref>, due to the high dimensionality of graph structure.
In this context, 𝐚^⊤𝐆 denotes splicing multiple graphs together; while ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F denotes splicing together multiple subtracted subgraphs. These definitions enable structural linear extrapolation in the non-Euclidean graph space.
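As a worked example of the label rule in Definition <ref>, the small NumPy computation below splices whole graphs G_1 and G_2 together with one extracted subgraph (G_3 - G_1) that contains a causal part; the labels and index choices are hypothetical.

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0])         # whole graphs G1 and G2 are spliced
B = np.zeros((3, 3)); B[0, 2] = 1.0   # one subtracted subgraph (G3 - G1) is spliced in
C = np.zeros((3, 3)); C[0, 2] = 1.0   # ... and it contains a causal subgraph (c_13 = 1)
y = np.array([0.0, 1.0, 1.0])         # per-graph labels y1, y2, y3

numer = a @ y + ((C * B) * y[None, :]).sum()   # only causal parts vote with their labels
denom = a.sum() + (C * B).sum()
print(numer / denom)                           # (0 + 1 + 1) / 3 = 0.6667, a soft label
```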
§.§ Linear Extrapolation for OOD Generalization
In this subsection, we justify that linear extrapolation can generate OOD samples respecting specific shifts while maintaining causal validity, i.e., preserving underlying causal mechanisms.
First, we establish an assumption that combining causal structures causes a sample with mixed label for structural linear extrapolation.
[Causal Additivity]
Let (G_1,y_1), (G_2,y_2) and (G_3,y_3) be graph-label pairs. If G_3=G_1inv+G_2inv+Genv, then y_3=ay_1+(1-a)y_2, where Genv is any combination of environmental subgraphs, and a∈ (0,1).
This causal assumption holds valid in a wide range of graph classification tasks, and we further discuss its scope of application in Appendix <ref>.
Next, we provide two definitions to establish the conditions under which structural linear extrapolation can cover certain environment values, e.g., a specific graph size or base.
[Size Extrapolation Achievability] Given a N-sample set of graphs D{G}, we say that a graph size |X| is achievable by size extrapolation if there exists an N-dimension structural linear extrapolation G_sle^N on D{G}, s.t. G_sle^N=(A,X,E).
[Base Extrapolation Achievability] Given a N-sample set of graphs D{G}, we say that a graph base ℬ is achievable by base extrapolation if there exists an N-dimension structural linear extrapolation G_sle^N on D{G}, s.t. (G_sle^N)env=ℬ.
The following theorems assert that structural linear extrapolation can create OOD samples covering at least two environments in opposite directions of the distribution, respecting size and base shifts each, and ensure the causal validity of these samples.
Given an N-sample training dataset D{G_tr}, its N-dimension structural linear extrapolation can generate sets D{G_1} and D{G_2} s.t. (G_1)env<(G_tr)env <(G_2)env for ∀G_tr,G_1,G_2, where < denotes “less in size” for size extrapolation and “lower base complexity” for base extrapolation.
Given an N-sample training dataset D{G_tr} and its true labeling function for the target classification task f(G),
if D{G_sle^N} is a graph set sampled from the N-dimension structural linear extrapolation of D{G_tr} and Assumption <ref> holds, then for ∀ (G_sle^N,y)∈ D{G_sle^N}, y=f(G_sle^N).
Proofs and further analysis are provided in Appendix <ref>.
These theorems show that structural linear extrapolation has the capability to generate OOD samples that are both plausible and diverse.
The justification of feature linear extrapolation is relatively straightforward and provided in Section <ref>.
Thus, we provide theoretical bases for the applicability of linear extrapolation in graph OOD tasks.
§ G-SPLICE FOR STRUCTURAL LINEAR EXTRAPOLATION
In this section, we specify structural linear extrapolation as a feasible augmentation method with detailed implementations, termed G-Splice.
Using environment information, the method extrapolates global structural features while preserving structural information that causes the label. The approach is underpinned by theoretical analysis for structural linear extrapolation, providing causally-valid OOD samples that are tailored for specific shifts.
The overall model constructs diverse augmentation samples, as shown in Figure <ref>(a).
In the following subsections, we describe the technical modules of splicing, component graph selection, and post-sampling procedures separately.
§.§ Graph Splice
The action of splicing a group of component graphs is essentially a conditional edge generation task, which we refer to as “bridge” generation. We generate a predicted number of bridges, along with corresponding edge attributes, between the given component graphs to join multiple components into a single graph.
In this work, we use conditional variational autoencoders (cVAE) <cit.>, though other generative models may also be used.
We adopt cVAE as the major bridge generator for its adequate capability and high efficiency, as compared with diffusion models <cit.> in Appendix <ref>.
The bridge generator takes as input a group of component graphs, denoted as G_1,⋯,G_f = (X_1,A_1,E_1),⋯, (X_f,A_f,E_f).
The cVAE encoder produces a latent variable distribution. Specifically, we construct the inference model as
q_ϕ(Z|X_1,A_1,E_1,⋯, X_f,A_f,E_f) = ∏^n_i=1, v_i∼G_j q_ϕ(z_i|X_j,A_j,E_j) = ∏^n_i=1 𝒩(z_i|μ_i, diag(σ_i^2)),
where v_i∼G_j denotes that the i-th node v_i belongs to component graph G_j, μ_i and σ_i are the generated mean and standard deviation vectors of the i-th latent distribution, and n is the total number of nodes in all component graphs.
The encoder q_ϕ is parameterized by three-layer graph isomorphism networks (GIN) <cit.>.
The generative model produces the probability distribution for bridges A^b and corresponding attributes E^b:
p_θ(A^b,E^b|Z) = ∏^n_i=1 ∏^n_j=1 p_θ(A_ij^b, e_ij^b|z_i,z_j),
p_θ(A_ij^b, e_ij^b|z_i,z_j) = MLP_θ(z_i,z_j) if v_i∼G_s, v_j∼G_t s.t. s≠t, and (0, None) otherwise,
where A_ij^b is the ij-th element of A^b and e_ij^b∈E^b is the corresponding edge attribute vector.
By sampling B times from p_θ(A^b,E^b|Z), we sample B pairs of bridge-attribute vectors to complete bridge generation.
To train the bridge generator, we optimize the variational lower bound ℒ with respect to the variational parameters:
ℒ_θ,ϕ = 𝔼_q_ϕ log p_θ(A^b|Z) + α 𝔼_q_ϕ log p_θ(E^b|Z) - β KL[q_ϕ||p(Z)],
where KL[q(·)||p(·)] is the Kullback-Leibler divergence between q(·) and p(·). We take the Gaussian prior
p(Z) = ∏_ip(z_i) = ∏_i 𝒩(z_i|0,I).
α and β are hyperparameters regularizing bridge attribute and KL divergence respectively.
Bridge number prediction. To predict the number of prospective bridges between a set of component graphs, a pre-trained GNN parameterized by η produces probabilities for the bridge number B,
p_η(B) = GNN_η(X_1,A_1,E_1,⋯, X_f,A_f,E_f).
When generating bridges, we first sample the number B with the predicted probabilities from the categorical distribution.
Note that we do not include new nodes as part of the bridge, since we aim at preserving the local structures of the component graphs and extrapolating certain global features. Additional manually added structures provide no extrapolation benefit, while their interpolation effects are not proven beneficial, which is reflected in Appendix <ref>.
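A compact PyTorch sketch of such a bridge generator is given below, assuming dense inputs and replacing the three-layer GIN encoder with a single toy message-passing layer so the example stays self-contained; class and variable names such as `BridgeCVAE` and `comp_id` are illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class BridgeCVAE(nn.Module):
    def __init__(self, in_dim: int, hid: int = 64, lat: int = 32):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid)                       # stand-in for the GIN encoder
        self.mu, self.logvar = nn.Linear(hid, lat), nn.Linear(hid, lat)
        self.dec = nn.Sequential(nn.Linear(2 * lat, hid), nn.ReLU(),
                                 nn.Linear(hid, 1))             # per-pair bridge logit

    def encode(self, x, adj):
        h = torch.relu(adj @ self.msg(x))                       # one step of neighbour aggregation
        return self.mu(h), self.logvar(h)

    def decode(self, z, comp_id):
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        logits = self.dec(torch.cat([zi, zj], dim=-1)).squeeze(-1)
        cross = comp_id.unsqueeze(0) != comp_id.unsqueeze(1)    # only pairs across components
        return logits.masked_fill(~cross, float("-inf"))

    def forward(self, x, adj, comp_id):
        mu, logvar = self.encode(x, adj)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization trick
        return self.decode(z, comp_id), mu, logvar

# Toy usage: a 5-node and a 3-node component stacked into one block-diagonal input.
x = torch.randn(8, 16)
adj = torch.eye(8)                                              # placeholder adjacency
comp_id = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1])
logits, mu, logvar = BridgeCVAE(16)(x, adj, comp_id)
bridge_probs = torch.sigmoid(logits)        # the predicted number B of bridges is then sampled from these
```

Training would then optimize the variational lower bound described above over these bridge logits and edge attributes, with the KL penalty toward the standard Gaussian prior.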
§.§ Component Graph Selection
Whole graphs. Corresponding to 𝐚^⊤𝐆 in Eq. <ref>, we use whole graphs from the training data as a category of component graphs, which possesses computational simplicity and enables extrapolation.
Causal subgraphs and environmental subgraphs. The part of ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F in Eq. <ref> requires the operation of subtracting the largest isomorphic subgraph, which is practically infeasible. In this case, using target and environment label information, we approximate (G_j - G_i) for particular graph pairs. Specifically, (G_j - G_i)≈ G_jinv for G_j, G_i with different labels but the same environment, and (G_j - G_i)≈ G_jenv for G_j, G_i with the same label but different environments.
Therefore, we can use extracted causal/environmental subgraphs as (G_j - G_i).
We perform pair-wise similarity matching on label-invariant graphs to extract causal subgraphs, and on environment-invariant graphs to extract environmental subgraphs, following Sec. <ref>.
Since environment for test data remains unknown and these subgraphs are inaccessible during test, using these subgraphs in data augmentation for training can be an optimum strategy.
Let G_ε_1 and G_ε_2 be two graphs with the same label y_1 but different environments ε_1 and ε_2.
Ginv should be the subgraph that both graphs contain and have most in common, i.e.,
Ginv = argmax_G_s I(G_s,ε_1, G_s,ε_2),
where I(·) measures similarity between graphs and G_s denotes possible subgraphs.
Since graph neural networks aggregate information of an ego graph, i.e., the local subgraph within k-hop of a node, to the embedding of that node through message passing, nodes with similar ego graphs should have similar embeddings.
Therefore, in the node embedding space,
nodes from G_ε_1 and G_ε_2 with similar representations should be a part of Ginv.
We encode both graphs into node embeddings with a GNN and calculate their weighted similarity matrix S^w, each element of which is the weighted cosine similarity of a pair of nodes from G_ε_1 and G_ε_2, ,
S^w_ij=w*S_c(z_i,z_j), for v_i∼G_ε_1, v_j∼G_ε_2,
where w is a trainable parameter and S_c(·) is the cosine similarity calculation.
The scores in the weighted similarity matrix S^w are considered as probabilities to sample the causal subgraph from either G_ε_1 or G_ε_2.
We use label-invariant and environment-variant graph pairs to pre-train a causal subgraph searching network, which is optimized for the sampled causal subgraphs to be capable of predicting the label Y solely.
Similarly, environment-invariant and label-variant graph pairs are used to pre-train the environmental subgraph searching network with the environment label.
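The pair-wise matching step can be sketched as follows, assuming node embeddings from a GNN encoder are already available; `causal_node_scores` and the Bernoulli sampling at the end are illustrative simplifications of the pre-trained subgraph searching networks.

```python
import torch
import torch.nn.functional as F

def causal_node_scores(embed_1: torch.Tensor, embed_2: torch.Tensor,
                       w: torch.Tensor) -> torch.Tensor:
    """Weighted cosine similarity S^w between nodes of a label-invariant graph pair."""
    z1 = F.normalize(embed_1, dim=-1)       # [n1, d] node embeddings of G_eps1
    z2 = F.normalize(embed_2, dim=-1)       # [n2, d] node embeddings of G_eps2
    return w * (z1 @ z2.T)                  # [n1, n2], used as sampling probabilities

embed_1, embed_2 = torch.rand(6, 32), torch.rand(9, 32)
w = torch.nn.Parameter(torch.ones(()))      # trainable scalar weight
S_w = causal_node_scores(embed_1, embed_2, w)
keep_prob = S_w.max(dim=1).values.clamp(0, 1)       # best match score per node of graph 1
causal_mask = torch.bernoulli(keep_prob.detach())   # sampled causal-node indicator
```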
§.§ Post-sampling Procedure
To enable structural linear extrapolation, we need to make selections of component graphs and sample bridges accordingly, as well as assign the labels and environments.
Since linear extrapolation has infinitely many choices of a and b, to enable training we simplify the augmentation into three options: (1) single causal subgraphs Ginv; (2) causal and environmental subgraphs spliced, Ginv + f·Genv; (3) whole graphs spliced, f·G.
With the pre-trained component graph extractors and bridge generator, we apply these strategies to augment graph data in OOD classification tasks.
The actual number of component graphs f is set as a hyperparameter, tuned and determined through OOD validation during training. The augmentation selections are also tuned as a hyperparameter, with at least one option applied. We label the generated graphs following the definition of structural linear extrapolation.
Considering extrapolation in environments, for size OOD tasks, we create up to three new environments with the three options according to the size distribution of the augmented graphs.
For base/scaffold OOD tasks, we create up to two new environments, one for Ginv and the other for Ginv + f·Genv and f·G, since the former option constructs graphs without bases/scaffolds, while the latter options construct graphs with multiple bases/scaffolds combined.
All original graphs and augmentation graphs then form the augmented training distribution with add-on environments. To make adequate use of the augmented environment information for OOD generalization, we optionally apply an invariant regularization <cit.> during learning, reaching our final objective,
ψ^* := argmin_ψ 𝔼_(G,y)∼∪_ε∈{ℰ∪ℰ_A}P_ε[ℓ(f_ψ(G),y)] + γ Var_ε∈{ℰ∪ℰ_A}[𝔼_(G,y)∼ P_ε ℓ(f_ψ(G),y)],
where ℰ_A are the augmented environments, f_ψ is the prediction network for OOD tasks, ℓ(·) calculates cross-entropy loss, and Var[·] calculates variance.
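A minimal sketch of this objective, assuming per-sample logits and integer environment ids that already include the augmented environments, is shown below; the function name is illustrative and the penalty mirrors the VREx-style variance term rather than the exact released code.

```python
import torch
import torch.nn.functional as F

def erm_plus_variance(logits, labels, env_ids, gamma: float = 1.0):
    """ERM risk over all (original + augmented) environments plus a variance
    penalty on the per-environment risks."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    env_risks = torch.stack([losses[env_ids == e].mean() for e in env_ids.unique()])
    return losses.mean() + gamma * env_risks.var(unbiased=False)

logits = torch.randn(12, 3)                          # toy predictions for 12 graphs, 3 classes
labels = torch.randint(0, 3, (12,))
env_ids = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)  # two original envs + one augmented env
print(erm_plus_variance(logits, labels, env_ids, gamma=10.0))
```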
§ FEATX FOR FEATURE LINEAR EXTRAPOLATION
We implement feature linear extrapolation as FeatX, a simple label-invariant data augmentation strategy designed to improve OOD generalization for graph feature shifts, which is applicable to both graph-level and node-level tasks, as shown in Figure <ref>.
Environment information is used to selectively perturb non-causal features while causal features determining the label are preserved.
During the augmentation process, with knowledge of the domain range of the node features, we span the feature space with extrapolation for non-causal features.
In contrast to Mixup which exclusively interpolates, with knowledge of non-causal features and their domain, FeatX covers extrapolation as well as interpolation to advance in OOD generalization.
We introduce technical details in the following subsections and end with theoretical support for the effectiveness of feature linear extrapolation.
§.§ Non-Causal Feature Selection
Contrary to graph structure, features reside in a relatively low-dimensional space. Direct extrapolation of node features that have causal relationships with the target may not always yield beneficial outcomes (Appendix <ref>).
Therefore, we only perturb the selected variant features xvar.
The selection of xvar is implemented as learning an invariance mask M∈ℝ^p based on a variance score vector S_V∈ℝ^p and a threshold T.
The variance score vector S_V measures the variance of each feature element with respect to the target, in which high scores represent large variances and therefore indicate features belonging to xvar.
Variance scores are learned using label and environment information.
A group of label-invariant and environment-variant samples should have similar invariant features xinv while variant features xvar vary majorly; therefore, their feature variances increase the variance score vector S_V.
Conversely, feature variances of environment-invariant label-variant samples reduce the variance scores.
The invariance mask M selects xvar and masks out other features by applying the threshold T on the variance score vector S_V.
Formally,
S_V = k_1𝔼_y∈ YVar_P_y [x] - k_2𝔼_ε∈ℰVar_P_ε[x], M =[S_V > T],
where k=[k_1,k_2] and T are trainable parameters, and Var_P[x] calculates the variance of node features for samples in distribution P.
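The scoring logic can be sketched with hard group statistics as below; in the method itself k_1, k_2, and T are trainable and the mask is learned end-to-end, so this non-differentiable version is only an illustration, and all names are hypothetical.

```python
import torch

def invariance_mask(x, y, env, k1: float, k2: float, T: float) -> torch.Tensor:
    """Score each feature dimension: label-group variance raises the score,
    environment-group variance lowers it; thresholding yields the mask M."""
    var_label = torch.stack([x[y == c].var(dim=0, unbiased=False) for c in y.unique()]).mean(0)
    var_env = torch.stack([x[env == e].var(dim=0, unbiased=False) for e in env.unique()]).mean(0)
    s_v = k1 * var_label - k2 * var_env
    return (s_v > T).float()                # 1 -> treated as a variant (non-causal) feature

x = torch.rand(20, 8)                       # 20 nodes with 8-dimensional features
y = torch.randint(0, 2, (20,))
env = torch.randint(0, 3, (20,))
M = invariance_mask(x, y, env, k1=1.0, k2=1.0, T=0.0)
```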
§.§ Node Feature Extrapolation
We apply the mask M and perturb the non-causal node features to achieve extrapolation on xvar, without altering the topological structure of the graph.
Let the domain for x be denoted as 𝒟, which is assumed to be accessible. Valid extrapolations must generate augmented samples with node feature X_A∈𝒟 while X_A ≁ P^train(X). We ensure the validity of extrapolation with the generalized modulo operation mod (Appendix <ref>): ∀X∈ℝ^p, (X mod 𝒟)∈𝒟.
Given each pair of samples D_ε_1, D_ε_2 with the same label y but different environments ε_1 and ε_2, FeatX produces
X_A = M×(((1+λ)X_ε_1 - λ'x_ε_2) mod 𝒟) + (1-M)×X_ε_1,
(A,E) = (A_ε_1,E_ε_1),
where λ, λ' ∼𝒩(a, b) are sampled for each data pair.
Empirically, we achieve favorable extrapolation performance and faster convergence with λ'=λ∈ℝ^+∼Γ(a, b), which we use in experiments, with the shape parameter a and scale parameter b of gamma distribution as hyperparameters.
Note that D_ε_i,D_ε_j can be both graph-level and node-level data samples, and in the graph-level case x_ε_2⊆X_ε_2 is the features of a random node.
During the process, the augmented samples form a new environment.
Replacing original node features in training samples by the augmented ones, our optimization process is formulated as
ψ^* := argmin_ψ,k,T 𝔼_(D_ε_i,D_ε_j,y)∼ P^train[ℓ(f_ψ(X_A,A,E),y)],
where f_ψ is the prediction network for OOD tasks and ℓ(·) calculates cross-entropy loss.
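A rough sketch of the augmentation itself is given below, assuming every feature dimension lies in a known interval [lo, hi] so the generalized modulo simply wraps values back into the domain; the complement mask (1 - M) keeps the causal features intact, λ follows a Gamma distribution as described above (note that torch parameterizes Gamma by concentration and rate), and all names are illustrative.

```python
import torch

def generalized_mod(x, lo: float, hi: float):
    """Wrap x into [lo, hi) by shifting with integer multiples of the range length."""
    return lo + torch.remainder(x - lo, hi - lo)

def featx_augment(x1, x2_node, M, lam, lo: float = 0.0, hi: float = 1.0):
    """Extrapolate the selected variant features of x1 while keeping its causal features."""
    extrapolated = generalized_mod((1.0 + lam) * x1 - lam * x2_node, lo, hi)
    return M * extrapolated + (1.0 - M) * x1

x1 = torch.rand(20, 8)                               # node features of a sample from env 1
x2_node = torch.rand(8)                              # features of a random node from env 2
M = (torch.rand(8) > 0.5).float()                    # invariance mask over feature dimensions
lam = torch.distributions.Gamma(2.0, 1.0).sample()   # lambda' = lambda, sampled per data pair
x_aug = featx_augment(x1, x2_node, M, lam)
```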
§.§ Solving Feature-based Graph Distribution Shifts
FeatX enables extrapolation on the selected variant features, while causal features are preserved.
By generating causally valid samples with OOD node features, FeatX essentially expands the training distribution range.
Theoretical analysis evidences that our extrapolation spans the feature space outside P^train(X) for xvar, thereby transforming OOD areas into ID.
We present the following theorem showing that, under certain conditions, FeatX substantially solves feature shifts on the selected variant features for node-level tasks.
Let n_A be the number of samples FeatX generates and f_ψ be the well-trained network with FeatX applied.
If (1) ∃ (X_1,⋯,X_j)∈ P^train from at least 2 environments, s.t. (X_1var, ⋯, X_jvar) span ℝ^j, and (2) ∀X_1 ≠X_2, the GNN encoder of f_ψ maps G_1= (X_1, A,E) and G_2 = (X_2, A,E) to different embeddings,
then, with ŷ=f_ψ(X,A,E), ŷ ⊥ Xvar as n_A →∞.
Proof is provided in Appendix <ref>.
Theorem <ref> states that, given sufficient diversity in environment information and expressiveness of GNN, FeatX can achieve invariant prediction regarding the selected variant features. Therefore, FeatX possesses the capability to generalize over distribution shifts on the selected variant features.
Extending on the accuracy of non-causal selection, if xvar*=xvar,
we achieve causally-invariant prediction in feature-based OOD tasks.
Thus, FeatX possesses the potential to solve feature distribution shifts.
§ EXPERIMENTAL STUDIES
We evaluate the effectiveness of our method on graph OOD classification tasks.
Setup.
For all experiments, we select the best checkpoints for OOD tests according to results on OOD validation sets; ID validation and ID test are also used for comparison if available.
For fair comparisons, we use unified GNN backbones for all methods in each dataset, specifically, GIN-Virtual <cit.> and GCN <cit.> for graph-level and node-level tasks, respectively.
Experimental details and hyperparameter selections are provided in Appendix <ref>.
Baselines. We compare our method with both OOD learning algorithms and graph data augmentation methods, as well as the empirical risk minimization (ERM). OOD algorithms include IRM <cit.>, VREx <cit.>, DANN <cit.>, Deep Coral <cit.>, GroupDRO <cit.>, and graph OOD methods DIR <cit.>, EERM <cit.> and SRGNN <cit.>. Graph data augmentation methods include DropNode <cit.>, DropEdge <cit.>, Feature Masking <cit.>, FLAG <cit.>, Graph Mixup <cit.>, LISA <cit.>, and G-Mixup <cit.>. Note that among the baselines, DIR, DropNode and G-Mixup only apply for graph-level tasks, while EERM and SRGNN only apply for node-level tasks.
§.§ OOD Performance on Structure Shifts
Datasets & Metrics.
To evidence the generalization improvements of structure extrapolation, we evaluate G-Splice on 8 graph-level OOD datasets with structure shifts.
We adopt 5 datasets from the GOOD benchmark <cit.>, HIV-size, HIV-scaffold, SST2-length, Motif-size, and Motif-base, where “-" denotes the shift domain.
We construct another natural language dataset Twitter-length <cit.> following the OOD split of GOOD. Additionally, we adopt protein dataset DD-size and molecular dataset NCI1-size following <cit.>.
All datasets possess structure shifts, thus proper benchmarks for structural OOD generalization.
Dataset details are in Appendix <ref>.
For evaluation, we report ROC-AUC for GOODHIV, Matthews correlation coefficient (MCC) for DD and NCI1,
and accuracy in percentage for all other datasets.
Results. OOD performances of G-Splice and all baselines on structure distribution shifts are shown in Table <ref>. As can be observed, G-Splice consistently outperforms all other methods in OOD test results, showing effectiveness in various structural OOD tasks. On the synthetic dataset GOODMotif, G-Splice substantially outperforms most baselines by 60% for the size domain and 20% for the base domain in accuracy, approaching ID performances, which evidences the generalization improvements achieved by our structural extrapolation. With a VREx-like regularization applied, G-Splice+R achieves further performance gains on most datasets, implying that combining augmented environment information with both data extrapolation and invariance regularization is beneficial.
Furthermore, in contrast to OOD performances, G-Splice does not always perform best in ID settings. Also, G-Splice shows significant performance gain compared with other graph data augmentation methods.
This reveals that G-Splice enhances generalization abilities with extrapolation strategies rather than overall progress in learning or simple data augmentation.
By guiding the model to extrapolate with OOD samples, G-Splice extends the data distribution and improves generalization for specific structure shifts.
As ablation studies, we evidence that certain extrapolation procedures specifically benefit size or base shifts, supporting our theoretical analysis, which is detailed in Appendix <ref>.
§.§ OOD Performance on Feature Shifts
Datasets & Metrics.
To show the OOD effectiveness of feature extrapolation, we evaluate FeatX on 5 graph OOD datasets with feature shifts.
We adopt 5 datasets from the GOOD benchmark, CMNIST-color, Cora-word, Twitch-language, WebKB-university, and CBAS-color, with more details in Appendix <ref>.
All shift domains are structure-irrelevant and provide specific evaluation on features.
We report accuracy in percentage for all 5 datasets.
Results. OOD performances of FeatX and all baselines on feature shifts are shown in Table <ref>.
We can observe that FeatX consistently outperforms all other methods in OOD test results, showing its effectiveness in various feature OOD tasks.
On GOODWebKB, FeatX substantially outperforms most baselines by 100% in accuracy.
On synthetic dataset GOODCBAS, FeatX outperforms most baselines by 14% and achieves OOD results close to ID results, substantially solving the feature shift with feature extrapolation.
FeatX does not always outperform in ID settings; also, FeatX shows significant performance gain compared with other graph data augmentation methods.
This reveals that FeatX specifically enhances generalization abilities with respect to features rather than making overall progress in learning with simple data augmentation.
FeatX succeeds in selecting non-causal features and leads the model to extrapolate with OOD samples spanning the selected feature space.
§.§ Structure and Feature Extrapolation Comparisons
Structural and feature linear extrapolations are performed respectively, targeting specific types of shifts. With the following experiments, we show that feature and structure shifts benefit little from the generalization abilities of each other, while G-Splice and FeatX indeed solve respective shifts.
Table: Extrapolation comparison on structure and feature shifts (accuracy ↑).

Method          | GOOD-Motif-base ID | GOOD-Motif-base OOD | GOOD-CMNIST-color ID | GOOD-CMNIST-color OOD
ERM             | 92.60±0.03         | 68.66±3.43          | 77.96±0.34           | 28.60±2.01
G-Splice        | 92.14±0.29         | 83.96±7.38          | 79.63±0.43           | 25.24±3.19
FeatX           | 92.60±0.05         | 62.93±6.10          | 69.54±1.51           | 62.49±2.12
G-Splice+FeatX  | 91.21±0.65         | 83.36±5.13          | 70.02±2.51           | 28.45±5.06
As shown in Table <ref>, when we apply G-Splice and FeatX individually and combined on synthetic datasets designed for structure and feature shifts, each outperforms in its own “expertise” of extrapolation. While G-Splice performs favorably on GOODMotif, it fails to match the baseline performance of ERM on GOODCMNIST, since G-Splice augments structural OOD samples, which creates additional OOD factors for datasets with feature shift. FeatX is in a similar situation, providing little benefit for structural OOD problems while introducing extra shifts, which makes learning more difficult. When combined, G-Splice and FeatX can solve either type of shift, but they also create extra shifts at the same time, resulting in only occasional performance gain. The results evidence the distinct shift problems and generalization abilities associated with structure and feature, supporting our design of handling them with respective extrapolation strategies. Practically, the two strategies can be combined to address comprehensive shifts.
§ DISCUSSION
Our work introduces an innovative GDA approach to solving graph OOD generalization using linear extrapolation in graph space. Our environment-aware framework, featuring G-Splice and FeatX, improves over existing methods by generating causally-valid OOD samples that enhance model performance. Overall, our data-centric approach opens a new direction in graph OOD studies. Currently, our method depends on the quality of environment information and GNN expressiveness. Also, our theoretical analysis focuses on linear extrapolation. Future work could explore optimizing environment information and incorporating non-linear extrapolation.
This work was supported in part by National Science Foundation grants IIS-2006861 and IIS-1908220,
Cisco Research,
Presidential Impact Fellowship and institutional supports of Texas A&M University.
Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization
Supplemental Material
§ BACKGROUND AND RELATED WORKS
Out-of-Distribution (OOD) Generalization. Out-of-Distribution (OOD) Generalization <cit.> addresses the challenge of adapting a model, trained on one distribution (source), to effectively process unseen data from a potentially different distribution (target). It shares strong ties with various areas such as transfer learning <cit.>, domain adaptation <cit.>, domain generalization <cit.>, causality <cit.>, and invariant learning <cit.>. As a form of transfer learning, OOD generalization is especially challenging when the target distribution substantially differs from the source distribution. OOD generalization, also known as distribution or dataset shift <cit.>, encapsulates several concepts including covariate shift <cit.>, concept shift <cit.>, and prior shift <cit.>. Both Domain Adaptation (DA) and Domain Generalization (DG) can be viewed as specific instances of OOD, each with its own unique assumptions and challenges.
Domain Generalization (DG). DG <cit.> strives to predict samples from unseen domains without the need for pre-collected target samples, making it more practical than DA in many circumstances. However, generalizing without additional information is logically implausible, a conclusion also supported by the principles of causality <cit.>. As a result, contemporary DG methods have proposed the use of domain partitions <cit.> to generate models that are domain-invariant. Yet, due to the ambiguous definition of domain partitions, many DG methods lack robust theoretical underpinning.
Causality & Invariant Learning. Causality <cit.> and invariant learning <cit.> provide a theoretical foundation for the above concepts, offering a framework to model various distribution shift scenarios as structural causal models (SCMs). SCMs, which bear resemblance to Bayesian networks <cit.>, are underpinned by the assumption of independent causal mechanisms, a fundamental premise in causality. Intuitively, this supposition holds that causal correlations in SCMs are stable, independent mechanisms akin to unchanging physical laws, rendering these causal mechanisms generalizable. An assumption of a data-generating SCM equates to the presumption that data samples are generated through these universal mechanisms. Hence, constructing a model with generalization ability requires the model to approximate these invariant causal mechanisms. Given such a model, its performance is ensured when data obeys the underlying data generation assumption.
<cit.> initially proposed optimal predictors invariant across all environments (or interventions). Motivated by this work, <cit.> proposed framing this invariant prediction concept as an optimization process, considering one of the most popular data generation assumptions, PIIF. Consequently, numerous subsequent works <cit.>—referred to as invariant learning—considered the initial intervention-based environments <cit.> as an environment variable in SCMs. When these environment variables are viewed as domain indicators, it becomes evident that this SCM also provides theoretical support for DG, thereby aligning many invariant works with the DG setting. Besides PIIF, many works have considered FIIF and anti-causal assumptions <cit.>, which makes these assumptions as popular basics of causal theoretical analyses.
OOD generalization for graph. Extrapolating on non-Euclidean data has garnered increased attention, leading to a variety of applications <cit.>. Inspired by <cit.>, <cit.> proposed that GNNs intrinsically possess superior generalization capability. Several prior works <cit.> explored graph generalization in terms of graph sizes, with <cit.> being the first to study this issue using causal models. Recently, causality modeling-based methods have been proposed for both graph-level tasks <cit.> and node-level tasks <cit.>.
To solve OOD problems in graph, DIR <cit.> selects graph representations as causal rationales and conducts causal intervention to create multiple distributions.
EERM <cit.> generates environments with REINFORCE algorithm to maximize loss variance between environments while adversarially minimizing the loss.
SRGNN <cit.> aims at pushing biased training data to the given unbiased distribution, performed through central moment discrepancy and kernel matching.
To improve interpretation and prediction, GSAT <cit.> learns task-relevant subgraphs by constraining information with stochasticity in attention weights.
CIGA <cit.> models the graph generation process and learns subgraphs to maximally preserve invariant intra-class information.
GREA <cit.> performs rationale identification and environment replacement to augment virtual data examples.
GIL <cit.> proposes to identify invariant subgraphs and infer latent environment labels for variant subgraphs through joint learning.
However, except for CIGA <cit.>, their data assumptions are less comprehensive compared to traditional OOD generalization. CIGA, while recognizing the importance of diverse data generation assumptions (SCMs), attempts to fill the gap through non-trivial extra assumptions without environment information.
Additionally, environment inference methods have gained traction in graph tasks, including EERM <cit.>, MRL <cit.>, and GIL <cit.>. However, these methods face two undeniable challenges. First, their environment inference results require environment exploit methods for evaluation, but there are no such methods that perform adequately on graph tasks according to the synthetic dataset results in GOOD benchmark <cit.>. Second, environment inference is essentially a process of injecting human assumptions to generate environment partitions, but these assumptions are not well compared.
Graph data augmentation for generalization.
Some data augmentation methods, not limited to graph methods, empirically show improvements in OOD generalization tasks. Mixup <cit.>, which augments samples by interpolating two labeled training samples, is reported to benefit generalization.
LISA <cit.> selectively interpolates intra-label or intra-domain samples to further improve OOD robustness.
In the graph area, following Mixup, Graph Mixup <cit.> mixes the hidden representations in each GNN layer, while ifMixup <cit.> directly applies Mixup on the graph data instead of the latent space.
Graph Transplant <cit.> employs node saliency information to select a substructure from each graph as units to mix.
G-Mixup <cit.> interpolates the graph generator of each class and mixes on class-level to improve GNN robustness.
DPS <cit.> extracts multiple label-invariant subgraphs with a set of subgraph generators to train an invariant GNN predictor.
However, few works target OOD problems, and no prior work generates OOD samples that can provably generalize over graph distribution shifts. In contrast, we offer a graph augmentation method to extrapolate in structure and feature for OOD generalization.
§ BROADER IMPACTS
Addressing out-of-distribution (OOD) generalization presents a formidable challenge, particularly in the realm of graph learning. This issue is acutely exacerbated when conducting scientific experiments becomes cost-prohibitive or impractical. In many real-world scenarios, data collection is confined to certain domains, yet extrapolating this knowledge to broader areas, where experiment conduction proves difficult, is crucial. In focusing on a data-centric approach to the OOD generalization problem, we pave the way for the integration of graph data augmentation with graph OOD, a strategy with substantial potential for broad societal and scientific impact.
Our work builds upon the principle of Causal Additivity, a causal assumption widely applicable in graph classification tasks. This assumption can be subjectively verified via logical analysis for common tasks such as natural language sentiment analysis and synthetic tasks where labels are determined by certain structures. The assumption is strongly underpinned by experimental results for chemical property tasks. Although we acknowledge that our assumption may not encompass all cases, it does make headway in addressing a substantial class of problems. As graph OOD generalization is a complex issue in practice, different techniques are required for varying domains and problems. No single method can be expected to resolve all unknown cases, and our future work aims to expand the scope of tasks addressed.
Our research adheres strictly to ethical guidelines and does not raise any ethical issues. It neither involves human subjects nor gives rise to potential negative social impacts or privacy and fairness issues. Furthermore, we foresee no potential for malicious or unintended usage of our work. Nonetheless, we acknowledge that all technological progress inherently carries risks. Consequently, we advocate for ongoing evaluation of the broader implications of our methodology across a range of contexts.
§ THEORETICAL PROOFS
This section presents comprehensive proofs for all the theorems mentioned in this paper, along with the derivation of key intermediate results.
Theorem 3.1
Given an N-sample training dataset D{G_tr}, its N-dimension structural linear extrapolation can generate sets D{G_1} and D{G_2} s.t. (G_1)env<(G_tr)env <(G_2)env for ∀G_tr,G_1,G_2, where < denotes “less in size” for size extrapolation and “lower base complexity” for base extrapolation.
Considering size extrapolation, we prove that 1.sets D{G_1} and D{G_2} contain graph sizes achievable by N-dimension structural linear extrapolation; 2.|X|_G_1<|X|_G_tr <|X|_G_2 holds for ∀G_tr,G_1,G_2.
For N-dimension structural linear extrapolation on training data D{G_tr}, we have Eq. <ref>:
G_sle^N=∑_i=1^N a_i · G_i + ∑_i=1^N∑_j=1^N b_ij· (G_j - G_i) = 𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F.
Let the largest and smallest graph G_ma and G_mi in D{G_tr} be indexed i=ma and i=mi.
We generate D{G_2} using Eq. <ref> with the condition that a_ma=1 and ∑_i=1^N a_i≥ 2. We generate D{G_1} with the condition that ∑_i=1^N a_i=0, ∑_i=1^N∑_j=1^N b_ij=1 and b_(mi)j=1.
By Definition <ref>, D{G_1} and D{G_2} contain graph sizes achievable by N-dimension structural linear extrapolation.
For ∀G_2 ∈ D{G_2}, since a_ma=1 and ∑_i=1^N a_i≥ 2, G_2 contains multiple graphs spliced together; then we have
|X|_G_2>|X|_G_ma≥ |X|_G_tr
for ∀G_tr∈ D{G_tr}.
For ∀G_1 ∈ D{G_1}, since ∑_i=1^N a_i=0, ∑_i=1^N∑_j=1^N b_ij=1 and b_(mi)j=1, G_1 contains only one single subgraph extracted from G_mi and another graph; then we have
|X|_G_1<|X|_G_mi≤ |X|_G_tr
for ∀G_tr∈ D{G_tr}.
Therefore, |X|_G_1<|X|_G_tr <|X|_G_2 holds for ∀G_tr,G_1,G_2.
Considering base extrapolation, we prove that 1.sets D{G_1} and D{G_2} contain graph bases achievable by N-dimension structural linear extrapolation; 2.ℬ_G_1<ℬ_G_tr <ℬ_G_2 holds for ∀G_tr,G_1,G_2, where ℬ denotes the base graph and “<” denotes less complex in graph base. Note that graph bases can be numerically indexed for ordering and comparisons, such as the Bemis-Murcko scaffold algorithm <cit.>.
For N-dimension structural linear extrapolation on training data D{G_tr}, following Eq. <ref>,
let the graphs with the most and least complex graph base G_mo and G_le in D{G_tr} be indexed i=mo and i=le.
We generate D{G_2} using Eq. <ref> with the condition that a_mo=1 and ∑_i=1^N a_i≥ 2. We generate D{G_1} with the condition that ∑_i=1^N a_i=0, ∑_i=1^N∑_j=1^N b_ij=1 and b_(le)j=1, with (G_j - G_i) being a causal graph extraction.
By Definition <ref>, D{G_1} and D{G_2} contain graph bases achievable by N-dimension structural linear extrapolation.
For ∀G_2 ∈ D{G_2}, since a_mo=1 and ∑_i=1^N a_i≥ 2, G_2 contains multiple graphs spliced together including the most complex base; then we have
ℬ_G_2>ℬ_G_mo≥ℬ_G_tr
for ∀G_tr∈ D{G_tr}, adding upon ℬ_G_mo to create more complex base graphs.
For ∀G_1 ∈ D{G_1}, since ∑_i=1^N a_i=0, ∑_i=1^N∑_j=1^N b_ij=1 and b_(le)j=1 with (G_j - G_i) being a causal graph extraction, G_1 contains only a single causal subgraph extracted from G_le; then we have
ℬ_G_1<ℬ_G_le≤ℬ_G_tr
for ∀G_tr∈ D{G_tr}, essentially creating structural linear extrapolations containing no base graphs.
Therefore, ℬ_G_1<ℬ_G_tr <ℬ_G_2 holds for ∀G_tr,G_1,G_2.
This completes the proof.
Theorem 3.2
Given an N-sample training dataset D{G_tr} and its true labeling function for the target classification task f(G),
if D{G_sle^N} is a graph set sampled from the N-dimension structural linear extrapolation of D{G_tr} and Assumption <ref> holds, then for ∀ (G_sle^N,y)∈ D{G_sle^N}, y=f(G_sle^N).
By Definition <ref>,
for N-dimension structural linear extrapolation on training data D{G_tr}, for ∀ (G_sle^N,y) we have
G_sle^N=∑_i=1^N a_i · G_i + ∑_i=1^N∑_j=1^N b_ij· (G_j - G_i) = 𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F,
and the label y for G_sle^N
y = (∑_i=1^N a_i · y_i+∑_i=1^N∑_j=1^N c_ijb_ij· y_j)/(∑_i=1^N a_i+∑_i=1^N∑_j=1^N c_ijb_ij)
=(𝐚^⊤𝐲 + ⟨ C∘ B, 1𝐲^⊤⟩_F)/ (𝐚^⊤1 + ⟨ C, B ⟩_F).
𝐚^⊤𝐆 splices ∑_i=1^N a_i graphs together, and since [G_1, …, G_N] are the N graphs from D{G_tr}, each of G_i contains one and only one causal graph. Under the causal additivity of Assumption <ref>, given G'=G_1+G_2, we have f(G')=ay_1+(1-a)y_2. With a fair approximation of a=1-a=1/2, we can feasibly obtain f(G')=(y_1+y_2)/2. Recursively, for 𝐚^⊤𝐆 we can derive
f(𝐚^⊤𝐆)= (∑_i=1^N a_i · y_i)/(∑_i=1^N a_i).
⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F splices ∑_i=1^N∑_j=1^N b_ij extracted subgraphs together. Among them, ∑_i=1^N∑_j=1^N c_ijb_ij are causal subgraphs, while the others are environmental subgraphs. Similarly, using the causal additivity of Assumption <ref> in a recursive manner, we can derive
f(⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F)= (∑_i=1^N∑_j=1^N c_ijb_ij· y_i)/(∑_i=1^N∑_j=1^N c_ijb_ij).
Combining the results of Eq. <ref> and Eq. <ref>, using Assumption <ref> in a recursive manner, we can derive for 𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F:
f(𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F)= (∑_i=1^N a_i · y_i+∑_i=1^N∑_j=1^N c_ijb_ij· y_j)/(∑_i=1^N a_i+∑_i=1^N∑_j=1^N c_ijb_ij).
By Definition <ref>, we have
f(G_sle^N) =f(𝐚^⊤𝐆 + ⟨ B, 1𝐆^⊤ - 𝐆1^⊤⟩_F)
= (∑_i=1^N a_i · y_i+∑_i=1^N∑_j=1^N c_ijb_ij· y_j)/(∑_i=1^N a_i+∑_i=1^N∑_j=1^N c_ijb_ij)=y.
Therefore, for ∀ (G_sle^N,y)∈ D{G_sle^N}, we have y=f(G_sle^N).
This completes the proof.
Theorem 5.1
If (1) ∃ (X_1,⋯,X_j)∈ P^train from at least 2 environments, s.t. (X_1var, ⋯, X_jvar) span ℝ^j, and (2) ∀X_1 ≠X_2, the GNN encoder of f_ψ maps G_1= (X_1, A,E) and G_2 = (X_2, A,E) to different embeddings,
then with ŷ=f_ψ(X,A,E),
ŷ ⊥ Xvar as n_A →∞.
We theoretically prove the statements and Theorem <ref> for FeatX.
We propose to learn and apply a mask M and perturb the non-causal node features to achieve extrapolation xvar, without altering the topological structure of the graph.
Let the domain for x be denoted as 𝒟, which is assumed to be accessible. Valid extrapolations must generate augmented samples with node feature X_A∈𝒟 while X_A ≁ P^train(X). Since x is a vector, 𝒟 is also a vector, in which each element gives the domain of an element in x. We ensure the validity of extrapolation with the generalized modulo operation mod, which we define as
X mod 𝒟 = X + i*abs(𝒟), s.t. (X mod 𝒟) ∈ 𝒟,
where i is any integer and abs(𝒟) calculates the range length of 𝒟.
Therefore, ∀X∈ℝ^p, (X mod 𝒟)∈𝒟.
Given each pair of samples D_ε_1, D_ε_2 with the same label y but different environments ε_1 and ε_2, FeatX produces
X_A = M×(((1+λ)X_ε_1 - λ'x_ε_2) mod 𝒟) + (1-M)×X_ε_1,
(A,E) = (A_ε_1,E_ε_1),
where λ, λ' ∼𝒩(a, b) are sampled for each data pair.
During the process, the augmented samples form a new environment.
We prove Theorem <ref>, showing that, under certain conditions, FeatX substantially solves feature shifts on the selected variant features for node-level tasks.
The proof also evidences that our extrapolation spans the feature space outside P^train(X) for xvar, transforming OOD areas to ID.
Let n_A be the number of samples FeatX generates and f_ψ be the well-trained network with FeatX applied.
Condition is given that
∃ (X_1,⋯,X_j)∈ P^train, s.t. (X_1var,⋯, X_jvar) span ℝ^j.
Therefore, by definition,
∀u∈ℝ^j, ∃t=(t_1,t_2,⋯,t_j)∈ℝ^j, s.t. u=t_1X_1var+⋯+t_jX_jvar.
The operation to generate X_A gives
X_A = M×(((1+λ)X_ε_1 - λ'X_ε_2) mod 𝒟) + (1-M)×X_ε_1,
so we have
X_Avar = ((1+λ)X_ε_1var - λ'X_ε_2var) mod 𝒟.
For ∀u, ∃t=(t_1,t_2,⋯,t_j) and (X_1,⋯,X_j)∈ P^train from at least 2 environments.
Without loss of generality, we assume that X_1 and X_2 are from different environments ε_1 and ε_2.
With n_A →∞, there will exist an augmentation sampled between X_1 and X_2, and since λ∼𝒩(a, b), λ∈ℝ,
∃X_A^1 s.t. X^1_Avar = ((1+λ)X_1var - λ'X_2var) mod 𝒟 = (1+λ)X_1var - λ'X_2var + n_1*abs(𝒟), with 1+λ=t_1, -λ'=t_2,
where n_1 is an integer.
Equivalently,
X^1_Avar =t_1X_1var+t_2X_2var+n_1*abs(𝒟).
The augmentation sample X_A^1 belongs to a new environment, thus in a different environment from X_1,⋯,X_j.
Similarly, with n_A →∞, there will exist an augmentation sampled between X_A^1 and X_3,
∃X_A^2 s.t. X^2_Avar = ((1+λ)X_A^1var - λ'X_3var) mod 𝒟 = (1+λ)X_A^1var - λ'X_3var + n_2*abs(𝒟), with λ=0, -λ'=t_3,
where n_2 is an integer.
Equivalently,
X^2_Avar =X_A^1var +t_3X_3var+n_2*abs(𝒟)= t_1X_1var+t_2X_2var+t_3X_3var+(n_1+n_2)*abs(𝒟).
The augmentation sample X_A^2 also belongs to the new environment.
Recursively, with n_A →∞, there will exist an augmentation
∃X_A^j-1 s.t. X^j-1_Avar = t_1X_1var+t_2X_2var+⋯+t_jX_jvar+(n_1+n_2+⋯+n_j-1)*abs(𝒟).
Since for u, we have u=t_1X_1var+⋯+t_jX_jvar, therefore, X^j-1_Avar =u+(n_1+n_2+⋯+n_j-1)*abs(𝒟).
With u∈ℝ^j and X^j-1_Avar=((1+λ)X_A^j-2var - λ'X_jvar) mod 𝒟 ∈ℝ^j by the definition of 𝒟, we have
(n_1+n_2+⋯+n_j-1)*abs(𝒟)=0 and X^j-1_Avar =u.
Therefore, we prove that with n_A →∞,
∀u∈ℝ^j, there exists an augmentation sample X^j-1_A s.t. X^j-1_Avar =u.
That is, the extrapolation strategy of FeatX spans the feature space for xvar.
With the above result, every data point in the feature space of xvar is reachable. As n_A →∞, every data point of xvar is reached at least once. Let a group of samples with selected and preserved causal features Xinv* be M×Xvar + (1-M)×Xinv*, where Xvar takes all values in ℝ^j.
Since ∀X_1 ≠X_2, the GNN encoder maps G_1= (X_1, A,E) and G_2 = (X_2, A,E) to different embeddings,
all different samples from M×Xvar + (1-M)×Xinv* are encoded into different embeddings, while all having the same label y. For the well-trained network f_ψ, the group of embeddings Z|(M×Xvar + (1-M)×Xinv*) are all predicted into class ŷ=y.
In this case,
∀Xvar∈ℝ^j, ŷ = f_ψ(M×Xvar + (1-M)×Xinv*, A, E) = y,
therefore
ŷ ⊥ Xvar as n_A →∞.
This completes the proof.
Theorem <ref> states that, given sufficient diversity in environment information and expressiveness of GNN, FeatX can achieve invariant prediction regarding the selected variant features. Therefore, FeatX possesses the capability to generalize over distribution shifts on the selected variant features.
Extending on the accuracy of non-causal selection, if xvar*=xvar,
we achieve causally-invariant prediction in feature-based OOD tasks.
Thus, FeatX possesses the potential to solve feature distribution shifts.
§ EXPERIMENTAL DETAILS
§.§ Dataset Details
To evidence the generalization improvements of structure extrapolation, we evaluate G-Splice on 8 graph-level OOD datasets with structure shifts.
We adopt 5 datasets from the GOOD benchmark <cit.>, GOODHIV-size, GOODHIV-scaffold, GOODSST2-length, GOODMotif-size, and GOODMotif-base, using the covariate shift split from GOOD.
GOOD-HIV is a real-world molecular dataset with shift domains scaffold and size.
The first one is Bemis-Murcko scaffold <cit.> which is the two-dimensional structural base of a molecule. The second one is the number of nodes in a molecular graph.
GOOD-SST2 is a real-world natural language sentimental analysis dataset with sentence lengths as domain, which is equivalent to the graph size.
GOOD-Motif is a synthetic dataset specifically designed for structure shifts. Each graph is generated by connecting a base graph and a motif, with the label determined by the motif solely. The shift domains are the base graph type and the graph size.
We construct another natural language dataset Twitter <cit.> following the OOD splitting process of GOOD, with length as the shift domain. In addition, we adopt protein dataset DD and molecular dataset NCI1 following <cit.>, both with size as the shift domain.
All datasets possess structure shifts as we have discussed, thus proper benchmarks for structural OOD generalization.
To show the OOD the generalization improvements of feature extrapolation, we evaluate FeatX on 5 graph OOD datasets with feature shifts. We adopt 5 datasets of the covariate shift split from the GOOD benchmark.
GOOD-CMNIST is a semi-artificial dataset designed for node feature shifts. It contains image-transformed graphs with color features manually applied, thus the shift domain color is structure-irrelevant.
The other 4 datasets are node-level.
GOOD-Cora is a citation network dataset with “word" shift, referring to the word diversity feature of a node.
The input is a small-scale citation network graph, in which nodes represent scientific publications and edges are citation links. The shift domain is word, the word diversity defined by the selected-word-count of a publication.
GOOD-Twitch is a gamer network dataset, with the node feature “language" as shift domain.
The nodes represent gamers and the edge represents the friendship connection of gamers. The binary classification task is to predict whether a user streams mature content. The shift domain of GOOD-Twitch is user language.
GOOD-WebKB is a university webpage network dataset. A node in the network represents a webpage, with words appearing in the webpage as node features. Its 5-class prediction task is to predict the owner occupation of webpages, and the shift domain is university, which is implied in the node features.
GOOD-CBAS is a synthetic dataset. The input is a graph created by attaching 80 house-like motifs to a 300-node Barabási–Albert base graph, and the task is to predict the role of nodes. It includes colored features as in GOOD-CMNIST, so that OOD algorithms need to tackle node color differences, which is also a typical feature shift.
All shift domains are structure-irrelevant and provide specific evaluation for feature extrapolation.
§.§ Setup Details
We conduct experiments on 8 datasets with 14 baseline methods to evaluate G-Splice, and on 5 datasets with 16 baselines for FeatX.
As a common evaluation protocol, datasets for OOD tasks provides OOD validation/test sets <cit.> to evaluate the model's OOD generalization abilities. Some datasets also provide ID validation/test sets for comparison <cit.>.
For all experiments, we select the best checkpoints for OOD tests according to results on OOD validation sets; ID validation and ID test are also used for comparison if available.
For graph prediction and node prediction tasks, we respectively select strong and commonly acknowledged GNN backbones.
For each dataset, we use the same GNN backbone for all baseline methods for fair comparison.
For graph prediction tasks, we use GIN-Virtual Node <cit.> as the GNN backbone. As an exception, for GOOD-Motif we adopt GIN <cit.> as the GNN backbone, since we observe from experiments that the global information provided by virtual nodes would interrupt the training process here.
For node prediction tasks, we adopt GraphSAINT <cit.> and use GCN <cit.> as the GNN backbone.
For all the experiments, we use the Adam optimizer, with a weight decay tuned from the set {0, 1e-2, 1e-3, 1e-4} and a dropout rate of 0.5. The number of convolutional layers in GNN models for each dataset is tuned from the set {3, 5}. We use mean global pooling and the RELU activation function, and the dimension of the hidden layer is 300. We select the maximum number of epochs from {100, 200, 500}, the initial learning rate from {1e-3, 3e-3, 5e-3, 1e-4}, and the batch size from {32, 64, 128} for graph-level and {1024, 4096} for node-level tasks. All models are trained to converge in the training process. For computation, we generally use one NVIDIA GeForce RTX 2080 Ti for each single experiment.
§.§ Hyperparameter Selection
In all experiments, we perform hyperparameter search to obtain experimental results that can well-reflect the performance potential of models.
For each dataset and method, we search from a hyperparameter set and select the optimal one based on OOD validation metric scores.
For each baseline method, we tune one or two algorithm-specific hyperparameters. For IRM and Deep Coral, we tune the weight for penalty loss from {1e-1, 1, 1e1, 1e2} and {1, 1e-1, 1e-2, 1e-3}, respectively. For VREx, we tune the weight for VREx's loss variance penalty from {1, 1e1, 1e2, 1e3}. For GroupDRO, we tune the step size from {1e-1, 1e-2, 1e-3}. For DANN, we tune the weight for domain classification penalty loss from {1, 1e-1, 1e-2, 1e-3}. For Graph Mixup, we tune the alpha value of its Beta function from {0.4, 1, 2}. The Beta function is used to randomize the lamda weight, which is the weight for mixing two instances up. For DIR, we tune the causal ratio for selecting causal edges from {0.2, 0.4, 0.6, 0.8} and loss control from {1e1, 1, 1e-1, 1e-2}. For EERM, we tune the learning rate for reinforcement learning from {1e-2, 1e-3, 5e-3, 1e-4} and the beta value to trade off between mean and variance from {1, 2, 3}. For SRGNN, we tune the weight for shift-robust loss calculated by central moment discrepancy from {1e-4, 1e-5, 1e-6}. For DropNode, DropEdge and MaskFeature, we tune the drop/mask percentage rate from {0.05, 0.1, 0.15, 0.2, 0.3}.
For FLAG, we set the number of ascending steps M=3 and tune the ascent step size from the set {1e-2, 1e-3, 5e-3, 1e-4}.
For LISA, we tune the parameters of the Beta function in the same way as Graph Mixup.
For G-Mixup, we set the augmentation number to 10, tune the augmentation ratio from {0.1, 0.2, 0.3} and the lambda range from {[0.1,0.2], [0.2,0.3]}.
For G-Splice, we tune the percentage of augmentation from {0.6, 0.8, 1.0}. The actual number of component graphs f is tuned from {2, 3, 4}, and the augmentation selection is tuned as a 3-digit binary code representing the 3 options, with at least one option applied.
For the pre-training of the bridge generation, the hyperparameters α and β, which regularize the bridge attribute reconstruction and the KL divergence, are tuned from {1.5, 1, 0.5, 0.1}.
When the additional VREx-like regularization is applied, we tune the weight of loss variance penalty from {1, 1e1, 1e2}.
For FeatX, we tune the shape parameter a and the scale parameter b of the gamma function Γ(a, b) from {2, 3, 5, 7, 9} and {0.5, 1.0, 2.0}, respectively.
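To make the role of Γ(a, b) concrete, the sketch below draws a perturbation scale from a gamma distribution and applies it only to feature dimensions selected as non-causal; the function, the additive-noise form, and the variable names are illustrative assumptions and not taken from the FeatX implementation.

```python
# Illustrative sketch: a perturbation scale sampled from Gamma(a, b) is applied
# only to node-feature dimensions marked as variant (non-causal).
import numpy as np

def perturb_variant_features(x, variant_mask, a=5.0, b=1.0, rng=None):
    """x: (num_nodes, num_features) node features; variant_mask: bool (num_features,)."""
    rng = np.random.default_rng() if rng is None else rng
    scale = rng.gamma(shape=a, scale=b)               # sampled perturbation magnitude
    noise = rng.standard_normal(x.shape) * scale      # extrapolating noise
    x_aug = x.copy()
    x_aug[:, variant_mask] += noise[:, variant_mask]  # invariant features stay untouched
    return x_aug

# Example usage with random data:
x = np.random.rand(10, 8)
mask = np.array([True, False, True, False, False, True, False, False])
x_aug = perturb_variant_features(x, mask, a=5.0, b=1.0)
```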
§ ABLATION STUDIES
§.§ Bridge Generation Studies for G-Splice
§.§.§ VAE as Bridge Generator
In this work, we adopt a conditional VAE <cit.> as the major bridge generator for G-Splice due to its adequate capability and high efficiency. We show empirically that VAEs are well suited to our task.
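The following is a minimal sketch of a conditional VAE of this kind, assuming MLP encoder/decoder networks and a pooled component-graph embedding as the conditioning vector; the dimensions and architecture details are illustrative only.

```python
# Minimal conditional-VAE sketch for bridge-attribute generation (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim, cond_dim, latent_dim=16, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + cond_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x, cond):
        h = self.enc(torch.cat([x, cond], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction of bridge attributes plus KL regularization (weighted by beta).
    rec = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```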
We reconstruct the generation process with a diffusion model <cit.>, a highly capable generative model with favorable performance across multiple tasks.
Diffusion models consist of a diffusion process which progressively distorts a data point to noise, and a generative denoising process which approximates the reverse of the diffusion process.
In our case, the diffusion process adds Gaussian noise independently to each node and edge feature (encoded as one-hot vectors) at each time step. The denoising network is then trained to predict the noise, and we minimize the error between the predicted noise and the true noise, which is computed in closed form. During sampling, we iteratively sample bridge indexes and attribute values, and then map them back to categorical values to obtain a valid graph.
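The sketch below illustrates the noise-prediction training step just described, assuming a simple linear noise schedule and treating the denoising network as a black box; it is not the exact implementation used in our experiments.

```python
# Sketch of the noise-prediction objective with a simple linear noise schedule.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(denoiser, x0):
    """x0: one-hot encoded node/edge features, shape (batch, dim)."""
    t = torch.randint(0, T, (x0.size(0),))
    a_bar = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)                             # true noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # diffused sample
    eps_pred = denoiser(x_t, t)                            # predicted noise
    return F.mse_loss(eps_pred, eps)                       # closed-form target
```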
We compare performances and computational efficiency of the two generative models. As a baseline for bridge generation, we also present the results of random bridges, where bridges of predicted number and corresponding attributes are randomly sampled from the group of component graphs. Note that we do not apply the regularization in these experiments.
As can be observed in Table <ref>, OOD test results from the two generative models are comparable, both significantly improving over G-Splice-Rand. The diffusion model may be slightly limited in performance gain due to the discreteness approximations during sampling.
The results imply the necessity of generative models in the splicing operation for overall structural extrapolation.
Meanwhile, this shows that the VAE is capable of the bridge generation task.
In contrast, the training duration of the diffusion model is 13 times that of the VAE due to the sampling processes through massive numbers of time steps.
Overall, we obtain comparable performance from the two generative models, while VAEs are much less expensive computationally. Therefore, the empirical results demonstrate that adopting a VAE as our major bridge generator is well justified.
§.§.§ Bridge Generation Design
As we have introduced in Sec <ref>, we generate bridges of predicted number along with corresponding edge attributes between given component graphs to splice graphs.
We do not include new nodes as part of the bridge, since we aim to preserve the local structures of the component graphs and extrapolate certain global features. Additional manually added graph structures provide no extrapolation benefit, and their interpolation influence is not proven beneficial. We demonstrate the effectiveness of our design with experiments.
We additionally build a module to generate nodes in the bridges. The number of nodes is predicted with a pre-trained predictive model and then a generative model generates the node features.
Moreover, we evaluate the results with fixed instead of generated bridge attributes.
The performance of our original bridge generator, of the variant with node generation applied, and of the variant with edge attribute generation removed is summarized as follows. Note that we do not apply the regularization in these experiments.
As can be observed in Table <ref>, the OOD test performance of the original bridge generator remains the highest. Without attribute generation, fixed bridge attributes degrade the overall performance due to the manually fixed features of the bridges, which may mislead the model with spurious information. When we include nodes as part of the bridge, the manually added graph structure may similarly inject spurious information into the model and perturb the preservation of local structures, leading to limited improvements. This evidences the effectiveness of our design for bridge generation.
§.§ Comparison of Extrapolation Procedures for G-Splice
We provide evidence that certain extrapolation procedures specifically benefit size or base/scaffold shifts, in line with our theoretical analysis in Sec. <ref> and <ref>. For size and base/scaffold shifts on GOODHIV and GOODMotif, we extrapolate with each of the three augmentation options, Ginv, Ginv + f·Genv and f·G, individually and together, and compare the OOD performances. Note that we apply the VREx-like regularization in these experiments.
Let the three augmentation options, Ginv, Ginv + f·Genv and f·G be numbered 1, 2, and 3.
The optimal augmentation options for GOOD-HIV-size, GOOD-HIV-scaffold, GOOD-Motif-size, and GOOD-Motif-base after hyperparameter tuning are 1+3, 2+3, 3, and 2+3, respectively.
As can be observed from Table <ref>, Ginv and f·G have advantages in size shifts, while Ginv + f·Genv and f·G are better for base/scaffold shifts. This matches our theoretical analysis of augmentation procedures.
For size distribution shifts, Ginv and f·G environments enable size extrapolation by creating smaller and larger graphs outside the training distribution, respectively.
For base/scaffold distribution shifts, the two new environments respectively construct graphs without base/scaffold, and graphs with f base/scaffolds, achieving base extrapolation with new base/scaffolds introduced.
Splicing whole graphs has the advantage of extrapolating to larger graphs, simplicity in operation, and little loss in local structural information. Extracting subgraphs allows better flexibility for G-Splice, making graphs smaller than the training size accessible. In addition, the performance gain from f·G shows the effectiveness of the simple splicing strategy by itself.
§.§ Ablation Studies on FeatX
FeatX enables extrapolation of the selected variant features. By generating causally valid samples with OOD node features, FeatX essentially expands the training distribution range.
Theoretical analysis shows that our extrapolation spans the feature space outside P^train(X) for x_var, thereby transforming OOD areas into ID. We further show with experiments that extrapolation substantially benefits feature shifts in OOD tasks compared with interpolation, which can also improve generalization by boosting the learning process.
In addition, we show that our invariance mask and variance score vectors succeed in selecting non-causal features by comparisons between perturbation on selected features and all features.
As can be observed from Table <ref>, both the selection of non-causal features and the choice between interpolation and extrapolation have a significant influence on generalization performance. On all three datasets, extrapolation exceeds the corresponding interpolation performance by a clear gap, demonstrating the benefit of generating samples in OOD areas that interpolation cannot reach. On GOODWebKB, perturbing the selected non-causal features achieves significant improvements over perturbing all features, regardless of interpolation or extrapolation. This evidences the effectiveness of non-causal feature selection using variance score vectors, empirically supporting our design. On GOODCMNIST and GOODCBAS, since the features are manually added colors, the effect of feature selection is not as obvious as on GOODWebKB, the real-world dataset.
Experimental results evidence the effectiveness of the strategies designed in FeatX.
§ METRIC SCORE AND LOSS CURVES
We report the metric score curves and loss curves for part of the datasets in Figure <ref>-<ref>. As can be observed from each pair of curves, our proposed methods, G-Splice and FeatX, consistently achieve better metric scores and lower loss than the other baselines during the learning process. This evidences the substantial improvements achieved by structure and feature extrapolation, which fundamentally benefits OOD generalization.
|
http://arxiv.org/abs/2306.05827v1
|
20230609115757
|
Towards the Exploitation of LLM-based Chatbot for Providing Legal Support to Palestinian Cooperatives
|
[
"Rabee Qasem",
"Banan Tantour",
"Mohammed Maree"
] |
cs.CL
|
[
"cs.CL"
] |
With the ever-increasing utilization of natural language processing (NLP), we started to witness over the past few years a significant transformation in our interaction with legal texts. This technology has advanced the analysis and enhanced the understanding of complex legal terminology and contexts. The development of recent large language models (LLMs), particularly ChatGPT, has also introduced a revolutionary contribution to the way that legal texts can be processed and comprehended. In this paper, we present our work on a cooperative-legal question-answering LLM-based chatbot, where we developed a set of legal questions about Palestinian cooperatives and their regulations and compared the answers auto-generated by the chatbot with corresponding answers designed by a legal expert. To evaluate the proposed chatbot, we used 50 queries generated by the legal expert and compared the produced answers to their relevance judgments. Findings demonstrated that an overall accuracy rate of 82% was achieved when answering the queries, with an F1 score of 79%.
§ INTRODUCTION
Natural Language Processing (NLP) has revolutionized the way we interact with legal texts. It has made it easier to analyze and comprehend complex legal texts <cit.>. One of the most significant recent advancements in this field is the development of Large Language Models and of chatbots based on such models <cit.>, where ChatGPT is at the forefront of this development <cit.>. With its vast training data and powerful capabilities, ChatGPT has had a profound impact on global users. It provides them with intelligent conversational agents capable of understanding and responding to their queries. The integration of LLM-powered chatbots has extended beyond the legal domain, finding applications in various fields. However, it is in the realm of legal discourse where these chatbots truly shine <cit.>. They leverage their expertise to assist users in navigating complex legal terms and processes <cit.>.
The huge improvement in LLM-based chatbot technology and the ease of integrating it seamlessly in the context of the legal domain have encouraged us to build a chatbot that provides answers to legal inquiries and questions about Palestinian cooperative law. We noticed that there have been numerous inquiries from cooperative societies and cooperative unions regarding private legal issues. This is mainly because the law is relatively new, having been issued at the end of 2017 <cit.>. Additionally, there is an urgent need to provide legal answers at all times, especially considering the labor-intensive effort required to answer such queries. Furthermore, considering the large number of cooperative members, which reached 58,883 at the end of 2021 as reported in <cit.>, there is an urgent need for a chatbot that is available 24/7 to address their legal inquiries and provide timely assistance.
The rest of this article is organized as follows. In Section <ref>, we review the literature and discuss related work. Section <ref> introduces the dataset utilized for testing and evaluating our proposed chatbot. In Section <ref>, we discuss the proposed methodology. Section <ref> presents the experimental evaluation and results. In Section <ref>, we conclude our work, and in Section <ref> we point to future directions of our research.
§ LITERATURE REVIEW
The use of machine learning (ML) techniques in the legal domain has a long history, with much research integrating the two domains in fields such as legal document review <cit.>, legal prediction <cit.>, legal writing <cit.>, legal summarization <cit.>, and legal compliance <cit.>. Prompting can be used to improve the performance of LLMs under different criteria, and <cit.> explore the effectiveness of prompts in legal judgment prediction (LJP). They conduct experiments using data from the European Court of Human Rights and the Federal Supreme Court of Switzerland, comparing different prompts with multilingual LLMs such as mGPT, GPT-J-6B, and GPT-NeoX-20B. The results demonstrate that zero-shot prompt engineering can improve LJP performance with LLMs, yielding better macro-averaged F1 scores, precision, and recall compared to simple baselines. However, the performance of zero-shot learning still falls short of current supervised state-of-the-art results in the field. The paper also highlights the following key findings: prompting can enhance LLM performance in legal judgment prediction, multilingual LLMs can be effective even with training data in a single language, and while zero-shot learning holds promise, further improvements are needed to achieve state-of-the-art outcomes. The authors conclude by emphasizing the potential value of prompting for legal professionals and the accessibility benefits of multilingual LLMs in the field of legal natural language processing (NLP). In an experiment, <cit.> built a fictitious law professor who had a normal week of duties, including teaching and community service, planned out for her. They then used ChatGPT prompts for each task to test how well the system worked. For six of the seven tasks given, ChatGPT was able to produce workable first drafts in just 23 minutes. ChatGPT was most proficient at the most common tasks, such as making a practice exam question or preparing a class handout. It struggled with more complex tasks, especially those related to education, although it still had the potential to save time in some cases. The experiment's findings indicate that ChatGPT, especially for service-related tasks, holds much promise for reducing some components of the workload of law faculty. Additionally, ChatGPT may enable law professors to spend less time on specific teaching responsibilities, freeing up more time for them to concentrate on pedagogy and create innovative teaching strategies. Finally, <cit.> describe the design and implementation of two immigration chatbots that advise their users on immigration-related legal questions and cases. One answers immigration-related questions, and the other answers legal questions from NBC employees. Both chatbots use supervised learning to learn embeddings for their answers.
§ DATASET
In our research work, we used five resources to acquire the input for our chatbot. We used three official documents, namely Law No. 20 of 2017 on Cooperatives, the Cooperatives Bylaws, and the Housing Cooperatives Bylaws. In addition, we created two question-and-answer datasets, which we discuss further in Section <ref>.
§.§ Formal Legal Documents
In order to give the chatbot the legal context it needs to answer legal questions, we provided it with the legal documents that lawyers and legal advisors rely on to answer such questions. We reformatted these documents, keeping only the necessary articles and definitions. These legal documents are:
* Law No. 20 of 2017 on Cooperatives: published in 2017 to govern cooperative work in Palestine, under which the authority supervising the cooperative work sector in Palestine, known as the Cooperative Work Agency (CWA), was established. It also specifically regulates cooperative societies and unions, cooperative members, and the local community.
* Cooperatives Bylaws: It is the bylaws that govern the cooperative and the union, which regulate their work and the nature of their activity, based on the provisions of Law Decree No. 20 of 2017.
* Housing Cooperatives Bylaws: It is the bylaws that govern the housing cooperatives, which regulate their work and the nature of their activity, based on the provisions of Law Decree No. 20 of 2017.
§.§ Question and Answers Dataset
To help the chatbot better understand legal questions, we created two additional datasets, each consisting of a JSON file of questions about Decree-Law No. (20) of 2017 on Cooperatives. The two datasets are as follows:
§.§.§ Human Generated Question Answer Dataset
We asked the legal advisor on cooperatives to create a dataset containing 40 questions and answers about different articles from the Decree – Law No. (20) Of 2017 On Cooperative. The questions and answers cover the basic topics of the definition of a cooperative, the requirements for forming a cooperative, the rights and responsibilities of cooperative members, and the role of the CWA in regulating cooperatives.
§.§.§ Chatgpt Generated Question and Answers
We used the ChatGPT API to generate 5 questions and their corresponding answers for each article of Law No. (20) of 2017 on Cooperative. However, we needed to customize the answers to simulate the response of a real legal advisor. This involved starting the answer by referring to the article number in the law. To achieve this, we utilized the following prompt structure, as shown in Figure <ref>: we first requested the generation of the question and answer, then provided the article itself, and finally, to control the output, we asked ChatGPT to create a dictionary with two keys: "question" and "answer". After ChatGPT generated the dictionary, we appended it to another dictionary to collect the data. This process resulted in 350 questions and their corresponding answers.
This prompt helped us to control the output of ChatGPT; an illustrative sketch of such a generation loop is shown below.
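The following is a hedged sketch of what this generation loop could look like with the 2023-era openai Python client; the prompt wording, model name, and JSON parsing are assumptions made for this sketch and not the exact snippet used in this work.

```python
# Hedged sketch of the per-article Q&A generation loop (illustrative only).
import json
import openai  # assumes the 2023-era client exposing ChatCompletion

# openai.api_key = "..."  # set credentials before calling

def generate_qa_for_article(article_text, n_questions=5, model="gpt-3.5-turbo"):
    prompt = (
        f"Generate {n_questions} legal questions and answers about the following "
        f"article of Decree-Law No. (20) of 2017 on Cooperatives. Start each answer "
        f"by citing the article number.\n\nArticle:\n{article_text}\n\n"
        'Return a JSON list of dictionaries with the keys "question" and "answer".'
    )
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    content = response["choices"][0]["message"]["content"]
    return json.loads(content)   # list of {"question": ..., "answer": ...}

# Collecting the generated pairs over all articles:
# dataset = {i: generate_qa_for_article(a) for i, a in enumerate(articles)}
```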
§ METHODOLOGY
In our work, we encountered a vast amount of textual data that exceeded ChatGPT (GPT-4)'s processing limit of 8,192 tokens <cit.>. In order to take advantage of the ChatGPT API and overcome this obstacle, we employed LlamaIndex <cit.>. This proved to be a strategic decision, as it enabled us to index large-scale datasets quickly and efficiently through its features tailored specifically for Language Models (LLMs). The provided tools excel at generating vectors for every document while keeping them readily available.
To make the text data compatible with the ChatGPT API, we employed LlamaIndex to create an index encompassing all the legal documents and question-answer data at hand. Subsequently, we generated vectors for each document, ensuring that the input size did not exceed 8,192 tokens, while employing a chunk size of 600 tokens. The chosen chunk size of 600 tokens aligned with the requirements of the LLM. Moreover, we configured the maximum chunk overlap to be 50 tokens.
We efficiently stored the generated vectors within the index, enabling their swift retrieval whenever necessary. Leveraging the LlamaIndex query engine, which harnessed the power of ChatGPT in the background, we successfully addressed our legal queries and concerns. Figure <ref> represents the comprehensive pipeline that we implemented for our case study, clearly demonstrating the use of LlamaIndex with ChatGPT and the subsequent vector generation and indexing of the legal documents.
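A rough sketch of this pipeline is shown below, assuming a 2023-era llama_index release; class names, defaults, and persistence calls vary across versions and are therefore illustrative.

```python
# Rough sketch of the indexing/query pipeline (version-dependent, illustrative).
from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex, ServiceContext

documents = SimpleDirectoryReader("legal_docs/").load_data()  # laws, bylaws, Q&A files

service_context = ServiceContext.from_defaults(
    chunk_size=600,        # chunk size used in this work
    chunk_overlap=50,      # maximum chunk overlap
)

index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist("index_store/")   # keep vectors readily available

query_engine = index.as_query_engine()          # ChatGPT is used in the background
response = query_engine.query("Is it permissible to establish more than one general union?")
print(response)
```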
§ EXPERIMENTAL SETUP AND EVALUATION RESULTS
After building the chatbot, we asked the legal advisor to write another 50 questions and their answers for testing purposes. We then used these questions to test the chatbot and compared the chatbot's answers to the legal advisor's answers. Overall, the chatbot was able to answer 41 questions correctly. For example, we asked the chatbot about membership, financial statements, administrative issues, and how to register a new cooperative. The chatbot not only gave the right answers but, in some cases, also cited the law number and article (see Table <ref>).
Although the chatbot was able to answer 41 questions correctly, not all of them were answered directly. Eight of the 41 answers were relevant but not direct. For example, when we asked the chatbot in Arabic "When does the management committee meet in the cooperative?", it gave us the correct answer, but it combined two answers: the first concerned the meeting of the management committee and the second the meeting of the general assembly. We observed many cases like this, which is due to not having enough questions and answers for each article. Some articles of the law are short, so 5 questions and answers were enough to give the chatbot the context of the article. However, some articles are long and need more questions and answers; 5 questions and answers were not enough to give the chatbot the context it needs to understand the difference between, for example, the meeting of the management committee and that of the general assembly. In general, though, it gave us the right answer (see Table <ref>).
Finally, when we analyzed the remaining wrong answers, we found that most of them were due to two reasons. First, there were not enough questions and answers for long articles, which require more explanation for the chatbot. Second, some articles have associated bylaws that also needed to be provided to the chatbot. For example, when we asked the chatbot "Is it permissible to establish more than one general union?", which is not permitted by law, the chatbot answered yes, which is incorrect (see Table <ref>).
To measure the performance of our chatbot, we used the following metrics:
Overall accuracy: This metric is calculated by dividing the total number of correct answers by the total number of questions asked. The equation for overall accuracy is:
Overall accuracy = Total number of correct answers/Total number of questions× 100
In this case, the chatbot achieved an overall accuracy of 41/50, or 82%. This is a good result, as it means that the model was able to correctly answer 82% of the questions asked; providing more data would likely further increase its accuracy.
Overall satisfaction: Many studies have used satisfaction scores together with other metrics to evaluate their trained chatbots <cit.>, but in our case, we used only the satisfaction score. We did this by letting the legal counsel give a mark for how satisfied they were with each answer. For a right answer, the legal counsel was very satisfied and gave a score of 100%. For wrong answers, the score was 0%. For related answers, the score was between 60% and 85%. We then averaged the satisfaction scores over the total number of questions.
Average satisfaction score = ∑_i=1^n S_i/n
where S_i is the satisfaction score for the i-th question and n is the total number of questions.
In this case, the chatbot achieved an average satisfaction score of 78.3%, which is also a good result for our chatbot.
Confusion matrix: To measure the performance of the chatbot, we used precision, recall, and the F1 score. A confusion matrix evaluates classification models by comparing actual and predicted values, from which precision, recall, and then the F1 score are computed. However, since we did not train the chatbot, we assumed that all the answers of the legal counselor are correct. This assumption affects the precision value, as the chatbot is not penalized for incorrectly identifying an answer as wrong. A sketch of this computation is given after the list below.
* Precision: Since we made the assumption that there are no wrong actual answers the precision for class (wrong) is 0, and the precision for class 1 (right + related ) is 1.0, indicating that our chatbot correctly predicted all instances as "right" or "related."
* Recall: The recall for class (wrong) is 0, indicating that our chatbot did not correctly identify any instances as "wrong." The recall for class (right/related) is 0.79, meaning that our chatbot correctly identified 79% of instances labeled as "right" or "related."
* F1-score: The F1-score for class (wrong) is 0, which aligns with the precision and recall being 0. The F1-score for class (right/related) is 0.88, indicating a relatively good balance between precision and recall for this class.
* The accuracy of our chatbot is reported as 0.79, meaning it correctly predicted the label for 79% of the instances in the questions that we provided.
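The sketch below reproduces this computation with scikit-learn under the stated assumption that every reference answer belongs to the positive ("right/related") class; the counts are illustrative, and the exact split between right, related, and wrong answers determines the reported figures.

```python
# Metric computation under the assumption that all reference answers are positive.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

n_total, n_positive_pred = 50, 41          # questions asked / answered right-or-related
y_true = [1] * n_total                     # assumption: every reference answer is correct
y_pred = [1] * n_positive_pred + [0] * (n_total - n_positive_pred)

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[1],
                                                   zero_division=0)
print(f"accuracy={acc:.2f}, precision={prec[0]:.2f}, recall={rec[0]:.2f}, f1={f1[0]:.2f}")
```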
For more information and details on the developed chatbot, please refer to our GitHub Repository at the following link: https://github.com/rabeeqasem/llm_chatbot_legalGithub
§ CONCLUSION
In this paper, we introduced our LLM-based legal chatbot that aims to assist Palestinian cooperatives and their members in finding relevant answers to their legal inquiries. Our objective was to provide accurate and reliable support 24/7 by leveraging the publicly available legal documents that we were able to acquire. Evaluating the chatbot on this dataset, we achieved an overall accuracy of 82% and an F1 score of 79%.
However, as we encountered an enormous volume of text data, we faced challenges with the chatbot's processing limit in terms of the maximum amount of text data that can be submitted through ChatGPT's API. To overcome this obstacle, we implemented a technique called 'vectorization' using LlamaIndex. This process converted the text data into a format that the chatbot could effectively utilize.
While the chatbot serves as a valuable legal aid for diverse cooperative members, certain limitations inherent to it should be acknowledged. Our study uncovered instances where the chatbot provided incorrect answers, which could potentially lead users to unintentionally violate legal regulations. Consequently, we believe that continuous development and improvement of the chatbot are necessary to enhance its accuracy and reliability.
Furthermore, it is crucial to be transparent about the chatbot's limitations and ensure that users have access to comprehensive information. This will enable them to make informed decisions about utilizing the chatbot's services. By refining the chatbot and openly communicating its capabilities, we can harness its potential as an invaluable tool for delivering reliable legal support to a diverse audience.
§ FUTURE WORKS
In future work, we plan to address the challenges highlighted in the previous section through the following steps. First, we will focus on increasing the size of the dataset by formalizing additional questions, with their relevance judgments, and by ensuring that the number of expert questions is close to the number generated by the chatbot. Second, we plan to post-process the answers produced by the chatbot to further improve their overall quality, i.e., the accuracy of the answers. This may also require the exploitation of legal domain knowledge and semantic resources that can be utilized to reformulate users' questions in a more legally relevant context.
|
http://arxiv.org/abs/2306.04993v1
|
20230608072924
|
A Model for Confined Solar Eruptions Including External Reconnection
|
[
"Jun Chen",
"Xin Cheng",
"Bernhard Kliem",
"Mingde Ding"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"physics.space-ph"
] |
Jun Chen (ORCID: 0000-0003-3060-0480), Xin Cheng (ORCID: 0000-0003-2837-7136), Bernhard Kliem (ORCID: 0000-0002-5740-8803), and Mingde Ding
Affiliations: School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China; Institute of Physics and Astronomy, University of Potsdam, Potsdam 14476, Germany
Contact: [email protected], [email protected], [email protected]
The violent disruption of the coronal magnetic field is often
observed to be restricted to the low corona, appearing as a confined
eruption.
The possible causes of the confinement remain elusive. Here,
we model the eruption of a magnetic flux rope in a quadrupolar active region,
with the parameters set such that magnetic X-lines exist
both below and above the rope. This facilitates the onset
of magnetic reconnection in either place but with partly opposing effects on the eruption.
The lower reconnection initially adds poloidal
flux to the rope, increasing the upward hoop force and supporting
the rise of the rope. However, when the flux of the magnetic side lobes enters the lower reconnection,
the flux rope is found to separate from the reconnection site and the flux accumulation ceases.
At the same time, the upper reconnection begins to reduce the poloidal
flux of the rope, decreasing its hoop force; eventually this cuts the rope completely.
The relative weight of the two reconnection processes is varied in the model,
and it is found that their combined effect and the tension force of the overlying field confine the eruption
if the flux ratio of the outer to the inner polarities exceeds
a threshold, which is ∼1.3 for our Cartesian box and chosen parameters.
We hence propose that external reconnection between an erupting flux rope and overlying flux
can play a vital role in confining eruptions.
§ INTRODUCTION
Coronal mass ejections (CMEs) and flares are the two most violent energy release phenomena in the solar atmosphere. They are believed to be caused by the same process in essence, i.e., the eruption of a flux rope, which is defined as a set of twisted field lines around a central axis <cit.>. Nevertheless, the eruption of a flux rope does not always produce a CME.
Based on the statistics of <cit.>,
about 45% of all flares above M1-class are not accompanied by CMEs. However, even for
flares without a CME, an erupting flux rope can often be observed,
although it is eventually confined to the low corona.
Moreover, such failed rope eruptions present an early kinematic evolution similar to successful ones
<cit.>.
The observation of a failed filament eruption in
<cit.> spawned a strong interest in the possible causes of the
confinement.
This particular event can be modeled as a kink-unstable flux rope in the stability domain of the torus instability <cit.>. However, since the helical kink instability appears to occur only in a minority of solar eruptions, their
success or failure
is often discussed in the framework of the properties of the torus instability <cit.>, whose threshold is given by
a critical decay index n_c. The decay index n describes how fast the external poloidal field, B_ep(R) (often simply referred to as the background field), declines with height,
n := -d ln B_ep(R) / d ln R,
where
R denotes
the distance of the rope axis to the center of an assumed approximately toroidal rope.
In the simplest case of a nearly toroidal flux rope shape and zero external toroidal (shear/guide) field, B_et = 0, the threshold is near its canonical value n_c=1.5, but varying parameters, in particular the flux rope geometry and shear field strength, cause it to vary in the range n_c ∼ 1–2
<cit.>.
For n>n_c the rope is torus unstable and erupts. If this condition is fulfilled along the whole path of the rising rope, the eruption can be successful.
This has been supported by a number of case and statistical studies <cit.>. Nevertheless, it was found
that torus-unstable rope eruptions may also suffer from failure if the decay index height profile, n(h), possesses a sufficiently deep minimum, such that a torus-stable height range with n<n_c lies above a torus-unstable height range <cit.>, or if the flux rope rotates strongly <cit.>. That is to say that the occurrence of torus instability is not a sufficient condition for a successful eruption.
Except for the decay property of the background field, the success or failure of an eruption is also influenced by other factors. Numerically and with laboratory experiments, it was revealed that a strong guide field component of the overlying field, B_et > B_ep, is able to confine an erupting flux rope
<cit.>.
The cases of an upper torus-stable height range and a strong shear/guide field are often jointly referred to as configurations with a too strong overlying flux, and this is widely considered to be the most common reason for the confinement.
Moreover, the twist of the
rope was found to be another decisive factor to influence the eruption <cit.>. Based on careful analyses of a data-driven magnetohydrodynamic (MHD) simulation, <cit.> proposed that the non-axisymmetry of the rope is an additional critical factor to constrain its eruption.
Inspired by previous observations and simulations, in which the confinement of
eruptions could result from a too strong overlying field <cit.> or from external reconnection between the erupting flux
and the overlying field <cit.>,
we here investigate
the joint action of these related effects
in the specific topology of a quadrupolar source region, which facilitates external reconnection.
The rest of the paper is arranged as follows:
in Section <ref>, the numerical model is detailed.
In Section <ref>, we present the results of the simulations,
followed by a summary and discussions in Section <ref>.
§ METHOD
§.§ Initial magnetic field
The classic model of a force-free flux rope by <cit.> (hereafter TD99)
is illustrated in Figure <ref>(a).
A toroidal ring current of major radius R and minor radius a is centered at (0, 0, -d).
A pair of magnetic charges ± q at (± L, 0, -d) provides the external poloidal field.
For the balance between the upward hoop force and the downward strapping force from the external field,
the equilibrium current I is given by
I = 8 π q L R (R^2+L^2)^{-3/2} / { μ_0 [ ln(8 R/a) - 3/2 + l_i/2 ] },
where l_i is the internal self-inductance per unit length of the
tube <cit.>. This quantity depends weakly on the current distribution in the ring.
For simplicity, we set B_et = 0.
In this work, we modify the TD99 model to set a flux rope in a quadrupolar active region (Figure <ref>(b)).
This is constructed by adding a second pair of magnetic charges with the strength of
± q_2 at (±L_2, 0, -d) with L_2=2L.
In order to yield the same strapping field strength at the geometrical torus axis
(at distance R from torus center) as for the bipole,
the strength of the inner pair of charges is adjusted to
q_1 = q - q_2 (L_2/L) [ (R^2+L^2) / (R^2+L_2^2) ]^{3/2}.
The flux from the inner pair yields a downward force, and the flux from the outer pair yields an upward force.
We set R=27.5 Mm, a=11.1 Mm, d=7.5 Mm, L=25 Mm, q=10^13 T m^2, and l_i=0.5.
Three initial configurations are created by setting q_2 to {0, -4, -5}× 10^13 T m^2, and the corresponding q_1 are derived from Equation (<ref>), so that - q_2/q_1= 0, 1.246, 1.329, respectively.
These runs are denoted with Bipole, Quadrupole1, and Quadrupole2, respectively.
As shown in Figure <ref>, the quadrupolar configurations possess two X- (null) lines, one above and one below the flux rope, which facilitate the onset of magnetic reconnection. The corresponding heights of the apex of the upper X-line are 50.3 and 45.2 Mm.
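As a rough numerical check of the two equations above, the following snippet evaluates the equilibrium current and q_1 for the parameters just listed; SI units and the value of μ_0 are the only added assumptions, and the printed values are indicative only.

```python
# Rough check of the equilibrium current and the inner-charge strength q_1
# for the chosen parameters (SI units; charges q in T m^2).
import numpy as np

mu0 = 4 * np.pi * 1e-7
Mm = 1e6                                   # 1 Mm = 1e6 m
R, a, d, L = 27.5 * Mm, 11.1 * Mm, 7.5 * Mm, 25.0 * Mm
L2, q, li = 2 * L, 1e13, 0.5

# Equilibrium current of the bipolar reference case:
I = (8 * np.pi * q * L * R * (R**2 + L**2) ** -1.5
     / (mu0 * (np.log(8 * R / a) - 1.5 + li / 2)))
print(f"equilibrium current I ~ {I:.2e} A")

def q1_from_q2(q2):
    # Inner-pair strength giving the same strapping field at the apex as the bipole.
    return q - q2 * (L2 / L) * ((R**2 + L**2) / (R**2 + L2**2)) ** 1.5

for q2 in (0.0, -4e13, -5e13):             # Bipole, Quadrupole1, Quadrupole2
    q1 = q1_from_q2(q2)
    print(f"q2 = {q2:+.1e}: q1 = {q1:.3e}, -q2/q1 = {-q2 / q1:.3f}")
```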
§.§ Numerical model
Before the simulation, a normalization is performed referring to the values at the apex of the geometric toroidal axis (0, 0, R-d).
We take the height R-d, initial field strength B_0, density ρ_0, corresponding Alfvén speed V_A=B_0/ √(μ_0 ρ_0) and corresponding Alfvén time τ_A=(R-d)/V_A at this site as the units of the corresponding variables. For example, for V_A=1000 km s^-1, we have τ_A=20 s.
The computations are performed in a Cartesian cubic box of
[-640,640]×[-640,640]×[0,1280] Mm.
We integrate the normalized ideal MHD equations neglecting gravity and thermal pressure:
∂_t ρ = -∇·(ρ u) ,
∂_t (ρ u) =-∇· (ρ u u)
+∇·𝖳 +J× B ,
∂_t B = -∇·(u B-B u) ,
where J ≡ ∇× B is the current density, 𝖳≡R_e^-1 ρ
[∇ u+(∇ u)^T-(2/3 ∇· u ) 𝖨] is the viscous stress tensor, 𝖨 is the second order unit tensor, ^T denotes the transposition for a second-order tensor, and R_e denotes the fluid Reynolds number.
Closed boundaries are applied (u = 0, at all boundaries),
resulting in an invariant normal magnetogram component (∂ B_z / ∂ t|_z=0 = 0).
<Ref> are integrated by the modified Lax-Wendroff scheme described in <cit.>. In place of the diffusive Lax step,
artificial smoothing <cit.> is applied to ρ through the substitution
ρ_i → (1-c_ρ) ρ_i + c_ρ/6 ∑_j ρ_j, where j are the 6 neighbor grid points of i.
This is similar in structure to the Lax term, which has c_ρ=1, but far less diffusive for small values of c_ρ.
This smoothing is also applied to u and B. The latter introduces numerical resistivity, which facilitates magnetic reconnection.
We set c_ρ= c_u= 0.01-0.1 (exponentially decreasing with height in [0, 100] Mm, and staying at 0.01 in the region above), and choose a small,
uniform c_B = 0.001 to ensure that magnetic diffusion is not significant outside of the reconnection regions.
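For illustration, the smoothing substitution can be written as follows for a scalar field on a uniform Cartesian grid; the boundary treatment and the height dependence of the coefficients in the actual code differ from this sketch.

```python
# Sketch of the artificial smoothing step: each interior grid point is relaxed
# toward the average of its six neighbors with a small coefficient c.
import numpy as np

def smooth(field, c):
    """field: 3D array on a Cartesian grid; c: smoothing coefficient (e.g. 0.001-0.1)."""
    neighbor_sum = (
        np.roll(field,  1, axis=0) + np.roll(field, -1, axis=0) +
        np.roll(field,  1, axis=1) + np.roll(field, -1, axis=1) +
        np.roll(field,  1, axis=2) + np.roll(field, -1, axis=2)
    )
    out = (1.0 - c) * field + (c / 6.0) * neighbor_sum
    # keep boundary planes fixed (closed boundaries in the actual setup)
    out[0, :, :], out[-1, :, :] = field[0, :, :], field[-1, :, :]
    out[:, 0, :], out[:, -1, :] = field[:, 0, :], field[:, -1, :]
    out[:, :, 0], out[:, :, -1] = field[:, :, 0], field[:, :, -1]
    return out
```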
The nonzero ∇·B resulting
from the finite differences is kept small by the standard diffusive treatment following <cit.>.
The initial density is set to ρ(x, t=0)=|B(x, t=0)|^3/2
(see, e.g., <cit.> for a discussion of this choice).
The initial velocity is set to u(x, t=0)=0.
3D magnetic reconnection preferentially takes place
where a large gradient of magnetic connectivity is present.
Such connectivity change can be quantified by the squashing factor Q <cit.>.
Separatrices are located where Q = ∞,
quasi-separatrix layers (QSLs) are located where Q ≫ 2.
The distribution of Q in
the midplane of the configuration, {y=0},
is here computed following <cit.>.
Two separatrices (QSLs) intersect with each other in a separator (quasi-separator or hyperbolic flux tube <cit.>). Such intersections, jointly referred to
as “(quasi-) separators” of the magnetic field in the following, are the favorable sites for 3D magnetic reconnection <cit.>.
These topological structures allow us to
quantify the 3D reconnection processes at the different locations and their temporal evolution. Because we have set B_et = 0, our quadrupolar configurations initially contain true separators, the X-lines, which would change to HFTs if B_et ≠ 0.
§ RESULTS
The rise profiles of the magnetic axis' apex point are shown in Figure <ref> for the three runs.
The decay indices of the external poloidal field at the initial magnetic axis of the Bipole and Quadrupole1–2 configurations are 1.73, 4.65, and 5.77, respectively.
For the Bipole case, the initial flux rope is only
slightly above the marginally unstable state, therefore it takes a
relatively long time to erupt.
The Quadrupole1–2 cases are not only initially positioned much further into
the unstable domain of parameter space, but their
external poloidal field
continues to decrease much faster
with height
up to the null line and field reversal, where n(R) has a pole (Figure <ref>(f)).
Consequently, their instability commences immediately and develops stronger.
It is worth noting that the relative magnetic helicity is largest for the Bipole and far smaller for Quadrupole1–2, with the flux-normalized values being H_m/F^2=-0.15, -0.004, and -0.002, respectively. The small values of the quadrupolar cases result from the opposite relative helicities between the flux rope and the oppositely directed inner and outer bipole fields.
We first focus on the evolution of Quadrupole2.
Two mechanisms drive the acceleration:
torus instability and “flare” reconnection in the vertical (“flare”) current sheet that forms from the initial X-line under
the flux rope. This reconnection feeds poloidal magnetic flux—the strapping flux in the center lobe—into, and generates an upward outflow toward, the flux rope.
However, the flux feeding only takes a short period, up to t≈6 τ_A.
Subsequently,
the flux rope axis quickly separates from
the lower reconnection region (within ∼34 τ_A; see Figure <ref>).
The separation is also obvious from the increasing distance between
the magnetic rope axis and the upper edge of the upward reconnection outflow in Figure <ref>. The reason for the separation and the subsequent decline of the flare reconnection (Figure <ref>) is the change from strapping-flux to side-lobe-flux reconnection, which happens when the bounding separatrices
meet at the flare current sheet (see Figure <ref> at
t=6.0 τ_A).
The reconnected side-lobe flux does not wrap around the erupting flux rope, but rather forms high-lying loops below the rope and above the side lobes (Figure <ref>,
t=9.4 τ_A, 17.8 τ_A). This reconnected flux separates the flux rope from the region of flare reconnection, terminating the support of the eruption by the flare reconnection through both flux accretion and momentum transfer by the reconnection outflow jet.
Figure <ref> shows a high gradient of U_z at the upper (quasi-) separator,
corresponding to an inflow in the reference frame tied to
the (quasi-) separator which rises with the flux rope.
This indicates that
reconnection also acts above the rope in a horizontal current sheet forming from the upper initial X-line
from the very beginning of the simulation. The upper reconnection reduces the constraining overlying flux,
and initially transfers it to the side lobes, joint with the reconnected strapping flux from the center lobe,
as in the breakout model. However, when all strapping flux is reconnected,
the upper reconnection begins to involve the flux rope, building up a connection of each rope footprint to the outer ambient flux sources.
At the same time, the side lobes meet in the flare current sheet (Figure <ref>).
The resulting erosion of the rope flux corresponds to a
reduction of the toroidal current I, which is proportional to the poloidal flux of the rope.
The strapping force in the rope is proportional to I, while the upward hoop force is proportional to I^2.
Thus, the upper reconnection with the flux rope weakens the net upward force in the rope
that drives the eruption.
Figure <ref> shows that the main
upward acceleration of the rope ends at t≈5.2 τ_A,
shortly before all strapping flux is reconnected and the upper reconnection and the separation of the rope begin.
The
tension force of the overlying flux rooted in the outer polarities
decelerates the rope in this interval, but the major deceleration,
leading to the confinement,
happens in a much longer subsequent period when all three effects act jointly (Figure <ref>, middle).
A transitory amplification of the reconnection outflows occurs when the side lobes join the lower reconnection because of their higher flux density (|q_2|>q_1).
This enhances the upward tension force of the upper outflow, strongly acting on
the flux rope around t=10 τ_A (Figure <ref>). The resulting
transitory second acceleration of the rope
remains minor in the overall evolution of the eruption (Figure <ref>).
In the early phase,
both lower and upper reconnection are proceeding simultaneously
(Figure <ref>, t = [4.1 τ_A, 5.5 τ_A]).
While the lower reconnection decouples from the flux rope after
t≈6 τ_A,
the upper reconnection acts strongly on the rope
during most of the deceleration phase
(Figure <ref>).
This results in all rope flux being peeled off, eventually destroying the rope
(Figure <ref>, t = 27.9 τ_A).
The run of Quadrupole1 also
shows the separation of the flux rope from the lower
reconnection region, only slightly later, also leading to deceleration.
This eruption is intermediate between the Bipole and Quadrupole2 runs.
It shows
a rise above 19 times of the initial height
at a considerable speed
(U_z≈0.4 V_A) with only a weak deceleration (Figure <ref>).
With the given
box, this eruption is ejective. Because the eruption shows a propagation phase dominated by inertia already from ≈1/10 of the box height, an even larger box is expected to yield the same result.
On the other hand, an exponential phase, a typical characteristic of instability, is not clearly seen here (similar to Run Quadrupole2). Comparing the two quadrupolar configurations, it is obvious that
a stronger q_2
forms a lower null and provides a higher amount of overlying flux that can be reconnected at the upper null; then the effect of restraining the eruption
by the upper reconnection is more significant. Additionally, the downward tension force of the overlying flux is higher.
We have further constrained the point of transition between confined and ejective behavior of the eruption in the quadrupolar configuration and found it to lie in the range |q_2| ∈
[4.25, 4.50]×10^13 T m^2,
corresponding to -q_2/q_1∈[1.27,1.29].
This range depends on the parameters d/R, a/R, and L/R, as well as (weakly) on the numerical settings.
§ SUMMARY AND DISCUSSIONS
In this Letter, we present a model for confined
solar eruptions
in quadrupolar field configurations with a flux rope pre-existing in the central flux lobe.
A new key feature of the model is the change in character of the upper reconnection
in the current sheet forming from the (quasi-) separator
above the erupting flux rope when all strapping flux in the center lobe has reconnected and the rope itself enters the reconnection. The upper
reconnection supports the eruption initially, in full agreement with the breakout model,
according to which the reconnection
moves overlying flux to the side lobes, decreasing its restraining force
<cit.>.
However, after all strapping flux in the center lobe has been removed, the upper reconnection erodes the flux rope, decreasing its upward hoop force
that drives the eruption and it can eventually destroy the rope.
At the same time, the flux of the side lobes enters the vertical current sheet under the rope.
The lower reconnection, often referred to as “flare reconnection”, also changes its character at this point.
The flux in the upward reconnection outflow then no longer wraps around the erupting flux rope but rather forms simple loops above the side lobes. This results in a separation of the erupting flux rope from the lower reconnection region, so that the strengthening of the upward force by flux accretion and momentum transfer to the rope terminates,
very soon after the flux rope has risen above the side lobes.
Thus, in eruptions from the central lobe of a quadrupolar configuration,
both the upper and lower reconnection act against the eruption when all strapping flux in the center lobe has been removed. Jointly with the standard tension force of the overlying flux rooted in the outer polarities,
this can confine the eruption.
The mechanism does not require the flux rope to exist before the onset of the eruption.
Rather, it can work in the same way when the formation of the flux rope commences
simultaneously with the eruption <cit.>.
It does require that the strapping flux in the center lobe, i.e.,
the total flux in the center lobe minus the rope flux at the onset of the eruption
(e.g., at the onset of the torus instability), be smaller
than the overlying flux rooted in the outer polarities.
This is guaranteed if |q_2|>q_1 but fulfilled also in a range of |q_2| somewhat smaller
than q_1 in the case of a pre-existing flux rope. However,
to explain the confinement of eruptions at the typically observed heights in the low to middle corona,
up to about z∼(3/4) R_⊙ and ∼ 20 times the initial height <cit.>,
the ratio |q_2/q_1| must not be too small.
The runs
Quadrupole1 and 2 and intermediate test runs suggest
a threshold of |q_2/q_1|∼1.3. The threshold depends on the parameters,
primarily on d/R, which generally influences the stability properties of the TD99 flux rope,
and on L_2/L, which determines the height profile of the flux overlying the flux rope and, hence,
the amount of overlying flux jointly with |q_2/q_1|. For larger L_2/L,
the field strength above the rope
decreases less with increasing height, so that the threshold of |q_2/q_1| is expected to decrease slightly. On the other hand, in spherical coordinates the field strength decreases faster, implying a somewhat higher threshold on the Sun.
The threshold value appears consistent with the source-region properties of confined vs. eruptive flares in <cit.> and <cit.>. All
confined events occurred in the central part of complex source regions suggestive of outer overlying flux at least as strong as the central flux around the erupting part of the polarity inversion line. All
ejective events occurred near the periphery of the source region where no such strong outer overlying flux was present.
Our results are also consistent with the confinement <cit.> and
success <cit.> of breakout eruptions that have used |q_2| significantly larger (smaller) than q_1 and showed (did not show) reconnection of side-lobe flux under the erupting flux, respectively.
The reconnection of side-lobe flux in the ejective breakout eruption in <cit.>
begins just when the kinetic energy in the box stops rising (see their Figures 4 and 14);
at this time the flux rope is already too high (z∼4 R_⊙) to be stopped by the remaining overlying flux
, i.e., their flux ratio corresponding to our |q_2/q_1| must be below the threshold for confinement.
As revealed by the Quadrupole1 run, the external reconnection might also be responsible for the deceleration of CMEs in the high corona, which is usually ascribed to aerodynamic drag <cit.>. Another implication is the transfer of magnetic helicity and twist from the reconnecting flux rope to the ambient field. This provides a possible interpretation for the formation of large-scale flux ropes, as indicated by large-scale filaments in the vicinity of sunspots <cit.>.
§ ACKNOWLEDGEMENTS
J.C., X.C., and M.D.D. are funded by National Key R&D Program of China under grant 2021YFA1600504 and by NSFC grant 12127901. B.K. acknowledges support from the DFG and from NASA through grants 80NSSC19K0860, 80NSSC19K0082, and 80NSSC20K1274.
[Antiochos et al.(1999)Antiochos, DeVore, &
Klimchuk]Antiochos1999apj
Antiochos, S. K., DeVore, C. R., & Klimchuk, J. A. 1999, , 510, 485,
10.1086/306563
[Cheng et al.(2017)Cheng, Guo, & Ding]cheng2017
Cheng, X., Guo, Y., & Ding, M. 2017, Science China Earth Sciences, 60,
1383, 10.1007/s11430-017-9074-6
[Cheng et al.(2020)Cheng, Zhang, Kliem, Török,
Xing, Zhou, Inhester, & Ding]cheng2020apj
Cheng, X., Zhang, J., Kliem, B., et al. 2020, , 894, 85,
10.3847/1538-4357/ab886a
[Cheng et al.(2011)Cheng, Zhang, Liu, &
Ding]cheng2011apjl
Cheng, X., Zhang, J., Liu, Y., & Ding, M. D. 2011, , 732, L25,
10.1088/2041-8205/732/2/L25
[Dedner et al.(2002)]Dedner2002
Dedner, A., Kemm, F., Kröner, D., et al. 2002, J. Comput. Phys., 175, 645
[Démoulin & Aulanier(2010)]Demoulin2010
Démoulin, P., & Aulanier, G. 2010, , 718, 1388,
10.1088/0004-637X/718/2/1388
[DeVore & Antiochos(2008)]DeVore2008apj
DeVore, C. R., & Antiochos, S. K. 2008, , 680, 740,
10.1086/588011
[Guo et al.(2010)Guo, Ding, Schmieder, Li,
Török, & Wiegelmann]guoyang2010apjl
Guo, Y., Ding, M. D., Schmieder, B., et al. 2010, , 725, L38,
10.1088/2041-8205/725/1/L38
[Guo et al.(2019)Guo, Xu, Ding, Chen, Xia, &
Keppens]guo2019apjl
Guo, Y., Xu, Y., Ding, M. D., et al. 2019, , 884, L1,
10.3847/2041-8213/ab4514
[Hassanin & Kliem(2016)]Hassanin2016apj
Hassanin, A., & Kliem, B. 2016, , 832, 106,
10.3847/0004-637X/832/2/106
[Huang et al.(2020)Huang, Cheng, & Ding]huang2020apjl
Huang, Z. W., Cheng, X., & Ding, M. D. 2020, , 904, L2,
10.3847/2041-8213/abc5b0
[Ji et al.(2003)Ji, Wang, Schmahl, Moon, &
Jiang]ji2003apj
Ji, H., Wang, H., Schmahl, E. J., Moon, Y. J., & Jiang, Y. 2003,
, 595, L135, 10.1086/378178
[Jiang et al.(2021b)Jiang, Feng, Liu, Yan,
Hu, Moore, Duan, Cui, Zuo, Wang, & Wei]JiangC al2021
Jiang, C., Feng, X., Liu, R., et al. 2021b, Nature
Astronomy, 5, 1126, 10.1038/s41550-021-01414-z
[Karpen et al.(2012)Karpen, Antiochos, &
DeVore]Karpen2012
Karpen, J. T., Antiochos, S. K., & DeVore, C. R. 2012, , 760, 81,
10.1088/0004-637X/760/1/81
[Kliem & Török(2006)]Kliem Torok2006
Kliem, B., & Török, T. 2006, Phys. Rev. Lett., 96, 255002,
10.1103/PhysRevLett.96.255002
[Koleva et al.(2012)Koleva, Madjarska, Duchlev, Schrijver,
Vial, Buchlin, & Dechev]Koleva al2012
Koleva, K., Madjarska, M. S., Duchlev, P., et al. 2012, , 540,
A127, 10.1051/0004-6361/201118588
[Kumar et al.(2023)Kumar, Karpen, Antiochos, DeVore,
Wyper, & Cho]Kumar2023apj
Kumar, P., Karpen, J. T., Antiochos, S. K., et al. 2023, , 943,
156, 10.3847/1538-4357/acaea4
[Liu et al.(2016)Liu, Kliem, Titov, Chen, Wang, Wang, Liu, Xu, &
Wiegelmann]liurui2016apj
Liu, R., Kliem, B., Titov, V. S., et al. 2016, The Astrophysical Journal,
818, 148
[Lynch et al.(2008)Lynch, Antiochos, DeVore, Luhmann, &
Zurbuchen]Lynch al2008
Lynch, B. J., Antiochos, S. K., DeVore, C. R., Luhmann, J. G., &
Zurbuchen, T. H. 2008, , 683, 1192, 10.1086/589738
[Mikic & Linker(1994)]Mikic Linker1994
Mikic, Z., & Linker, J. A. 1994, , 430, 898, 10.1086/174460
[Myers et al.(2015)Myers, Yamada, Ji, Yoo, Fox,
Jara-Almonte, Savcheva, & Deluca]Myers2015nature
Myers, C. E., Yamada, M., Ji, H., et al. 2015, , 528, 526,
10.1038/nature16188
[Netzel et al.(2012)Netzel, Mrozek, Kołomański, &
Gburek]Netzel2012
Netzel, A., Mrozek, T., Kołomański, S., & Gburek, S. 2012,
, 548, A89, 10.1051/0004-6361/201219208
[Nindos et al.(2015)Nindos, Patsourakos, Vourlidas, &
Tagikas]nindos2015
Nindos, A., Patsourakos, S., Vourlidas, A., & Tagikas, C. 2015, ,
808, 117, 10.1088/0004-637X/808/2/117
[Olmedo & Zhang(2010)]Olmedo2010
Olmedo, O., & Zhang, J. 2010, , 718, 433,
10.1088/0004-637X/718/1/433
[Patsourakos et al.(2020)Patsourakos, Vourlidas,
Török, Kliem, Antiochos, Archontis, Aulanier, Cheng,
Chintzoglou, Georgoulis, Green, Leake, Moore, Nindos, Syntelis,
Yardley, Yurchyshyn, & Zhang]Patsourakos2020
Patsourakos, S., Vourlidas, A., Török, T., et al. 2020, ,
216, 131, 10.1007/s11214-020-00757-9
[Pontin(2011)]Pontin2011
Pontin, D. I. 2011, Advances in Space Research, 47, 1508,
10.1016/j.asr.2010.12.022
[Priest(2000)]Priest2000
Priest, E., ed. 2000, Magnetic reconnection : MHD theory and applications
[Sato & Hayashi(1979)]Sato Hayashi1979
Sato, T., & Hayashi, T. 1979, Physics of Fluids, 22, 1189,
10.1063/1.862721
[Shafranov(1966)]Shafranov1966
Shafranov, V. D. 1966, Rev. Plasma Phys., 2, 103
[Shi et al.(2015)Shi, Wang, Wan, Cheng, Ding, &
Zhang]shitong2015apj
Shi, T., Wang, Y., Wan, L., et al. 2015, , 806, 271,
10.1088/0004-637X/806/2/271
[Sun et al.(2015)Sun, Bobra, Hoeksema, Liu, Li, Shen,
Couvidat, Norton, & Fisher]sunxudong2015
Sun, X., Bobra, M. G., Hoeksema, J. T., et al. 2015, , 804, L28,
10.1088/2041-8205/804/2/L28
[Titov & Démoulin(1999)]TD99
Titov, V., & Démoulin, P. 1999, Astronomy and Astrophysics, 351, 707
[Titov(2007)]Titov2007
Titov, V. S. 2007, ApJ, 660, 863, 10.1086/512671
[Titov et al.(2002)Titov, Hornig, &
Démoulin]Titov2002
Titov, V. S., Hornig, G., & Démoulin, P. 2002, Journal of
Geophysical Research (Space Physics), 107, 1164, 10.1029/2001JA000278
[Török & Kliem(2003)]Torok Kliem2003
Török, T., & Kliem, B. 2003, , 406, 1043,
10.1051/0004-6361:20030692
[Török & Kliem(2005)]Torok Kliem2005
Török, T., & Kliem, B. 2005, , 630, L97, 10.1086/462412
[Vršnak et al.(2004)Vršnak, Ruždjak,
Sudar, & Gopalswamy]Vrsnak2004aa
Vršnak, B., Ruždjak, D., Sudar, D., & Gopalswamy, N. 2004,
, 423, 717, 10.1051/0004-6361:20047169
[Wang et al.(2017)Wang, Liu, Wang, Liu, Chen, Liu,
Zhou, & Zhang]wangdong2017
Wang, D., Liu, R., Wang, Y., et al. 2017, , 843, L9,
10.3847/2041-8213/aa79f0
[Wang & Zhang(2007)]WangY ZhangJ2007
Wang, Y., & Zhang, J. 2007, , 665, 1428, 10.1086/519765
[Zhang et al.(2022)Zhang, Chen, Liu, & Wang]FastQSL
Zhang, P., Chen, J., Liu, R., & Wang, C. 2022, , 937, 26,
10.3847/1538-4357/ac8d61
[Zhong et al.(2021)Zhong, Guo, & Ding]zhongze2021nc
Zhong, Z., Guo, Y., & Ding, M. D. 2021, Nature Communications, 12, 2734,
10.1038/s41467-021-23037-8
[Zhou et al.(2019)Zhou, Cheng, Zhang, Wang, Wang, Liu,
Zhuang, & Cui]zhouzhenjun2019
Zhou, Z., Cheng, X., Zhang, J., et al. 2019, , 877, L28,
10.3847/2041-8213/ab21cb
|
http://arxiv.org/abs/2306.04314v1
|
20230607101950
|
Cross-Genre Argument Mining: Can Language Models Automatically Fill in Missing Discourse Markers?
|
[
"Gil Rocha",
"Henrique Lopes Cardoso",
"Jonas Belouadi",
"Steffen Eger"
] |
cs.CL
|
[
"cs.CL"
] |
Available corpora for Argument Mining differ along several axes, and one of the key differences is the presence (or absence) of discourse markers to signal argumentative content.
Exploring effective ways to use discourse markers has received wide attention in various discourse parsing tasks, from which it is well-known that discourse markers are strong indicators of discourse relations.
To improve the robustness of Argument Mining systems across different genres, we propose to automatically augment a given text with discourse markers such that all relations are explicitly signaled.
Our analysis unveils that popular language models taken out-of-the-box fail on this task; however, when fine-tuned on a new heterogeneous dataset that we construct (including synthetic and real examples), they perform considerably better.
We demonstrate the impact of our approach on an Argument Mining downstream task, evaluated on different corpora, showing that language models can be trained to automatically fill in discourse markers across different corpora, improving the performance of a downstream model in some, but not all, cases.
Our proposed approach can further be employed as an assistive tool for better discourse understanding.
§ INTRODUCTION
Argument Mining (AM) is a discourse parsing task that aims to automatically extract structured arguments from text.
In general, an argument in NLP and machine learning is a graph-based structure, where nodes correspond to Argumentative Discourse Units (ADUs), which are connected via argumentative relations (e.g., support or attack) <cit.>.
Available corpora for AM differ along several axes, such as language, genre, domain, and annotation schema <cit.>.
One key aspect that differs across different corpora (and even across different articles in the same corpus) is the presence (or absence) of discourse markers (DMs) <cit.>.
These DMs are lexical clues that typically precede ADUs.
Exploring effective ways to use DMs has received wide attention in various NLP tasks <cit.>, including AM-related tasks <cit.>.
In discourse parsing <cit.>, DMs are known to be strong cues for the identification of discourse relations <cit.>.
Similarly, for AM, the presence of DMs is a strong indicator for the identification of ADUs <cit.> and of the overall argument structure <cit.> (e.g., some DMs are clear indicators of the ADU role, such as premise, conclusion, or major claim).
The absence of DMs makes the task more challenging, requiring the system to more deeply capture semantic relations between ADUs <cit.>.
Ideally, an AM system should be able to exploit the presence of explicit DMs as clear indicators of the writer's intention to better capture the argument structure conveyed in the text.
However, when such surface-level indicators are not provided, the system should be robust enough to capture implicit relations between the corresponding ADUs.
To close the gap between these two scenarios (i.e., relations explicitly signaled with DMs vs. implicit relations), we ask whether recent large language models (LLMs) such as BART <cit.>, T5 <cit.> and ChatGPT[<https://openai.com/blog/chatgpt/>], can be used to automatically augment a given text with DMs such that all relations are explicitly signaled.
Due to the impressive language understanding and generation abilities of recent LLMs, we speculate that such capabilities could be leveraged to automatically augment a given text with DMs.
However, our analysis unveils that such language models (LMs), when employed in a zero-shot setting, underperform in this task.
To overcome this challenge, we hypothesize that a sequence-to-sequence (Seq2Seq) model fine-tuned to augment DMs in an end-to-end setting (from an original to a DM-augmented text) should be able to add coherent and grammatically correct DMs, thus adding crucial signals for AM systems.
Our second hypothesis is that downstream models can profit from automatically added DMs because these contain highly relevant signals for solving AM tasks, such as ADU identification and classification.
Moreover, given that the Seq2Seq model is fine-tuned on heterogeneous data, we expect it to perform well across different genres.
To this end, we demonstrate the impact of our approach on an AM downstream task, evaluated on different corpora.
The proposed approach is illustrated in Figure <ref>.
The example contains different versions of the same paragraph.
The version “Original text” was extracted from the Persuasive Essays corpus (PEC) <cit.> and contains explicit DMs provided by the essay's author.
“Text without DMs” shows how challenging it is to grasp the argument structure when the text is deprived of DMs.
Finally, “Text augmented with DMs” illustrates a DM-augmented version of the paragraph based on our proposed approach, where the automatic explicitation of DMs provides useful indicators to unveil the argumentative content.
Our experiments indicate that the proposed Seq2Seq models can augment a given text with relevant DMs; however, the lack of consistency and variability of augmented DMs impacts the capability of downstream task models to systematically improve the scores across different corpora.
Overall, we show that the DM augmentation approach improves the performance of AM systems in some corpora and that it provides a viable means to inject explicit indicators when the argumentative reasoning steps are implicit.
Besides improving the performance of AM systems, especially across different genres, we believe that this approach can be useful as an assistive tool for discourse understanding, e.g., in education contexts.
In summary, our main contributions are:
(i) we propose a synthetic template-based test suite, accompanied by automatic evaluation metrics and an annotation study, to assess the capabilities of state-of-the-art LMs to predict DMs;
(ii) we analyze the capabilities of LMs to augment text with DMs, finding that they underperform in this task;
(iii) we compile a heterogeneous collection of DM-related datasets on which we fine-tune LMs,
showing that we can substantially improve their ability for DM augmentation, and
(iv) we evaluate the impact of end-to-end DM augmentation in a downstream task and find that it improves the performance of AM systems in some, but not all, cases.
§ FILL-IN-THE-MASK DISCOURSE MARKER PREDICTION
We now assess whether current state-of-the-art language models can predict coherent and grammatically correct DMs.
To this end, we create an artificial dataset that allows us to evaluate whether language models are sensitive to specific semantically-critical edits in the text.
These targeted edits are based on the DMs that precede each ADU and the claim's stance.
When manipulated accordingly, the edits can entail relevant changes in the argument structure.
To illustrate the crucial role of DMs and stance for argument perception, consider Example 1 in Figure <ref>, in which we have omitted the DMs and the stance-revealing word.
Without these pieces of information, we are unable to map the argument to a concrete structure.
Examples 2 and 3 show that it is possible to obtain different text sequences with opposite stances, illustrating the impact that a single word (the stance in this case) can have on the structure of the argument.
Indeed, the role of the premises changes according to the stance (e.g., “X1” is an attacking premise in Example 2 but a supportive premise in Example 3), reinforcing that the semantically-critical edits in the stance impact the argument structure.
We also note that, even though the text sequences are deprived of DMs, we can infer (based on the semantics of the content) the corresponding argument structure by revealing the stance.
Finally, Examples 4 and 5 show that making the DMs explicit improves the readability and makes the argument structure clearer (reducing the cognitive interpretation demands required to unlock the argument structure compared to Examples 2 and 3).
Examples 4 and 5 also show that by changing the stance, adjustments of the DMs that precede each ADU are required to obtain a coherent text sequence.
Overall, Figure <ref> illustrates the key property that motivated our artificial dataset: subtle edits in content (i.e., stance and DMs) might have a relevant impact on the argument structure.
On the other hand, some edits do not entail any difference in the argument structure (e.g., the position of ADUs, such as whether the claim occurs before/after the premises).
To assess the sensitivity of language models to capture these targeted edits, we use a “fill-in-the-mask” setup, where some of the DMs are masked, and the models aim to predict the masked content.
To assess the robustness of language models, we propose a challenging testbed by providing text sequences that share similar ADUs but might entail different argument structures based on the semantically-critical edits in the text previously mentioned (Section <ref>).
To master this task, models are not only required to capture the semantics of the sentence (claim's stance and role of each ADU) but also to take into account the DMs that precede the ADUs (as explicit indicators of other ADUs' roles and positioning).
§.§ Artificial dataset
Each sample comprises a claim and one (or two) premise(s) such that we can map each sample to a simple argument structure.
Every ADU is preceded by one DM that signals its role.
Claims have a clear stance towards a given position (e.g., “we should introduce carbon taxes”).
This stance is identifiable by a keyword in the sentence (e.g., “introduce”).
To make the dataset challenging, we also provide samples with opposite stances (e.g., “introduce” vs. “abolish”).
We have one premise in support of a given ⟨ claim, stance ⟩, and another against it. For the opposite stance, the roles of the premises are inverted (i.e., the supportive premise for the claim with the original stance becomes the attacking premise for the claim with the opposite stance).
For the example in Figure <ref>, we have the following core elements:
claim = “we should <STANCE> carbon taxes”,
original stance = “introduce”,
opposite stance = “abolish”,
premise support (original stance) = “humanity must embrace clean energy in order to fight climate change”, and
premise attack (original stance) = “ecological concerns add further strain on the economy”.
Based on these core elements, we follow a template-based procedure to generate different samples.
The templates are based on a set of configurable parameters: number of ADUs ∈{2, 3}; stance role ∈{original, opposite}; claim position ∈{1, 2}; premise role ∈{support, attack}; supportive premise position ∈{1, 2}; prediction type ∈{dm_1, dm_2, dm_3}.
Additional details regarding the templates can be found in Appendix <ref>.
We generate one sample for each possible configuration, resulting in 40 samples generated for a given instantiation of the aforementioned core elements.
DMs are added based on the role of the ADU that they precede, using a fixed set of DMs (details provided in Appendix <ref>).
Even though using a fixed set of DMs comes with a lack of diversity in our examples, the main goal is to add gold DMs consistent with the ADU role they precede.
We expect that the language models will naturally add some variability to the set of predicted DMs, which we compare with our gold DM set (using both lexical- and semantic-level metrics, as introduced in Section <ref>).
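To make the procedure concrete, a minimal sketch of this template-based generation is shown below; the parameter grid is simplified (the masked-DM position is omitted), and the DM choices and dictionary keys are illustrative rather than the exact ones listed in the appendices.

```python
from itertools import product

# Hypothetical DM per ADU role; the real DM lists follow the appendix conventions.
DMS = {"claim": "I believe that", "support": "because", "attack": "however,"}

def generate_samples(core):
    """Enumerate (a subset of) template configurations for one core-element instantiation."""
    samples = []
    for n_adus, stance, claim_pos, sup_pos in product((2, 3), ("original", "opposite"),
                                                      (1, 2), (1, 2)):
        claim = core["claim"].replace("<STANCE>", core[f"{stance}_stance"])
        # Premise roles are inverted when the opposite stance is used.
        support = core["premise_support"] if stance == "original" else core["premise_attack"]
        attack = core["premise_attack"] if stance == "original" else core["premise_support"]
        premises = [("support", support), ("attack", attack)][: n_adus - 1]
        if sup_pos == 2 and len(premises) == 2:
            premises = premises[::-1]
        adus = [("claim", claim)] + premises if claim_pos == 1 else premises + [("claim", claim)]
        text = " ".join(f"{DMS[role]} {adu}" for role, adu in adus)
        samples.append({"text": text, "config": (n_adus, stance, claim_pos, sup_pos)})
    return samples

core = {"claim": "we should <STANCE> carbon taxes",
        "original_stance": "introduce", "opposite_stance": "abolish",
        "premise_support": "humanity must embrace clean energy to fight climate change",
        "premise_attack": "ecological concerns add further strain on the economy"}
print(len(generate_samples(core)))  # 16 configurations in this simplified sketch
```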
Examples 4 and 5 in Figure <ref> illustrate two samples from the dataset.
In these examples, all possible masked tokens are revealed (i.e., the underlined DMs).
The parameter “prediction type” will dictate which of these tokens will be masked for a given sample.
To instantiate the “claim”, “original stance”, “premise support” and “premise attack”, we employ the Class of Principled Arguments (CoPAs) set provided by <cit.>.
Details regarding this procedure can be found in Appendix <ref>.
The test set contains 15 instantiations of the core elements (all from different CoPAs), resulting in a total of 600 samples (40 samples per instantiation × 15 instantiations).
For the train and dev set, we use only the original stance role (i.e., stance role = original), reducing to 20 the number of samples generated for each core element instantiation.
We have 251 and 38 instantiations of the core elements[The train set contains 12 CoPAs, and the dev set contains 2. To increase the number of instantiations in these partitions, we include all possible motions for each CoPA. Note that different instantiations based on the same CoPA have different claim content but share the same premises (i.e., premises are attached to a given CoPA).]
resulting in a total of 5020 and 760 samples (for the train and dev set, respectively).
To avoid overlap of the core elements across different partitions, a CoPA being used in one partition is not used in another.
§.§ Automatic evaluation metrics
To evaluate the quality of the predictions provided by the models (compared to the gold DMs), we explore the following metrics.
(1) Embeddings-based text similarity: “word embs” based on pre-trained embeddings from spaCy library[<https://spacy.io/>], “retrofit embs” based on pre-trained embeddings from LEAR <cit.>, “sbert embs” using pre-trained sentence embeddings from the SBERT library <cit.>.
(2) Argument marker sense (“arg marker”): DMs are mapped to argument marker senses (i.e., “forward”, “backward”, “thesis”, and “rebuttal”) <cit.>; we consider a prediction correct if the predicted and gold DM senses match.
(3) Discourse relation sense (“disc rel”): we proceed similar to “arg marker” but using discourse relation senses <cit.>. These senses are organized hierarchically into 3 levels (e.g., “Contingency.Cause.Result”) – in this work, we consider only the first level of the senses (i.e., “Comparison”, “Contingency”, “Expansion”, and “Temporal”).
Appendix <ref> provides additional details on how these metrics are computed.
For all metrics, scores range from 0 to 1, and obtaining higher scores means performing better (on the corresponding metric).
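As an illustration, a simplified version of the sense-based matching and an SBERT-style similarity could look as follows; the checkpoint name and the toy sense lexicons are stand-ins for the actual resources described above.

```python
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the SBERT embeddings used above

# Hypothetical sense lexicons mapping DMs to argument-marker / discourse-relation senses.
ARG_MARKER = {"because": "backward", "therefore": "forward", "however": "rebuttal"}
DISC_REL = {"because": "Contingency", "however": "Comparison", "moreover": "Expansion"}

def sbert_sim(pred_dm: str, gold_dm: str) -> float:
    """Cosine similarity between predicted and gold DM, clipped to [0, 1]."""
    emb = sbert.encode([pred_dm, gold_dm], convert_to_tensor=True)
    return max(0.0, float(util.cos_sim(emb[0], emb[1])))

def sense_match(pred_dm: str, gold_dm: str, lexicon: dict) -> float:
    """1.0 if both DMs map to the same sense, 0.0 otherwise (unknown DMs count as a miss)."""
    pred = lexicon.get(pred_dm.lower().strip(" ,"))
    gold = lexicon.get(gold_dm.lower().strip(" ,"))
    return float(pred is not None and pred == gold)

print(sbert_sim("however", "nevertheless"), sense_match("because", "since", ARG_MARKER))
```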
§.§ Models
We employ the following LMs:
BERT (“bert-base-cased”) <cit.>, XLM-RoBERTa (XLM-R) (“xlm-roberta-base”) <cit.>, and BART (“facebook/bart-base”) <cit.>.
As a first step in our analysis, we frame the DM augmentation problem as a single mask token prediction task.
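For the encoder-only models this corresponds to a standard fill-mask setup; a minimal sketch with the HuggingFace pipeline (the example text is illustrative) is:

```python
from transformers import pipeline

# Single mask token prediction for the DM slot ("[MASK]" is BERT's mask token;
# XLM-R and BART use "<mask>" instead).
fill = pipeline("fill-mask", model="bert-base-cased")
text = ("Humanity must embrace clean energy in order to fight climate change. "
        "[MASK] we should introduce carbon taxes.")
for candidate in fill(text, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```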
For BART, we also report results in a Seq2Seq setting, where the model receives the same input sequence as the previously mentioned models (text with a single mask token) and returns a text sequence as output.
BART is a Seq2Seq model which uses both an encoder and a decoder from a Transformer-based architecture <cit.>. Consequently, we can explore the Seq2Seq capabilities of this model to perform end-to-end DM augmentation, contrary to the other models which only contain the encoder component (hence, limited to the single mask-filling setting).
Including this Seq2Seq variant in the analysis provides a means to compare the two formulations: simpler single-mask prediction vs. Seq2Seq prediction with the same underlying BART model.
For the Seq2Seq setting, determining the predicted DM is more laborious, requiring a comparison between the input and output sequence.
Based on a diff-tool implementation[<https://docs.python.org/3/library/difflib.html>], we determine the subsequence from the output sequence (in this case, we might have a varied number of tokens being predicted, from zero to several tokens) that matches the “<mask>” token in the input sequence.
Note that the output sequence might contain further differences as compared to the input sequence (i.e., the model might perform further edits to the sequence besides mask-filling); however, these differences are discarded for the fill-in-the-mask DM prediction task evaluation.
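A sketch of this diff-based extraction, assuming whitespace tokenization and a single "<mask>" slot, is given below:

```python
import difflib

def extract_predicted_dm(masked_input: str, generated_output: str) -> str:
    """Return the span of the generated output aligned with the '<mask>' slot of the input."""
    src = masked_input.split()
    out = generated_output.split()
    matcher = difflib.SequenceMatcher(a=src, b=out, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # The mask can only be covered by a 'replace' or 'delete' block on the source side.
        if tag in ("replace", "delete") and "<mask>" in src[i1:i2]:
            return " ".join(out[j1:j2])  # zero or more predicted tokens
    return ""  # the model dropped the slot entirely

# Example: the extracted prediction here is "As a result,".
print(extract_predicted_dm(
    "I studied hard. <mask> I passed the exam.",
    "I studied hard. As a result, I passed the exam."))
```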
§.§ Experiments
Table <ref> shows the results obtained on the test set of the Artificial dataset for the fill-in-the-mask DM prediction task.
In a zero-shot setting, the best-performing model leads on all evaluation dimensions except for "arg marker".
BART models obtain the lowest scores for all metrics.
We explore fine-tuning the models on the Artificial dataset (using the train and dev set partitions).
We use BERT for these fine-tuning experiments.
Comparing the fine-tuned BERT with its zero-shot counterpart, we observe clear improvements in all metrics.
Increases are particularly striking for “arg marker” and “disc rel” where models improve from 0.24-0.44 to almost 0.9.
Thus, we conclude that the DM slot-filling task is very challenging in a zero-shot setting for current LMs, but after fine-tuning, the performance can be clearly improved.
To analyze the extent to which models overfit the fine-tuning data, we selected samples from the training set by constraining some of the parameters used to generate the artificial dataset.
The test set remains the same as in previous experiments.
In the first setting, only samples containing a single sentence are included (besides single sentences, the test set also includes samples containing two sentences with 3 ADUs that the model was not trained with).
In the second setting, only samples containing two sentences with 3 ADUs are included (thus, even though the model is trained to make predictions in the first sentence, it is not explicitly trained to make predictions when only a single sentence is provided, as observed in some samples in the test set).
In the third setting, we only include samples where the mask token is placed at the beginning of the first sentence (consequently, the model is not trained to predict DMs in other positions).
In the fourth setting, we only include samples where the mask is placed in the DM preceding the second ADU of the first sentence.
For the last two settings, the model is not trained to make predictions in the second sentence; hence, we can analyze whether models generalize well in these cases.
Comparing the different settings explored for fine-tuning experiments, we observe that constraining the training data to single-sentence samples (“2 ADUs”) or to a specific “pred type” negatively impacts the scores (performing even below the zero-shot baseline in some metrics).
These findings indicate that the models fail to generalize when the test set contains samples following templates not provided in the training data.
This exposes some limitations of recent LMs.
§.§ Error analysis
We focus our error analysis on the discourse-level senses: “arg marker” and “disc rel”.
The goal is to understand if the predicted DMs are aligned (regarding positioning) with the gold DMs.
Regarding zero-shot models, we make the following observations.
BERT and XLM-R often predict DMs found in the DM lists for both senses, meaning they can predict conventional and sound DMs. However, there is some confusion between the labels "backward" and "forward" vs. "rebuttal", and "Comparison" vs. "Expansion", indicating that the models are not robust to challenging "edge cases" in the Artificial dataset (i.e., different text sequences in the dataset where subtle changes in the content entail different argument structures and/or sets of DMs).
BART-based models predict more diverse DMs, increasing the number of predictions not found in the DM lists.
We observe less confusion between these labels, indicating that these models are more robust to the edge cases.
As the automatic evaluation metrics indicate, the fine-tuned BERT performs better than the zero-shot models.
Nonetheless, we still observe some confusion between the labels “backward” and “forward” vs. “rebuttal” and “Contingency” vs. “Comparison”, even though these confusions are much less frequent than in zero-shot models.
Some examples from the test set of the Artificial Dataset are shown in Table <ref> (Appendix <ref>). We also show the predictions made by the zero-shot models and by the fine-tuned BERT model in Table <ref> (Appendix <ref>).
§.§ Human evaluation
We conduct a human evaluation experiment to assess the quality of the predictions provided by the models in a zero-shot setting.
Furthermore, we also aim to determine if the automatic evaluation metrics correlate with human assessments.
We selected 20 samples from the Artificial dataset, covering different templates.
For each sample, we get the predictions provided by each model analyzed in a zero-shot setting (i.e., BERT, XLM-R, and the two BART variants), resulting in a total of 80 samples; after removing repeated instances with the same prediction, each annotator analyzed 67 samples.
Three annotators performed this assessment.
Each annotator rates the prediction made by the model based on two criteria:
* Grammaticality: Is the predicted content grammatically correct given the context? The annotator is asked to rate with one of the following options: (-1): no, incorrect or out-of-context; (0): neutral or ambiguous; (+1): yes, it is appropriate.
* Coherence: Is the connotation of the predicted content correct/appropriate taking into account the surrounding context? Options: (-1): incorrect, predicted content is in the opposite sense; (0): neutral or ambiguous; (+1): correct, right sense.
Appendix <ref> shows some of the samples presented to the annotators and corresponding ratings.
We report inter-annotator agreement (IAA) using Cohen's κ metric <cit.>, based on the scikit-learn <cit.> implementation.
For “Grammaticality”, we obtain a Cohen's κ score of 0.7543 (which corresponds to “substantial” agreement according to the widely used scale of values proposed by <cit.>); for “Coherence”, a “moderate” agreement of 0.5848.
Overall, IAA scores indicate that human annotators can perform this task with reasonable agreement; however, assessing the “Coherence” criteria consistently (especially considering all the subtle changes in content that lead to different argument structures) is a challenging task that requires more cognitive effort.
Analyzing disagreements, we observe that most of them occur when one of the annotators provides the label “neutral or ambiguous” (0) while the other annotator considers either (+1) or (-1).
We also perform a correlation analysis between criteria “Grammaticality” and “Coherence” to determine if the labels provided by each annotator for these criteria are correlated. We obtain Pearson correlation coefficients of 0.0573, 0.0845, and 0.1945 (very low correlation).
Therefore, we conclude that annotators can use the criteria independently.
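Both statistics can be computed directly with scikit-learn and SciPy; a sketch with made-up ratings is shown below.

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Hypothetical per-sample ratings from two annotators on the Grammaticality criterion.
annotator_1 = [1, 1, 0, -1, 1, 0, 1, -1]
annotator_2 = [1, 1, 0, -1, 0, 0, 1, -1]
print("Cohen's kappa:", cohen_kappa_score(annotator_1, annotator_2))

# Correlation between one annotator's Grammaticality and Coherence labels.
coherence_1 = [1, 0, 0, -1, 1, 1, 1, -1]
r, p_value = pearsonr(annotator_1, coherence_1)
print("Pearson r:", r, "p-value:", p_value)
```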
To determine whether automatic evaluation metrics (Section <ref>) are aligned with human assessments, we perform a correlation analysis.
To obtain a gold standard label for each text sequence used in the human evaluation, we average the labels provided by the three annotators.
The results for the correlation analysis are presented in Table <ref> (using the Pearson correlation coefficient).
For the “Grammaticality” criterion, “retrofit embs” is the metric most correlated with human assessments, followed closely by “word embs”.
Regarding the “Coherence” criterion, “disc rel” is the most correlated metric.
Intuitively, metrics based on discourse-level senses are more correlated with the “Coherence” criterion because they capture the discourse-level role that these words have in the sentence.
§ DATA
In this section, we introduce the corpora used in the end-to-end DM augmentation (Section <ref>) and downstream task (Section <ref>) experiments.
In Section <ref>, we describe three corpora containing annotations of argumentative content that are used in our experiments only for evaluation purposes.
Then, in Section <ref>, we describe the corpora containing annotations of DMs.
These corpora are used in our experiments to train Seq2Seq models to perform end-to-end DM augmentation.
§.§ Argument Mining data
Persuasive Essays corpus (PEC) <cit.>
contains token-level annotations of ADUs and their relations from student essays. An argument consists of a claim and one (or more) premise(s), connected via argumentative relations: “Support” or “Attack”.
Arguments are constrained to the paragraph boundaries (i.e., the corresponding ADUs must occur in the same paragraph).
Each ADU is labeled with one of the following: “Premise” (), “Claim” (), or “Major Claim” ().
A paragraph might contain zero or more arguments.
The distribution of token-level labels is: (15%), (45%), (8%), (32%).
PEC contains 402 essays, 1833 paragraphs, 1130 arguments, 6089 ADUs, and an average of 366 tokens per essay.
PEC is a well-established and one of the most widely explored corpora for tasks.
Microtext corpus (MTX) <cit.>
contains token-level annotations of ADUs from short texts (six or fewer sentences) written in response to trigger questions, such as “Should one do X”.
Each microtext consists of one claim and several premises.
Each ADU is labeled with one of the following: claim or premise.
Note that all tokens are associated with an ADU (MTX does not contain non-argumentative tokens).
It contains 112 microtexts.
The distribution of token-level labels is: claim (18%) and premise (82%).
Hotel reviews corpus (Hotel) <cit.>
contains token-level annotations of ADUs from Tripadvisor hotel reviews.
Each sub-sentence in the review is considered a clause. Annotators were asked to annotate each clause with one of the following labels: “Background” (), , “Implicit Premise” (), , , “Recommendation” (), or “Non-argumentative” (O).
It contains 194 reviews, with an average of 185 tokens per review.
The distribution of token-level labels is: (7%), (39%), (8%), (7%), (22%), (5%), (12%).
We expect this to be a challenging corpus for ADU boundary detection and classification because:
(a) it contains user-generated text with several abbreviations and grammatical errors;
(b) the label space is larger; and
(c) the original text is mostly deprived of explicit DMs.
§.§ Data with DM annotations
Artificial dataset (AD)
Dataset proposed in this paper to assess the capabilities of LMs to predict DMs.
Each sample contains a claim and one (or two) premise(s).
Every ADU is preceded by one DM that signals its role (further details are provided in Section <ref>).
Discovery <cit.>
provides a collection of adjacent sentence pairs ⟨ s1, s2 ⟩ and corresponding DM y that occur at the beginning of s2.
This corpus was designed for the Discourse Connective Prediction (DCP) task, where the goal is to determine y (from a fixed set of possible DMs) based on s1 and s2; e.g., s1 = “The analysis results suggest that the HCI can identify incipient fan bearing failures and describe the bearing degradation process.”, s2 = “The work presented in this paper provides a promising method for fan bearing health evaluation and prognosis.”, and y = “overall,”.
Input sequences are extracted from the Depcc web corpus <cit.>, which consists of English texts collected from commoncrawl web data.
This corpus differs from related work by the diversity of the DMs collected (a total of 174 different classes of DMs were collected, while related work collected 15 or fewer classes of DMs, e.g., the DisSent corpus <cit.>).
PDTB-3 <cit.>[<https://catalog.ldc.upenn.edu/LDC2019T05>]
contains annotations of discourse relations for articles extracted from the Wall Street Journal (WSJ). These discourse relations describe the relationship between two discourse units (e.g., propositions or sentences) and are grounded on explicit DMs occurring in the text (explicit discourse relation) or in the adjacency of the discourse units (implicit discourse relation). For explicit relations, annotators were asked to annotate: the connective, the two text spans that hold the relation, and the sense it conveys based on the PDTB senses hierarchy. For implicit relations, annotators were asked to provide an explicit connective that best expresses the sense of the relation. This resource has been widely used for research related to discourse parsing.
§ END-TO-END DISCOURSE MARKER AUGMENTATION
In Section <ref>, based on experiments in a challenging synthetic testbed proposed in this work (the Artificial Dataset), we have shown that DMs play a crucial role in argument perception and that recent LMs struggle with DM prediction.
We now move to more real-world experiments.
As detailed in Section <ref>, our goal is to automatically augment a text with DMs such that downstream task models can take advantage of explicit signals automatically added.
To this end, we need to conceive models that can automatically (a) identify where DMs should be added and (b) determine which DMs should be added based on the context.
We frame this task as a Seq2Seq problem.
As input, the models receive a (grammatically sound) text sequence that might (or not) contain DMs.
The output is the text sequence populated with DMs.
For instance, for AM tasks, we expect the DMs to be added preceding the ADUs.
Model
We employ recent Seq2Seq language models, namely:
BART (“facebook/bart-base”) <cit.> and T5 (“t5-base” and “t5-large”) <cit.>.
We use default “AutoModelForSeq2SeqLM” and “Seq2SeqTrainingArguments” parameters provided by the HuggingFace library, except for the following: scheduler = “constant” and max training epochs = 5.
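A minimal sketch of this fine-tuning setup is given below; the toy input/output pair, the output directory, and the column names are placeholders, while the non-default hyperparameters are the ones listed above.

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "t5-base"  # or "facebook/bart-base", "t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy (input -> DM-augmented output) pair; the real data comes from Discovery, AD, and PDTB.
raw = Dataset.from_dict({
    "source": ["I studied hard. I passed the exam."],
    "target": ["I studied hard. As a result, I passed the exam."],
})

def tokenize(batch):
    enc = tokenizer(batch["source"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(text_target=batch["target"], truncation=True, max_length=512)["input_ids"]
    return enc

train_dataset = raw.map(tokenize, batched=True, remove_columns=["source", "target"])

args = Seq2SeqTrainingArguments(
    output_dir="dm-augmentation",    # placeholder directory
    lr_scheduler_type="constant",    # non-default settings mentioned above
    num_train_epochs=5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```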
§.§ Evaluation
As previously indicated, explicit DMs are known to be strong indicators of argumentative content, but whether to include explicit DMs inside argument boundaries is an annotation guideline detail that differs across corpora.
Furthermore, in the original corpora (e.g., the corpora explored in this work, Section <ref>), DMs are not directly annotated.
We identify the gold DMs based on a heuristic approach proposed by <cit.>.
For PEC, we consider as gold DM the span of text that precedes the corresponding ADU; more concretely, the span to the left of the ADU until the beginning of the sentence or the end of a preceding ADU is reached.
For MTX, DMs are included inside ADU boundaries; in this case, if an ADU begins with a span of text specified in a DM list,[The longest occurring one, using the same list proposed by <cit.>, which is composed of DMs that can be found in PEC and PDTB.] we consider that span of text as the gold DM and the following tokens as the ADU.
For Hotel, not explored by <cit.>, we proceed similarly to MTX.
We would like to emphasize that following this heuristic-based approach to decouple DMs from ADUs (in the case of the MTX and Hotel datasets) keeps the assumption that DMs typically precede ADUs sound and valid; this has already been considered and studied in prior work <cit.>, and only requires some additional pre-processing at this stage to normalize the corpora along this axis.
To detokenize the token sequences provided by the corpora, we use sacremoses[<https://github.com/alvations/sacremoses>].
As Seq2Seq models output a text sequence, we need to determine the DMs that were augmented in the text based on the corresponding output sequence.
Similar to the approach detailed in Section <ref>, we use a diff-tool implementation, but in this case, we might have multiple mask tokens (one for each ADU).
Based on this procedure, we obtain the list of DMs predicted by the model, which we can compare to the list of gold DMs extracted from the original input sequence.
To illustrate this procedure, consider the example in Figure <ref>.
The gold DMs for this input sequence are: [“”, “However”, “”, “In my opinion”].
An empty string means that we have an implicit DM (i.e., no DM preceding the ADU in the original input sequence); for the remaining, an explicit DM was identified in the original input sequence.
The predicted DMs are: [“Indeed”, “However”, “Furthermore”, “In fact”].
In terms of evaluation protocol, we follow two procedures:
(a) Explicit DMs accuracy analysis: based on the gold explicit DMs in the original input sequence, we determine whether the model predicted a DM in the same location and whether the prediction is correct (using the automatic evaluation metrics described in Section <ref>). For sense-based metrics, only gold DMs that can be mapped to some sense are evaluated. With this analysis, we aim to determine the quality of the predicted DMs (i.e., if they are aligned with gold DMs at the lexical and semantic-level).
(b) Coverage analysis: based on all candidate DMs (explicit and implicit) that could be added to the input sequence (all elements in the gold DM list), we determine the percentage of DMs that are predicted. The aim of this analysis is to determine to what extent the model is augmenting the data with DMs in the correct locations (including implicit DMs, which could not be evaluated in (a)).
For an input sequence, there may be multiple DMs to add; our metrics average over all occurrences of DMs.
Then, we average over all input sequences to obtain the final scores, as reported in Table <ref> (for explicit DMs accuracy analysis) and Table <ref> (for coverage analysis).
Importantly, further changes to the input sequence might be performed by the Seq2Seq model (i.e., besides adding the DMs, the model might also commit/fix grammatical errors, capitalization, etc.), but we ignore these differences for the end-to-end DM augmentation assessment.
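A simplified sketch of the coverage computation for one input sequence, assuming the gold and predicted DM lists are aligned per ADU slot as in the example above, is:

```python
def dm_coverage(dms):
    """Fraction of ADU slots preceded by an explicit (non-empty) DM."""
    return sum(1 for dm in dms if dm.strip()) / len(dms) if dms else 0.0

# Example from above: predictions cover all four slots, while the original text
# leaves two slots implicit.
print(dm_coverage(["Indeed", "However", "Furthermore", "In fact"]))   # 1.0
print(dm_coverage(["", "However", "", "In my opinion"]))              # 0.5
```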
§.§ Data preparation
In the following, we describe the data preparation for each corpus (we use the corpora mentioned in Section <ref>) to obtain the input and output sequences (gold data) for training the Seq2Seq models in the end-to-end DM augmentation task.
Artificial dataset
For each sample, we generate a text sequence without any DM (as shown with text sequences “2” and “3” in Figure <ref>) for the input sequence and another text sequence with DMs preceding all ADUs (text sequences “4” and “5” in Figure <ref>) for the output sequence.
Discovery
For each sample, we employ the concatenation of the sentences without any DM in between for the input sequence (i.e., “s1. s2”) and with the corresponding DM at the beginning of s2 for the output sequence (i.e., “s1. y s2”).
PDTB-3
As input, we provide a version of the original text where all explicit DMs are removed.
We also perform the following operations to obtain a grammatically sound text:
(a) if the DM is not at the beginning of a sentence and if it is not preceded by any punctuation mark, we replace the DM with a comma – other options would be possible in specific cases, but we found the comma substitution to be a reasonable option
(e.g.,
“[...] this is a pleasant rally but it's very selective [...]” is converted to “[...] this is a pleasant rally, it's very selective [...]”);
(b) if the DM occurs at the beginning of a sentence, we uppercase the content that follows immediately after the removed DM.
As output, we provide a version of the original text where the implicit DMs are also added.
Adding implicit DMs also requires an extra pre-processing step, namely: if the DM occurs at the beginning of a sentence, we lowercase the content that follows the added DM.
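A simplified sketch of these pre-processing operations for a single explicit DM (ignoring the many special cases handled in practice) could be:

```python
def remove_explicit_dm(text: str, dm_start: int, dm_end: int) -> str:
    """Remove the DM at text[dm_start:dm_end], patching punctuation and capitalization."""
    before, after = text[:dm_start].rstrip(), text[dm_end:].lstrip()
    at_sentence_start = before == "" or before.endswith((".", "!", "?"))
    if at_sentence_start:
        # (b) DM at the beginning of a sentence: uppercase the following content.
        patched = after[:1].upper() + after[1:]
        return (before + " " + patched).strip()
    if before.endswith((",", ";", ":")):
        # Already preceded by punctuation: simply drop the DM.
        return before + " " + after
    # (a) Mid-sentence DM with no preceding punctuation: replace it with a comma.
    return before + ", " + after

s = "This is a pleasant rally but it's very selective."
print(remove_explicit_dm(s, s.index("but"), s.index("but") + len("but")))
# -> "This is a pleasant rally, it's very selective."
```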
§.§ Results
Setup
For evaluation purposes, we analyze the capability of Seq2Seq models to augment a given text with DMs using two versions of the input data:
(a) the original input sequence (“Input data: original”), which contains the explicit DMs originally included in the text;
(b) the input sequence obtained after the removal of all explicit DMs (“Input data: removed DMs”).
The first setting can be seen as the standard application scenario of our proposed approach, where we ask the Seq2Seq model to augment a given text, which might or might not contain explicit DMs. We expect the model to take advantage of explicit DMs to better capture the meaning of the text and to automatically augment the text with implicit DMs.
The second setting is challenging because the Seq2Seq model is asked to augment a text deprived of explicit signals.
To remove the explicit DMs from the original input sequence (“Input data: removed DMs”), we use the annotations of ADUs provided in corpora.
As described in Section <ref>, we follow a heuristic approach <cit.> to identify the gold DMs that precede each ADU. Then, we remove the corresponding DMs from the input sequence and perform the same operations described in Section <ref> for PDTB-3 to obtain a grammatically sound text sequence (e.g.,
“A great number of plants and animals died out because they were unable to fit into the new environment.” is converted to “A great number of plants and animals died out, they were unable to fit into the new environment.”).
Explicit DMs accuracy analysis
Table <ref> details the results.
The evaluation is performed on the test set of PEC, employing the automatic evaluation metrics described in Section <ref>.
We do not employ the remaining corpora from Section <ref> because the percentage of explicit DMs is relatively low.
We start our analysis with the “Input data: removed DMs” setting.
First, we observe that the pre-trained model underperforms in the DM augmentation task in a zero-shot setting (“none” in the column “fine-tune data”) because it will not automatically augment the text with DMs without being explicitly trained to do it.
Then, we compare the scores obtained when fine-tuning the pre-trained model on each corpus individually (“Discovery”, “AD”, and “PDTB”).
We observe that the best scores on the:
(a) embeddings-based metrics (i.e., “word embs”, “retrofit embs”, and “sbert embs”) are obtained when fine-tuning on AD, which we attribute to the restricted set of DMs used in the training data and, consequently, the predictions made by the model are more controlled towards well-known DMs;
(b) “disc rel” metric is obtained fine-tuning on PDTB, which indicates that this corpus is relevant to improve the models on the “Coherence” axis;
(c) “arg marker” metric is obtained fine-tuning on Discovery.
We also provide results when fine-tuning on the combination of the three corpora in the training set;
we consider the BART, t5-base, and t5-large pre-trained models.
We make the following observations:
(i) surprisingly, fine-tuning on the combination of all datasets performs worse than the best results obtained when fine-tuning on individual datasets;
(ii) the T5 models are superior to BART in all metrics;
(iii) the smaller T5 model performs better than the larger one in most metrics (except for "arg marker"), indicating that larger models do not necessarily perform better in this task.
Regarding the “Input data: original” setting, we analyze the results obtained after fine-tuning on the combination of the three corpora.
As expected, we obtain higher scores across all metrics compared to “Input data: removed DMs”, as the Seq2Seq model can explore explicit DMs (given that we frame it as a Seq2Seq problem, the model might keep, edit, or remove explicit DMs) to capture the semantics of text and use this information to improve on implicit DM augmentation.
BART underperforms in this setting compared to the T5 models. We observe higher variations in the scores for the metrics "arg marker" and "disc rel", with the best T5 model obtaining remarkable improvements, almost 10 percentage points above the other T5 model, which itself is 30 points above BART.
Coverage analysis
Detailed results are provided in Table <ref>.
The evaluation is performed on the test set of each corpus described in Section <ref>.
For reference, the results obtained in the original data (i.e., containing only the original explicit DMs, which corresponds to “Input data: original” without the intervention of any Seq2Seq model) are: 73% for PEC, 44% for MTX, and 15% for Hotel.
We explore the same input data settings and Seq2Seq pre-trained models, and fine-tune with data previously detailed for the explicit DM accuracy analysis.
Analyzing the results obtained with the "Input data: removed DMs" setting, we observe that: the pre-trained model employed in a zero-shot setting underperforms on the task (because it will not augment the text with DMs); fine-tuning on individual corpora improves the scores (Hotel seems to be the most challenging corpus); a model trained solely on PDTB obtains the lowest scores across all corpora, while models trained on Discovery and AD perform on par on PEC and MTX, but the model trained on AD stands out with higher scores on Hotel.
The scores obtained after fine-tuning on individual corpora are superior to the reference values reported for the original data (except for PDTB on PEC), meaning that the Seq2Seq models successfully increase the coverage of ADUs being preceded by explicit DMs (even departing from the input data deprived of DMs, i.e., “Input data: removed DMs” setting).
Combining the corpora positively impacts the scores on Hotel (23 percentage points above best individual results), with similar scores obtained on PEC and MTX.
Surprisingly, one of the models again obtains lower scores when fine-tuned on the combined corpora.
For the “Input data: original” setting, we again obtain higher scores.
These improvements are smaller for Hotel because the original text is mostly deprived of explicit DMs.
Finally, we observe that in this setting, we can obtain very high coverage scores across all corpora: 98% for PEC, 95% for MTX, and 69% for Hotel.
§.§ Comparison with ChatGPT
ChatGPT[<https://openai.com/blog/chatgpt/>] <cit.> is a popular large language model built on top of the GPT-3.5 series <cit.> and optimized to interact in a conversational way.
Even though ChatGPT is publicly available, interacting with it can only be done with limited access. Consequently, we are unable to conduct large-scale experiments and fine-tune the model on specific tasks.
To compare the zero-shot performance of ChatGPT in our task, we run a small-scale experiment and compare the results obtained with the models presented in Section <ref>.
We sampled 11 essays from the test set of the PEC (totaling 60 paragraphs) for this small-scale experiment, following the same evaluation protocol described in Section <ref>.
Detailed results can be found in Appendix <ref>.
Even though our fine-tuned models surpass ChatGPT in most of the metrics (except for “disc rel”), especially in terms of coverage, it is remarkable that ChatGPT, operating in a zero-shot setting, is competitive.
With some fine-tuning, better prompt-tuning or in-context learning, we believe that ChatGPT (and similar LLMs) might excel in the proposed DM augmentation task.
§ DOWNSTREAM TASK EVALUATION
We assess the impact of the end-to-end DM augmentation approach detailed in Section <ref> on an AM downstream task, namely ADU identification and classification.
We operate on the token level with the label set:
{O} ∪ ({B, I} × T), where T is a corpus-specific set of ADU labels.
For instance, for PEC, T = {Premise, Claim, MajorClaim}.
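Concretely, the corpus-specific tag set can be built as below (using the PEC ADU labels as an example):

```python
def build_tag_set(adu_labels):
    """BIO tag set: an 'O' tag plus B-/I- tags for every corpus-specific ADU label."""
    return ["O"] + [f"{prefix}-{label}" for label in adu_labels for prefix in ("B", "I")]

print(build_tag_set(["Premise", "Claim", "MajorClaim"]))
# ['O', 'B-Premise', 'I-Premise', 'B-Claim', 'I-Claim', 'B-MajorClaim', 'I-MajorClaim']
```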
This subtask is one of the most fundamental in the AM process and is considered in many other studies <cit.>.
Its reduced complexity compared to tasks that also include relation identification makes a subsequent analysis of the impact of the proposed approach easier.
§.§ Experimental setup
We assess the impact of the proposed DM augmentation approach in the downstream task when the Seq2Seq models (described in Section <ref>) are asked to augment a text based on two versions (similar to Section <ref>) of the input data:
(a) the original input sequence (“Input data: original”);
(b) the input sequence obtained after the removal of all explicit DMs (“Input data: removed DMs”).
In Figure <ref>, we illustrate the experimental setup process using “Input data: original” (step 1), where X corresponds to the original token sequence and Y to the original label sequence (as provided in the gold annotations).
The process is similar for “Input data: removed DMs”.
In step 2 (Fig. <ref>), a Seq2Seq model performs DM augmentation.
Since Seq2Seq models work with strings and the input data is provided as token sequences, we need to detokenize the original token sequence (resulting in X_S in Fig. <ref>). All tokenization and detokenization operations are performed using sacremoses.
At the end of this step, we obtain the output provided by the Seq2Seq model (i.e., X_S^M, the text augmented with DMs), which will be used as the input data for the downstream task model (the model trained and evaluated on the downstream task) in the following steps.
Given that the output provided by the Seq2Seq model is different from the original token sequence X (based on which we have the gold annotations), we need to map the original label sequence (Y) to the modified token sequence (i.e., X^M, the token sequence obtained after tokenization of the Seq2Seq output string X_S^M). To this end, in step 3 (Fig. <ref>), we employ an annotation projection procedure, detailed in Appendix <ref>.
Based on this annotation projection procedure, we train the downstream model using the modified token sequence (X^M) and the corresponding label sequence obtained via annotation projection (i.e., Y^M, the original label sequence mapped to the modified token sequence).
Then, using the trained model, we obtain, in step 4 (Fig. <ref>), the predictions for the test set (i.e., Z^M, which also contains modified sequences).
For a fair comparison between different approaches, in step 5 (Fig. <ref>), we map back the predicted label sequence (Z^M) to the original token sequence (i.e., Z corresponds to the predicted label sequence mapped to the original token sequence), using the same annotation projection procedure.
Consequently, despite all the changes made by the Seq2Seq model, we ensure that the downstream task evaluation is performed on the same grounds for each approach.
This is crucial to obtain insightful token-level and component-level (i.e., ADUs in this downstream task) metrics.
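The exact projection procedure is described in the appendix; the sketch below illustrates the idea under the assumption that labels of unchanged tokens carry over via a token-level diff and that newly inserted tokens (the augmented DMs) default to "O".

```python
import difflib

def project_labels(orig_tokens, orig_labels, new_tokens, default="O"):
    """Map a label sequence from the original tokens onto a modified token sequence."""
    new_labels = [default] * len(new_tokens)
    matcher = difflib.SequenceMatcher(a=orig_tokens, b=new_tokens, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            new_labels[j1:j2] = orig_labels[i1:i2]
    return new_labels

orig_tokens = ["we", "should", "ban", "smoking", "."]
orig_labels = ["B-Claim", "I-Claim", "I-Claim", "I-Claim", "O"]
new_tokens = ["In", "my", "opinion", ",", "we", "should", "ban", "smoking", "."]
print(project_labels(orig_tokens, orig_labels, new_tokens))
# ['O', 'O', 'O', 'O', 'B-Claim', 'I-Claim', 'I-Claim', 'I-Claim', 'O']
```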
As evaluation metrics, we use the following:
(a) seqeval[<https://huggingface.co/spaces/evaluate-metric/seqeval>] is a popular framework for sequence labeling evaluation typically used to evaluate the performance on chunking tasks such as named-entity recognition and semantic role labeling;
(b) flat token-level macro-F1 as implemented in scikit-learn <cit.>.
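Both metrics can be computed as follows (toy gold/predicted sequences for illustration):

```python
from seqeval.metrics import f1_score as seqeval_f1
from sklearn.metrics import f1_score as sk_f1

gold = [["O", "B-Claim", "I-Claim", "O", "B-Premise", "I-Premise"]]
pred = [["O", "B-Claim", "I-Claim", "O", "B-Premise", "O"]]

# (a) seqeval: span-level F1 over complete ADUs.
print("seqeval f1:", seqeval_f1(gold, pred))

# (b) flat token-level macro-F1, ignoring the sequence structure.
flat_gold = [t for seq in gold for t in seq]
flat_pred = [t for seq in pred for t in seq]
print("token macro-f1:", sk_f1(flat_gold, flat_pred, average="macro"))
```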
For the downstream task, we employ a BERT model (“bert-base-cased”) following a typical sequence labeling approach.
We use default “AutoModelForTokenClassification” and “TrainingArguments” parameters provided by the HuggingFace library, except for the following: learning rate = 2e-5, weight decay = 0.01, max training epochs = 50, and evaluation metric (to select the best epoch based on the dev set) is token-level macro f1-score (similar to prior work, e.g., <cit.>).
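A minimal sketch of this sequence labeling setup with the non-default hyperparameters listed above is shown next; the label set, output directory, and dataset/compute_metrics preparation are placeholders for what Figure <ref> and the appendix describe.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["O", "B-Premise", "I-Premise", "B-Claim", "I-Claim", "B-MajorClaim", "I-MajorClaim"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

args = TrainingArguments(
    output_dir="adu-tagger",           # placeholder
    learning_rate=2e-5,                # non-default values reported above
    weight_decay=0.01,
    num_train_epochs=50,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="macro_f1",  # assumes a compute_metrics fn reporting this key
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   tokenizer=tokenizer, compute_metrics=...)  # datasets prepared as in Fig. <ref>
# trainer.train()
```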
§.§ Results
Table <ref> shows the results obtained for the downstream task.
The line identified with a "none" in the column "DM augmentation model" refers to the scores obtained by a baseline model in the corresponding input data setting, in which the downstream model receives as input the data without the intervention of any DM augmentation Seq2Seq model; for the remaining lines, the input sequence provided to the downstream model was previously augmented using a pre-trained Seq2Seq model (BART, t5-base, or t5-large) fine-tuned on the combination of the three corpora described in Section <ref> ("Discovery + AD + PDTB").
For “Input data: original”, the scores in “none” correspond to the current state-of-the-art using a BERT model.
For “Input data: removed DMs”, the scores in “none” correspond to a very challenging setup for sequence labeling models because they are asked to perform the downstream task without explicit signals.
We start our analysis by comparing the baseline models, i.e., rows (1) and (A).
When the DMs are removed from the data, row (1), we observe an expected drop in the scores in all the metrics on the PEC and MTX corpora: the task is more challenging without DMs.
Given that the original data in the Hotel corpus is already scarce regarding DMs, we observe slightly lower scores for the metric “token macro f1” with the removal of DMs. Surprisingly, we observe higher scores (by a small margin) for the remaining metrics.
Most of the results obtained from DM augmentation models that receive as input the original data (“Input data: original”; rows (A, B, C, D)) are superior to the scores obtained in the setting “Input data: removed DMs” (rows (1, 2, 3, 4)).
However, even though in Section <ref> we observed clear improvements in all metrics for the setting “Input data: original”, these improvements are reflected with limited effect in the downstream task.
Comparing row (1), which is the baseline with DMs removed, to rows (2, 3, 4), which give the results after adding DMs with the DM augmentation models previously described, we observe:
(i) consistent improvements for the MTX dataset, i.e., the results of (2, 3, 4) are better than (1) in all cases;
(ii) for PEC, all rows (1, 2, 3, 4) have mostly similar scores across the metrics;
(iii) only one of the augmentation models clearly improves over (1) for Hotel, and only for "token accuracy".
Comparing row (A), which is the baseline with original data, to (B, C, D), which give the results after performing DM augmentation on top of the original data with the Seq2Seq models previously described, we observe:
(i) the most consistent improvements are obtained again for the MTX dataset, where we observe a 3 percentage point improvement in "seqeval f1" over the baseline for the best augmentation model;
(ii) one of the augmentation models improves upon the baseline for Hotel according to 2 out of 3 metrics;
(iii) there are no improvements for PEC, the baseline performs better according to all three metrics.
To summarize, we observe that in a downstream task, augmenting DMs automatically with recent LMs can be beneficial in some, but not all, cases.
We believe that with the advent of larger LMs (such as the recent ChatGPT), the capability of these models to perform DM augmentation can be decisively improved in the near future, with a potential impact on downstream tasks (such as AM tasks).
Overall, our findings not only show the impact of a DM augmentation approach for a downstream task but also demonstrate how the proposed approach can be employed to improve the readability and transparency of argument exposition (i.e., introducing explicit signals that clearly unveil the presence and positioning of the ADUs conveyed in the text).
Finally, we would like to reinforce that the DM augmentation models were fine-tuned on datasets (i.e., “Discovery + AD + PDTB”) that
(a) were not manually annotated for AM tasks and
(b) are different from the downstream task evaluation datasets (i.e., PEC, MTX, and Hotel).
Consequently, our results indicate that DM augmentation models can be trained on external data to automatically add useful DMs (in some cases, as previously detailed) for downstream task models, despite the differences in the DM augmentation training data and the downstream evaluation data (e.g., domain shift).
§.§ Error analysis
We manually sampled some data instances from each corpus (Section <ref>) and analyzed the predictions (token-level, for the ADU identification and classification task) made by the downstream task models.
Furthermore, we also analyzed the DMs automatically added by the Seq2Seq models, assessing whether it is possible to find associations between the DMs that precede the ADUs and the corresponding ADU labels.
In Appendix <ref>, we provide a detailed analysis for each corpus and show some examples.
Overall, we observe that DM augmentation models performed well in terms of coverage, augmenting the text with DMs at appropriate locations (i.e., preceding the DMs). This observation is in line with the conclusions taken from the “Coverage analysis” in Section <ref>.
However, we observed that DMs commonly associated with specific ADU labels (e.g., "because" and "moreover", typically associated with premises) are not consistently used by the downstream model to predict the corresponding ADU label (i.e., the predicted ADU label varies in the presence of these DMs).
We attribute this to the lack of consistency (we observed, for all corpora, that some DMs are associated with different ADU labels) and the limited variability of the augmented DMs (e.g., on PEC, in the presence of augmented DMs, some ADU labels are not preceded by clear indicators, whereas in the original text such indicators are available and exploited by the downstream model).
We conclude that these limitations in the quality of the predictions provided by the DM augmentation models hindered the association between DMs and ADU labels that we expected the downstream model to learn.
Based on this analysis, our assessment is that erroneous predictions of DMs might interfere with the interpretation of the arguments exposed in the text (and, in some cases, might even mislead the downstream model).
This is an expected drawback of a pipeline architecture (i.e., end-to-end DM augmentation followed by the downstream task).
However, on the other hand, the potential of DM augmentation approaches is evident, as the presence of coherent and grammatically correct DMs can clearly improve the readability of the text and of argument exposition in particular (as illustrated in the detailed analysis provided in Appendix <ref>).
§ RELATED WORK
Argument mining
Given the complexity of the task, it is common to divide it into a set of subtasks <cit.>, namely: ADU identification, ADU classification (e.g., premise vs. claim), Argumentative Relation Identification (ARI, e.g., link vs. no-link), and Argumentative Relation Classification (ARC, e.g., support vs. attack).
In this paper, we focus on ADU identification and classification as downstream tasks (Section <ref>).
The standard BiLSTM with a CRF output layer emerged as the state-of-the-art architecture for token-level sequence tagging, including argument mining <cit.>.
Current state-of-the-art on ADU identification and classification employs BERT <cit.> or Longformer <cit.> as base encoders (in some cases, with a CRF layer on top), typically accompanied with specific architectures to tackle a target corpus or task-specific challenges <cit.>.
We follow these recent trends by employing a BERT-based sequence labeling model. Since our goal is to assess the impact of the proposed DM augmentation approach, we keep the architecture as simple and generic as possible (standard BERT encoder with a token classification head), but competitive with recent state-of-the-art (as detailed in Section <ref>).
Some prior work also studies AM across different corpora.
Given the variability of annotation schemas, dealing with different conceptualizations (such as tree vs. graph-based structures, ADU and relation labels, ADU boundaries, among others) is a common challenge <cit.>.
Besides the variability of annotated resources, corpora tend to be small <cit.>.
To overcome these challenges, some approaches explored transfer learning:
(a) across different corpora <cit.>;
(b) from auxiliary tasks, such as discourse parsing <cit.> and fine-tuning pre-trained LMs on large amounts of unlabeled discussion threads from Reddit <cit.>; and
(c) from corpora in different languages <cit.>.
Exploring additional training data is pointed out as beneficial across different subtasks, especially under low-resource settings; however, domain-shift and differences in annotation schemas are typically referred to as the main challenges.
Our approach differs by proposing DM augmentation to improve the ability of AM models across different genres, without the need to devise transfer learning approaches to deal with different annotation schemas: given that the DM augmentation follows a text-to-text approach, we can employ corpus-specific models to address the task for each corpus.
The role of discourse context
As a discourse parsing task, prior work on AM looked at the intersection between argumentation structures and existing discourse parsing theories (e.g., RST, PDTB), with several studies pointing out that improvements can be obtained for AM tasks by incorporating insights from related discourse parsing tasks <cit.>.
From the state-of-the-art in discourse parsing tasks, it is well known that discourse markers play an important role as strong indicators for discourse relations <cit.>.
In the field of AM, such lexical clues have also been explored in prior work, either via handcrafted features <cit.> or by encoding these representations in neural-based architectures <cit.>.
Including DMs in their span representations, <cit.> report state-of-the-art results for ADU classification, ARI, and ARC.
These works rely on the presence of explicit DMs anteceding ADUs, which is a viable assumption for some of the corpora containing texts written in English.
To obtain a system that is robust either in the presence or absence of such lexical clues, we propose to automatically augment the text with the missing DMs using state-of-the-art Seq2Seq models.
Our proposal complements prior work findings (e.g., including DMs in span representations improves performance across different subtasks) as we propose a text-to-text approach that can be employed to augment the input text provided to state-of-the-art models.
Aligned with our proposal, <cit.> frames ARC as a plausibility ranking prediction task. The notion of plausibility comes from adding DMs (from a handcrafted set of 4 possible DM pairs) of different categories (support and attack) between two ADUs and determining which of them is more plausible.
They report promising results for this subtask, demonstrating that explicitation of DMs can be a feasible approach to tackle some subtasks.
We aim to go one step further by: (a) employing language models to predict plausible DMs (instead of using a handcrafted set of DMs) and (b) proposing a more realistic DM augmentation scenario, where we receive as input raw text and we do not assume that the ADU boundaries are known.
However, relying on these DMs also has downsides.
In a different line of work, <cit.> show that the models they employ to address the task of ARC tend to focus on DMs instead of the actual ADU content. They argue that such a system can be easily fooled in cross-document settings (i.e., ADUs belonging to a given argument can be retrieved from different documents), proposing a context-agnostic model that is constrained to encode only the actual ADU content as an alternative.
We believe that our approach addresses these concerns as follows:
(a) for the tasks addressed in this work, arguments are constrained to document boundaries (cross-document settings are out of scope);
(b) given that the DM augmentation models are automatically employed for each document, we hypothesize that the models will take into account the surrounding context and adapt the DMs predictions accordingly (consequently, the downstream model can rely on them).
Explicit vs. Implicit relations in discourse parsing
In discourse parsing, it is well-known that there exists a clear gap between explicit (relations that are marked explicitly with a DM) and implicit (relation between two spans of text exists, but is not marked explicitly with a DM) relation classification, namely, 90% vs. 50% of accuracy (respectively) in 4-way classification (as indicated by <cit.>).
To improve discourse relation parsing, several works focused on enhancing their systems for implicit relation classification:
removing DMs from explicit relations for implicit relation classification data augmentation <cit.>;
framing explicit vs. implicit relation classification as a domain adaptation problem <cit.>;
learning sentence representations by exploring automatically collected large-scale datasets <cit.>;
multi-task learning <cit.>;
automatic explicitation of implicit DMs followed by explicit relation classification <cit.>.
To close the gap between explicit and implicit DMs, our approach follows the line of work on explicitation.
However, we work in a more challenging scenario, where the DM augmentation and downstream tasks are performed at the paragraph level (i.e., from raw text instead of a sentence-pair classification task that assumes that the ADUs are given).
§ CONCLUSIONS
In this paper, we propose to automatically augment a text with DMs to improve the robustness of argument mining systems across different genres.
First, we describe a synthetic template-based test suite created to assess the capabilities of recent LMs to predict DMs and whether LMs are sensitive to specific semantically-critical edits in the text.
We show that LMs underperform on this task in a zero-shot setting, but that the performance can be improved with some fine-tuning.
Then, we assess whether LMs can be employed to automatically augment a text with coherent and grammatically correct DMs in an end-to-end setting.
We collect a heterogeneous collection of DM-related datasets and show that fine-tuning LMs in this collection improves the ability of LMs in this task.
Finally, we evaluate the impact of augmented DMs performed by the proposed end-to-end DM augmentation models on the performance of a downstream model (across different corpora).
We obtained mixed results across different corpora.
Our analysis indicates that the DM augmentation models performed well in terms of coverage; however, the limited consistency and variability of the augmented DMs constrained the association between DMs and ADU labels that we expected the downstream model to learn.
In future work, we would like to assess how recent LLMs perform in these tasks. Additionally, we would like to increase and improve the variability and quality of the heterogeneous collection of data instances used to fine-tune the end-to-end DM augmentation models (possibly including data related to tasks that might inform the models about DMs that are more predominant in domains), as improving in this axis might have a direct impact in the downstream task performance.
We believe that our findings are evidence of the potential of DM augmentation approaches. DM augmentation models can be deployed to improve the readability and transparency of arguments exposed in written text, such as embedding this approach in assistive writing tools.
§ LIMITATIONS
One of the anchors of this work is evidence from prior work that DMs can play an important role to identify and classify ADUs; prior work is mostly based on DMs preceding the ADUs.
Consequently, we focus on DMs preceding the ADUs.
We note that DMs following ADUs might also occur in natural language and might be indicative of ADU roles.
However, this phenomenon is less frequent in natural language and also less studied in related work <cit.>.
The Artificial Dataset proposed in Section <ref> follows a template-based approach, instantiated with examples extracted from the CoPAs provided by <cit.>.
While some control over linguistic phenomena occurring in the dataset was important to investigate our hypothesis, the downside is a lack of diversity.
Nonetheless, we believe that the dataset contains enough diversity for the purposes studied in this work (e.g., different topics, several parameters that result in different sentence structures, etc.). Future work might include expanding the dataset with more templates and data instances.
Our proposed approach follows a pipeline architecture: end-to-end DM augmentation followed by the downstream task. Consequently, erroneous predictions made by the DM augmentation model might mislead the downstream task model.
Furthermore, the end-to-end DM augmentation employs a Seq2Seq model.
Even though these models were trained to add DMs without changing further content, it might happen in some cases that the original ADU content is changed by the model.
We foresee that, in extreme cases, these edits might lead to a different argument content being expressed (e.g., changing the stance, adding/removing negation expressions, etc.); however, we note that we did not observe this in our experiments.
In a few cases, we observed minor edits being performed to the content of the ADUs, mostly related to grammatical corrections.
We point out that despite the limited effectiveness of the proposed DM augmentation approach in improving the downstream task scores in some settings, our proposal is grounded on a well-motivated and promising research hypothesis, solid experimental setup, and detailed error analysis that we hope can guide future research.
Similar to recent trends in the community (Insights NLP workshop <cit.>, ICBINB Neurips workshop and initiative[<http://icbinb.cc/>], etc.), we believe that well-motivated and well-executed research can also contribute to the progress of science, going beyond the current emphasis on state-of-the-art results.
§ ACKNOWLEDGMENTS
Gil Rocha is supported by a PhD grant (SFRH/BD/140125/2018) from Fundação para a Ciência e a Tecnologia (FCT).
This work was supported by LIACC, funded by national funds through FCT/MCTES (PIDDAC), with reference UIDB/00027/2020.
The NLLG group is supported by the BMBF grant “Metrics4NLG” and the DFG Heisenberg Grant EG 375/5–1.
§ ARTIFICIAL DATASET - TEMPLATES
The templates are based on a set of configurable parameters, namely:
* number of ADUs ∈{2, 3}: sample might contain 2 ADUs following the structure “dm_1 X_1, dm_2 X_2.”, where one of the ADUs (X_1 or X_2) is a claim and the other a premise; or contain 3 ADUs (claim and both premises) following the structure “dm_1 X_1, dm_2 X_2. dm_3, X_3.”;
* stance role ∈{original, opposite}: each sample contains a single claim, which might employ the “original” or “opposite” stance;
* claim position ∈{1, 2}: the claim is always in the first sentence, either in the beginning (1) or end (2) of the sentence (to avoid the unusual text sequence where we have two premises in a single sentence followed by an isolated claim in the second sentence);
* premise role ∈{support, attack}: only used when “number of ADUs” = 2; dictates which of the premises is chosen;
* supportive premise position ∈{1, 2}: only used when “number of ADUs” = 3, indicates whether the supportive premise should occur before (1) the attacking premise or after (2);
* prediction type ∈{dm_1, dm_2, dm_3}: let dm_i be the option chosen, then the mask token will be placed in DM preceding the ADU in position i (if “number of ADUs” = 2 only dm_1 and dm_2 are allowed).
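For illustration, the sketch below enumerates the valid parameter combinations implied by this list; the parameter names and the constraint handling are our own rendering, not the actual generation code.

# Sketch: enumerate the valid template configurations of the Artificial Dataset.
from itertools import product

configs = []
for n_adus, stance, claim_pos in product([2, 3], ["original", "opposite"], [1, 2]):
    if n_adus == 2:
        for premise_role, pred in product(["support", "attack"], ["dm_1", "dm_2"]):
            configs.append(dict(n_adus=n_adus, stance=stance, claim_pos=claim_pos,
                                premise_role=premise_role, prediction=pred))
    else:
        for support_pos, pred in product([1, 2], ["dm_1", "dm_2", "dm_3"]):
            configs.append(dict(n_adus=n_adus, stance=stance, claim_pos=claim_pos,
                                support_pos=support_pos, prediction=pred))

print(len(configs))   # 16 two-ADU + 24 three-ADU configurations = 40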
§ ARTIFICIAL DATASET - DMS SET
DMs are added based on the role of the ADU that they precede, using the following fixed set of DMs.
If preceding the claim, then we add one the following: “I think that”, “in my opinion”, or “I believe that”.
For the supportive premise, if in position dm_3 we add one of the following: “moreover”, “furthermore”, or “indeed”. Otherwise, one of the following: “because”, “since”, or “given that”.
For the attacking premise, in dm_3 we add: “however”, “on the other hand”, or “conversely”. Otherwise, “although”, “even though”, or “even if”.
§ ARTIFICIAL DATASET - INSTANTIATION PROCEDURE BASED ON COPAS
CoPAs are sets of propositions that are often used when debating a recurring theme (e.g., the premises mentioned in Section <ref> and used in Figure <ref> are related to the theme “Clean energy”).
For each CoPA, <cit.> provide two propositions that people tend to agree as supporting different points of view for a given theme.
We use these propositions as supportive and attacking premises towards a given claim.
Each CoPA is also associated with a set of motions to which the corresponding theme is relevant.
A motion is defined as a pair ⟨ action, topic ⟩, where an action is a term coming from a closed set of allowed actions (e.g., abolish, adopt, legalize, etc.), and a topic is a Wikipedia title.
For example, for the theme “Clean energy”, we can find the motion ⟨ introduce, carbon taxes ⟩, which can be written as “we should introduce carbon taxes”.
We use these motions as claims in our Artificial dataset.
Based on these instantiations and the set of templates, we can generate different samples that resemble real-world arguments.
The “opposite stance” is not provided in the original resources from <cit.>. For a specific motion, we manually selected the action (from the set of allowed actions) that could be employed as “opposite stance” (e.g., ⟨ abolish, carbon taxes ⟩).
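The sketch below illustrates this instantiation step for a single 2-ADU template; the CoPA propositions, the motion, and the DM inventory shown here are hypothetical stand-ins for the released resources.

# Sketch: instantiate one Artificial Dataset sample (2-ADU template, claim first).
# The CoPA-like record below is a hypothetical example, not the original resource.
import random

copa = {
    "claim": "we should introduce carbon taxes",            # motion <introduce, carbon taxes>
    "support": "clean energy reduces emissions",
    "attack": "carbon taxes raise consumer prices",
}
claim_dms = ["I think that", "in my opinion", "I believe that"]
premise_dms = {"support": ["because", "since", "given that"],
               "attack": ["although", "even though", "even if"]}

def build_sample(premise_role, masked_dm):
    dm1 = random.choice(claim_dms)
    dm2 = random.choice(premise_dms[premise_role])
    gold = dm1 if masked_dm == "dm_1" else dm2
    if masked_dm == "dm_1":
        dm1 = "<mask>"
    else:
        dm2 = "<mask>"
    return f"{dm1} {copa['claim']}, {dm2} {copa[premise_role]}.", gold

print(build_sample("attack", "dm_2"))
# e.g. ('in my opinion we should introduce carbon taxes, <mask> carbon taxes raise consumer prices.', 'even though')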
§ AUTOMATIC EVALUATION METRICS
The evaluation metrics employed to assess the quality of the predicted DMs are the following:
* word embeddings text similarity (“word embs”): Cosine similarity using an average of word vectors. Based on pre-trained embeddings “en_core_web_lg” from Spacy library [<https://spacy.io/>];
* retrofitted word embeddings text similarity (“retrofit embs”): Cosine similarity using an average of word vectors. Based on pre-trained embeddings from LEAR [<https://github.com/nmrksic/lear>];
* sentence embeddings text similarity (“sbert embs”): we use the pre-trained sentence embeddings “all-mpnet-base-v2” from SBERT library <cit.>, indicated as the model with the highest average performance on encoding sentences over 14 diverse tasks from different domains. To compare gold and predicted DMs representations, we use cosine similarity;
* argument marker sense (“arg marker”): list of 115 DMs from <cit.>. Senses are divided in the following categories: “forward”, “backward”, “thesis”, and “rebuttal” indicators. Each gold and predicted DM is mapped to one of the senses based on a strict lexical match with the list of DMs available for each sense. If the DM is not matched, then we assign the label “none”. If the gold DM is “none”, we do not consider this instance in the evaluation (the DM is out of the scope for the list of DMs available in the senses list, so we cannot make a concrete comparison with the predicted DM);
* discourse relation sense (“disc rel”): we use a lexicon of 149 English DMs called “DiMLex-Eng” <cit.>. These DMs were extracted from PDTB 2.0 <cit.>, RST-SC <cit.>, and Relational Indicator List <cit.>. Each DM maps to a set of possible senses. For a given DM, we choose the sense that the DM occurs more frequently. Senses are organized hierarchically in 3 levels (e.g., the DM “consequently” is mapped to the sense “Contingency.Cause.Result”). In this work, we consider only the first level of the senses (i.e., “Comparison”, “Contingency”, “Expansion”, and “Temporal”) as a simplification and to avoid the propagation of errors between levels (i.e., an error in level 1 entails an error in level 2, and so on). Each prediction is mapped to one of the senses based on a strict lexical match. If the word is not matched, then we assign the label “none”;
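The embedding-based metrics above amount to a cosine similarity between representations of the gold and predicted DMs; a minimal sketch (assuming the listed models are installed locally) is given below.

# Sketch: embedding-based similarity between gold and predicted DMs.
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_lg")                    # "word embs"
sbert = SentenceTransformer("all-mpnet-base-v2")      # "sbert embs"

def word_emb_similarity(gold_dm, pred_dm):
    # spaCy averages the word vectors and returns their cosine similarity
    return nlp(gold_dm).similarity(nlp(pred_dm))

def sbert_similarity(gold_dm, pred_dm):
    emb = sbert.encode([gold_dm, pred_dm], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(word_emb_similarity("however", "on the other hand"))
print(sbert_similarity("however", "on the other hand"))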
§ FILL-IN-THE-MASK DISCOURSE MARKER PREDICTION - ERROR ANALYSIS
Table <ref> shows some instances and the corresponding gold DMs (accompanied by the discourse-level senses “arg marker” and “disc rel” in parenthesis) from the test set of the Artificial Dataset.
In this sample, we highlight some of the challenging instances that can be found in the Artificial dataset.
More concretely, instances with id 1 and 2 belong to the same instantiation of the core elements, but the template used to generate the instances differs in a single parameter (i.e., the parameter “premise role” which changes the premise that is presented, requiring the model to predict different DMs); instances 3 and 4 belong to the same instantiation of the core elements, but the template used to generate the instances differs in two parameters (i.e., “stance role” that dictates the stance of the claim and “supportive premise position” that dictates the position of the supportive premise, requiring the model to predict the same DM in both instances); and so on.
The predictions made by the zero-shot models and fine-tuned model (described in Section <ref>) for the corresponding instances are shown in Table <ref>.
For example, comparing the predictions made for instances with id 1 and 2, we can observe that both zero-shot and do not change the semantics of the prediction even though the differences in content require such changes, while both zero-shot BART-based models and are robust and change the prediction accordingly.
§ HUMAN EVALUATION - ADDITIONAL DETAILS
Table <ref> shows some of the text sequences analyzed in the human evaluation.
§ END-TO-END DM AUGMENTATION RESULTS - COMPARISON WITH CHATGPT
Table <ref> shows the results obtained for the small-scale end-to-end DM augmentation experiment with ChatGPT, including both the explicit DMs accuracy and coverage analysis.
§ ANNOTATION PROJECTION
To map the label sequence from the original sequence to the modified sequence, we implement the Needleman-Wunsch algorithm <cit.>, a well-known sequence alignment algorithm.
As input, it receives the original and modified token sequences.
The output is an alignment of the token sequences, token by token, where the goal is to optimize a global score.
This algorithm might include a special token (the “gap” token) in the output sequences.
Gap tokens are inserted to optimize the alignment of identical tokens in successive sequences.
The global score attributed to a given alignment is based on a scoring system. We use default values: match score (tokens are identical) = 1, mismatch score (tokens are different but aligned to optimize alignment sequence) = -1, gap penalty (gap token was introduced in one of the sequences) = -1. To determine whether two tokens are identical, we use strict lexical match (case insensitive).
Using the aligned sequences, we map the labels from the original to the modified token sequence.
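A minimal sketch of the alignment and label projection is given below, using the scoring values stated above; assigning the “O” label to inserted tokens (such as augmented DMs) and keeping the original label on mismatched tokens are simplifying assumptions of the sketch.

# Sketch: Needleman-Wunsch alignment (match=1, mismatch=-1, gap=-1, case-insensitive)
# followed by projection of BIO labels from the original to the modified tokens.
GAP = "<gap>"

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1].lower() == b[j - 1].lower() else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    align_a, align_b, i, j = [], [], n, m        # traceback
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1].lower() == b[j - 1].lower() else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + s:
            align_a.append(a[i - 1]); align_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            align_a.append(a[i - 1]); align_b.append(GAP); i -= 1
        else:
            align_a.append(GAP); align_b.append(b[j - 1]); j -= 1
    return align_a[::-1], align_b[::-1]

def project_labels(orig_tokens, orig_labels, new_tokens):
    align_o, align_n = needleman_wunsch(orig_tokens, new_tokens)
    projected, k = [], 0
    for tok_o, tok_n in zip(align_o, align_n):
        if tok_n == GAP:                 # original token without counterpart: drop its label
            k += 1
        elif tok_o == GAP:               # inserted token (e.g. an augmented DM)
            projected.append("O")
        else:                            # matched or minimally edited token
            projected.append(orig_labels[k]); k += 1
    return projected

orig = "education should be free".split()
new = "I believe that education should be free".split()
print(project_labels(orig, ["B-Claim", "I-Claim", "I-Claim", "I-Claim"], new))
# ['O', 'O', 'O', 'B-Claim', 'I-Claim', 'I-Claim', 'I-Claim']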
Figure <ref> illustrates some examples of the output obtained when employing the annotation projection procedure described in this section to the three corpora explored in this work.
“Original text” corresponds to the original text from which gold annotations for ADU identification and classification are provided in the corpora. “Text augmented with DMs + Annotation Projection” corresponds to the text obtained after performing DM augmentation (using the pre-trained Seq2Seq model fine-tuned on the combination of the corpora “Discovery + AD + PDTB”, as described in Section <ref>) when we provide as input the text deprived of DMs (i.e., “Input data: removed DMs”) and the corresponding ADU identification and classification labels obtained after performing annotation projection.
The underlined text highlights differences between the original and DM augmented text. These differences require a projection of the original label sequence to the label sequence for the corresponding DM augmented text, which is performed using the proposed annotation projection procedure.
§ DOWNSTREAM TASK EVALUATION - ERROR ANALYSIS
We show some examples of the gold data and predictions made by the downstream task models for the corpora explored in this work.
For each example, we provide:
* “Gold”: the gold data, including the ADU boundaries (square brackets) and the ADU labels (acronyms in subscript);
* “Input data: X (Y)”: where “X” indicates the version of the input data provided to the DM augmentation model, and “Y” indicates whether we perform DM augmentation or not (“none” indicates that we do not perform DM augmentation and indicates that we perform DM augmentation using the pre-trained Seq2Seq model fine-tuned on the combination of the corpora “Discovery + AD + PDTB”).
Figures <ref> and <ref> show two examples from PEC.
Figure <ref> shows a paragraph containing a and in the “Gold” data annotations.
We observe that in the “Input data: original (none)” setup, the model predicts frequently in the presence of a DM that can be mapped to the “arg marker” sense “thesis” (e.g., “in conclusion”, “in my opinion”, “as far as I am concerned”, “I believe that”, etc.).
Similar patterns can be observed in the “Gold” data annotations.
We were not able to find similar associations in the “Input data: removed DMs ()” setup, for instance.
As illustrated in “Input data: removed DMs (none)”, the distinction between and is very challenging in the absence of such explicit signals.
The distinction between and can also be challenging, as exemplified in Figure <ref>.
We observe that some DMs might be associated to ADU labels more strongly than others (e.g., in Figure <ref>, “therefore” is associated to predictions, while “firstly” cannot be associated to a particular label).
Surprisingly, we observed that some DMs that are commonly associated as indicators of either or ADUs (e.g., “because” and “moreover” typically associated to ) are not consistently used by the downstream model to predict the corresponding ADU label accordingly.
Figure <ref> shows an example from MTX.
Regarding the setups containing the original data (i.e., “Gold” annotations and the predictions made for “Input data: original (none)”), besides a single occurrence of “therefore” and “nevertheless”, all the remaining do not contain a DM preceding them (this analysis is constrained to the test set).
Some of the ADUs labeled as are preceded with DMs (most common DMs are: “and” (6), “but” (10), “yet” (4), and “besides” (3)), even though most of them (44) are not preceded by a DM (numbers in parentheses correspond to the number of occurrences in the test set for the “Gold” annotations, similar numbers are obtained for “Input data: original (none)”).
DM augmentation approaches performed well in terms of coverage, with most of the ADUs being preceded by DMs.
We can observe in Figure <ref> that some ADU labels become more evident after the DM augmentation performed by the models proposed in this work (“Input data: removed DMs ()” and “Input data: original ()”), such as the presence of the DM “clearly” indicating and the presence of “besides”, “because” or “but” indicating .
Finally, Figure <ref> shows an example from Hotel.
Similar to the observations made for MTX, in the setups containing the original data (i.e., “Gold” annotations and the predictions made for “Input data: original (none)”), most ADUs are not preceded by DMs.
The only exception is the DM “and” that occurs with some frequency preceding (10 out of 199 ADUs labeled as ) and (4 out of 41).
For instance, in Figure <ref>, 9 ADUs were annotated and none of them is preceded by a DM; making the annotation of ADUs (arguably) very challenging.
Despite the lack of explicit clues, downstream models perform relatively well in this example, only missing the two gold s (not identified as an ADU in one of the cases and predicted as in the other case) and erroneously labeling as the only sentence in the gold data that is not annotated as an ADU.
Also similar to MTX, DM augmentation approaches performed well in terms of coverage, with most ADUs being preceded by DMs.
However, as observed in Figure <ref>, the impact on the downstream model predictions is small (the predictions for all the setups are similar, the only exception is the extra split on “so was the bathroom” performed in “Input data: removed DMs ()”, even though this span of text is similar in all setups).
We point out that, particularly in this text genre, adding DMs to signal the presence of ADUs might contribute to improving the readability of arguments exposed in the text, as exemplified by the DM augmentation performed by the models proposed in this work (“Input data: removed DMs ()” and “Input data: original ()” in Figure <ref>).
|
http://arxiv.org/abs/2306.01578v1
|
20230602144553
|
Forward Neutrinos from Charm at Large Hadron Collider
|
[
"Atri Bhattacharya",
"Felix Kling",
"Ina Sarcevic",
"Anna M. Stasto"
] |
hep-ph
|
[
"hep-ph",
"hep-ex"
] |
|
http://arxiv.org/abs/2306.06243v1
|
20230609203611
|
Maximum number of symmetric extensions in the random graph
|
[
"Stepan Vakhrushev",
"Maksim Zhukovskii"
] |
math.CO
|
[
"math.CO",
"math.PR"
] |
It is known that after an appropriate rescaling the maximum degree of the binomial random graph converges in distribution to a Gumbel random variable. The same holds true for the maximum number of common neighbours of a k-vertex set, and for the maximum number of s-cliques sharing a single vertex. Can these results be generalised to the maximum number of extensions of a k-vertex set for any given way of extending of a k-vertex set by an s-vertex set? In this paper, we generalise the above mentioned results to a class of “symmetric extensions” and show that the limit distribution is not necessarily from the Gumbel family.
§ INTRODUCTION
Bollobás <cit.> and Ivchenko <cit.> proved that under some restrictions on the edge probability p, the (appropriately rescaled) maximum degree converges in distribution to a Gumbel random variable.
Let p=const∈(0,1). Let Δ_n be the maximum degree of G(n, p). For every integer n ≥ 2 set
a_n = pn + √(2p(1-p)n ln n)(1 - lnln n/4ln n - ln (2√(π))/2 ln n),
b_n = √(p(1-p)n/2 ln n).
Then
Δ_n - a_n/b_nd→η, n→∞,
where η has cdf
e^-e^-x (i.e. it is a standard Gumbel random variable), and d→ denotes convergence in distribution.
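As a sanity check, this convergence can be observed numerically. The following Monte Carlo sketch (the choices of n, p and the number of trials are arbitrary) compares the empirical distribution of the rescaled maximum degree of G(n, p) with the standard Gumbel cdf.

# Sketch: rescaled maximum degree of G(n, p) vs. the standard Gumbel cdf.
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 2000, 0.5, 200

log_n = np.log(n)
a_n = p * n + np.sqrt(2 * p * (1 - p) * n * log_n) * (
    1 - np.log(log_n) / (4 * log_n) - np.log(2 * np.sqrt(np.pi)) / (2 * log_n))
b_n = np.sqrt(p * (1 - p) * n / (2 * log_n))

samples = []
for _ in range(trials):
    upper = np.triu(rng.random((n, n)) < p, 1)   # adjacency decided on the upper triangle
    adj = upper | upper.T                        # symmetric, no loops
    samples.append((adj.sum(axis=1).max() - a_n) / b_n)
samples = np.array(samples)

for x in np.linspace(-2.0, 4.0, 7):
    print(f"x={x:5.2f}  empirical={np.mean(samples <= x):.3f}  Gumbel={np.exp(-np.exp(-x)):.3f}")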
This result was extended by Ivchenko <cit.> to p=o(1) such that pn/ln^3 n→∞.
The central result of the extreme value theory is the Fisher–Tippett–Gnedenko theorem <cit.> claiming that, if, for an infinite sequence of independent and identically distributed (i.i.d.) random variables {ξ_i}_i∈ℕ and some non-random a_n, b_n, the distribution of ξ^(n) - a_n/b_n converges weakly to a non-degenerate distribution (here, as usual, ξ^(n)=max{ξ_1,…,ξ_n}), then this limit distribution belongs to one of the following three families of distributions: Gumbel, Weibull or Fréchet, and the conditions for the limit distribution to belong to one of these families are known. Note that this result is not applicable to the degree sequences of random graphs since they constitute triangular arrays of dependent random variables. However, the degree sequence can be approximated by independent binomial random variables in the following sense. The degree of a fixed vertex of G(n, p) has the binomial distribution Bin(n-1, p) with n-1 trials and success probability p. In <cit.> it was proven that, for the maximum D_N of N independent binomial random variables ξ_N,1, ξ_N,2 , … , ξ_N,N∼Bin(M, p), where M = M(N) = ω(ln^3 N), p = const, and for every x ∈ℝ, the following is true:
Pr(D_N ≤ pM + √(2p(1-p)M ln N)[1 - lnln N/4ln N - 2√(π)/2ln N + x/2ln N]) → e^-e^-xas N →∞.
It is easy to see that in the case M = n - 1, N = n this result gives the same scaling constants and limit distribution as in Theorem <ref>. This is not unexpected since the degrees of every pair of vertices in G(n, p) are almost independent — the dependency is only due to the single adjacency relation between the two vertices.
However, as we will see below, the limit distributions of similar statistics in G(n,p) not necessarily belong to any of the above three families of distributions.
To work with dependent random variables (degrees), Bollobás used the method of moments. Namely, let us denote by X the number of vertices with degree greater than a_n + b_n x. It turns out that the r-th moment of the random variable X converges to the r-th moment of the Poisson random variable with mean e^-x. From this it follows (see <cit.>) that lim_n→∞ Pr (X = 0) =e^-e^-x, which implies the result.
Recently <cit.>, Rodionov and the second author of the paper generalised Theorem <ref> for the maximum number of common neighbours of k vertices Δ_n, k in G(n, p), where k is an arbitrary fixed positive integer. Let p^k ≫ln^3 n/n, 1 - p ≫√(lnln n/n); then the appropriately rescaled Δ_n, k converges in distribution to a standard Gumbel random variable as well. The authors used a different approach for the following reasons:
(1) in the case k>1 the variance of the analogous random variables approaches infinity, which makes the method of moments no longer directly applicable;
(2) it is computationally difficult (and not clear that it is possible to do in general) to estimate higher moments of the analogous random variable X.
But it turns out that it is enough to condition the probability space on certain “frequent” events, then, for the conditional probability, prove that E X (X - 1) ∼ ( E X)^2, and finally apply some bounds on the probability of “non-existence” that are inspired by the method of Arratia et al <cit.>. Note that another possible approach to overcome dependencies between weakly dependent random variables is the Stein–Chen method (see, for example, <cit.>) for establishing Poisson approximations. For example, Malinovsky <cit.> recently presented a proof of Theorem <ref> using this method.
Finally, in <cit.> a similar result for the maximum number of s-cliques sharing a single vertex was proven.
Note that all the above statistics are particular cases of extension numbers that were studied by Spencer in <cit.>, who was inspired by the fact that properties of these statistics constitute the basis of the argument for the validity of first order 0-1 laws for sparse random graphs <cit.>. These statistics also appear to be useful in many other applications, see, e.g. <cit.>. An extension is simply a rooted subgraph of a given graph isomorphic to a fixed pattern rooted graph. Formally, let H be a graph with a distinguished set of roots R={u_1,…,u_t}, and let S={u_t+1,…,u_s} be all the other vertices of H (expansion set). An (R,H)-extension of a tuple of vertices T=(x_1,…,x_t) is a graph G on {x_1,…,x_s} such that for all i<j such that j>t, the vertices u_i, u_j are adjacent in H if and only if x_i,x_j are adjacent in G. Fix a rooted graph (R,H) and a t-tuple T from [n]:={1,…,n}. Denote by X(T):=X_(R,H)(T) — extension count — the total number of (R,H)-extensions of T in G(n,p) (note that we count extensions as not necessarily induced subgraphs). Spencer <cit.> proved the law of large numbers for the number of extensions in the case when (R,H) is grounded (there is at least one edge between the set of roots and the expansion set in H) and strictly balanced (extensions in which all proper subextensions have a strictly lower density) rooted graph and p is large enough.
These results were recently refined in <cit.>.
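For small graphs, the extension count X_(R,H)(T) can be computed by brute force. The sketch below counts not-necessarily-induced extensions (only the edges of H must be present in the host graph, in line with the parenthetical remark above); it is purely illustrative and infeasible for large n.

# Sketch: brute-force extension count X_{(R,H)}(T) for small graphs.
from itertools import permutations
import networkx as nx

def extension_count(G, T, H, roots):
    # T is the tuple of images of the roots of H (in the order given by roots);
    # every injective placement of the remaining vertices of H is checked.
    non_roots = [v for v in H.nodes if v not in roots]
    base = dict(zip(roots, T))
    count = 0
    for image in permutations(set(G.nodes) - set(T), len(non_roots)):
        phi = {**base, **dict(zip(non_roots, image))}
        # require every edge of H with at least one non-root endpoint to be present in G
        if all(G.has_edge(phi[u], phi[v]) for u, v in H.edges
               if u in non_roots or v in non_roots):
            count += 1
    return count

# Example b): H is a star with two root leaves (0, 1) and a non-root centre (2),
# so X(T) is the number of common neighbours of the two root vertices.
H = nx.Graph([(0, 2), (1, 2)])
G = nx.gnp_random_graph(30, 0.4, seed=1)
print(extension_count(G, (0, 1), H, roots=[0, 1]))
print(len(list(nx.common_neighbors(G, 0, 1))))   # should coincide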
In the current paper we consider G(n, p=const) (in order to avoid hard technical details; however, at least some of our results can be generalised to a wider range of p=p(n)) and address the following general question.
Given a rooted graph (R,H), what is the asymptotical distribution of the maximum of X(T)
over all possible choices of r-tuples T?
More precisely, are there a_n and b_n such that max_T X(T)-a_n/b_n converges weakly to a non-generate distribution, and what is the limit distribution if this is the case? For convenience, we consider only fully grounded rooted graphs (R,H), i.e. every root has at least 1 non-root neighbour. This assumption does not cause any loss in generality since clearly roots that are not adjacent to non-root vertices do not affect the maximum statistics we are looking at. In this paper we answer positively to the above question under certain conditions on (R, H). More precisely, let us call (R,H) symmetric, if the set of root vertices R can be divided into disjoint classes so that each non-root vertex is either not connected to the root set in H or is connected to all vertices of exactly one class. So the expansion set S(H) forms an arbitrary graph, and the only constraint is that the bipartite graph between S(H) and R is a disjoint union of complete bipartite subgraphs. Further in this section, we state the main result of our paper claiming a limit law for every symmetric extension. It generalises all the above mentioned results. Let us give various examples (see Fig. <ref>) of symmetric rooted graphs including the three instances for which the limit law was known:
a) H is a single edge with a single root. Then X_(R, H)(v) = deg(v), the asymptotic distribution of the maximum degree was described in Theorem <ref>.
b) H is a star graph with k rays, all leaves are roots. In this case X_(R, H)(v_1, …, v_k) = deg(v_1, … , v_k), that denotes the number of common neighbours of vertices v_1, … , v_k in G(n, p), the respective maximum was studied in <cit.>. In what follows, we denote by deg_G(U) and N_G(U) the number of common neighbours and the set of common neighbours of vertices from the set U in G respectively. We omit the subscript G, when the host graph G is clear from the context.
c) H is an s-clique with a single vertex being root. So X_(R, H)(v) is the number of s-cliques that share v. The respective maximum was studied in <cit.>.
Note that in the above three cases the bipartite graph between the set of roots and the expansion set is complete (i.e. there is a single class of roots), which appears to be crucial for the limit distribution to be from the Gumbel family. Let us give other two illustrative examples of symmetric extensions with several classes of roots:
d) H consists of a set of roots and an expansion set of equal size m, the bipartite graph between them is a matching, and the expansion set induces an m-clique. We call such an extension a bijective (m-)clique extension. Note that in this case there are m classes of roots, each one consists of a single vertex.
e) H is a simple path between two vertices x_1, x_2, the set of roots is R = {x_1, x_2}. There are exactly two classes of roots {x_1} and {x_2}. Note that the respective maximum statistics is the maximum number of paths of a given length between a pair of vertices.
As we will see later, the limit distributions of the maximum statistics related to the last two extensions do not belong to the Gumbel family.
Let us now introduce the necessary notations and state the main result of our paper. Consider a symmetric fully grounded rooted graph (R, H) with h vertices and f edges induced by the expansion set S(H). Let its set of roots R be divided into classes (in accordance with the definition of classes of roots of symmetric extensions) such that, for every i∈[r], there are exactly m_i classes of size k_i (here, k_1< … < k_r are cardinalities of all the root classes that are presented in H). It turns out that the limiting distribution (but not the scaling constants) depends solely on the bipartite rooted subgraph of H consisting of the same set of roots, vertices that are adjacent to at least one root in H and edges between the roots and non-roots. This subgraph is defined by the vector W(H):=((m_1, k_1), (m_2, k_2), … ,(m_r, k_r)) as well as the vector of cardinalities of sets of vertices from the expansion set that are adjacent to all roots from a class (over all classes). Thus, to determine this subgraph completely, we consider g_ij, i∈[r], j∈[m_i], being the number of common neighbours of the jth root class of size k_i in the expansion set. Without loss of generality we assume that g_i,1≥…≥ g_i, m_i for every i∈[r]. Let us denote by g_i:=∑_j=1^m_i g_i,j the number of vertices adjacent to all roots from a certain class of size k_i, and by g:=∑_i=1^r g_i the total number of vertices adjacent to at least one root. Finally, let s≥ 0 be the number of vertices from the expansion set that are not adjacent to roots. Clearly,
|R| + g + s = h .
Within the above notations, define
a_n = n^s + g-1 p^f/g_1, 1! g_1, 2! … g_r, m_r! [n p^∑_i=1^r k_i g_i + √(2n ln n)(∑_i=1^r g_i p^k_i (g_i - 1)√(k_i p^k_i(1-p^k_i))(1 - ln(k_i!)/2k_i ln n - ln[4π k_i ln n]/4k_i ln n))] ,
b_n = n^s + g - 1 p^f/g_1, 1! g_1, 2! … g_r, m_r! √(n/2 ln n) p^∑_i=1^r k_i g_i .
Then
max_T X(T) - a_n/b_n∑_i=1^r √(1-p^k_i/k_i p^k_i)∑_j=1^m_i g_i, jη_i, j ,
where the vectors η_i=(η_ij, j∈[m_i]) are mutually independent and have densities
p_η_i(x_1, x_2, … ,x_m_i) = e^-x_1· e^-x_2·…· e^-x_m_i· e^-e^-x_m_i· I(x_1≥ x_2≥…≥ x_m_i) .
Let us now briefly discuss the methods of the proof. It seems natural that the maximum number of extensions is achieved at the set of roots whose classes have maximum number of common neighbours. For example, it turns out that the maximum number of paths of a given length is drawn between two vertices with the first and the second maximum degrees. In the same way, a pair of vertices with maximum number of common neighbours has maximum possible number of k-cliques inside its neighbourhood. This can be proven using a conditional maximisation method that we distill from <cit.> and develop and generalise in the present paper. In <cit.> in this way the limit distribution of the maximum number of k-cliques sharing a single vertex was studied. Let us briefly recall the main line of the proof. For every vertex i of the random graph, consider its degree deg(i), and let Y_i be the expected number of k-cliques containing i conditioned on deg(i). The key argument that allows to transfer the limit distribution of max Y_i to the desired maximum number of k-cliques sharing a single vertex is
Let X(n) ∈ℝ^d, d = d(n), be a sequence of random vectors, let a_n and b_n be two sequences of constants, and let F be a continuous cdf.
Let for any x such that 0 < F(x) < 1:
* ∏_i=1^d Pr( Y_i ≤ a_n + b_n x) → F(x),
* Pr(max_i ∈ [d] Y_i ≤ a_n + b_n x) → F(x),
* for any fixed ϵ > 0,
Pr(|X_i - Y_i| > ϵ b_n) = o(1) Pr(Y_i > a_n + b_n x) uniformly over all i ∈ [d].
Then Pr(max_i ∈ [d] X_i ≤ a_n + b_nx) → F(x) as well.
In the present paper we generalise this technique to symmetric rooted graphs with arbitrary root classes. This is possible since the conditional expectation is a monotone function of cardinalities of common neighbourhoods of root classes. For this reason, we find the limiting distribution of the vector of maximums Δ^j_n, k_i, i∈[r], j∈[m_i], where Δ^j_n, k is the jth maximum number of common neighbours of a k-set in G(n, p). This generalises the main result of <cit.>. Note that, in particular, we show that whp the maximums are achieved at disjoint sets of roots (m_1 sets of size k_1, m_2 sets of size k_2, etc). Thus, it is possible to find explicitly the average number of (R, H)-extensions of these maximising sets of roots.
Let us now apply Theorem <ref> to rooted graphs described in a)-e). All these rooted graphs have r=1.
Note that all the rooted graphs defined in a), b), c) have m_1=1 implying that the limit distribution belongs to the Gumbel family. In particular, consider a rooted graph with k_1 roots and g pairwise adjacent non-roots, that are also adjacent to every root. This rooted graph generalises all rooted graphs from a), b), c). For the maximum number max_T X(T) of such extensions in G(n, p) we get (we let k=k_1)
Let r=1, m_1=1 and s=0. Let
a_n = (np^k)^g-1 p^{\binom{g}{2}}/g! [np^k + √(2n ln n) g√(kp^k(1-p^k))(1 - ln(k!)/2k ln n - ln[4π k ln n]/4k ln n)] ,
b_n = n^g-1 p^{\binom{g}{2} + kg}/(g-1)! √(n(1-p^k)/2k p^k ln n) .
Then max_T X(T) - a_n/b_nd→η, where η has cdf e^-e^-x.
Note that this number max_T X(T) is exactly the maximum number of g-cliques with at least k common neighbours of their vertices. It is worth mentioning that this claim was announced in <cit.>, however its complete proof was not presented.
Let us apply Theorem <ref> to the case d). Here W(H) = (m, 1), s = 0, g_1, j = 1, g_1 = g = m, f = \binom{m}{2}. By Theorem <ref>, we get that the cdf of the limiting random variable equals
F(x) = ∫_-∞^x/m∫_t_m^(x-t_m)/(m-1)…∫_t_2^x-t_m - … - t_2 e^-e^-t_m e^-t_m e^-t_m-1… e^-t_1 dt_1 … dt_m .
After accurate calculations, we can verify that its density function equals
ρ(x) = e^-x(e^{-e^{-x/m}}/m! + P(x)∫_{-∞}^{-e^{-x/m}} e^t/t dt)
for some polynomial P since F(x) can be represented as
F(x) = ∫_{-∞}^{x/m} e^{-e^{-t_m}} e^{-m t_m}/(m-1)! dt_m - e^{-x}∑_{i=2}^m 1/(i-1)! I_i(x) , where
I_i(x) = ∫_{-∞}^{x/m} e^{-e^{-t_m}}∫_{t_{m-1}}^{(x-t_m)/(m-1)}…∫_{t_{i+1}}^{(x-∑_{j=i+1}^m t_j)/i} dt_m … dt_i .
Note that Ei(y)= ∫_-∞^ye^t/t dt is an exponential integral which is not an elementary function. Thus:
Let (R,H) be a rooted graph presented on Fig. <ref>.d) with a clique of size m≥ 2. Let
a_n = (np)^{m-1} p^{\binom{m}{2}}[np + √(2n ln n) m√(p(1-p))(1 - ln[4πln n]/4ln n)],
b_n = (np)^{m-1} p^{\binom{m}{2}}√(np(1-p)/2 ln n).
Then max_T X(T) - a_n/b_nd→η, where η has cdf described in (<ref>).
Finally, we apply Theorem <ref> to the case e), which corresponds to the maximum number of paths with ℓ>3 edges between two vertices (ℓ=2, 3 are special cases of Corollaries <ref> and <ref> respectively). Here W(H) = (2, 1), s = ℓ-3, g_1, j = 1, g_1 = g = 2, f = ℓ-2. Note that the limit distribution is a particular case of (<ref>) with m=2 since, as we noted above, the limit distribution depends only on W(H) and (g_ij), so its density equals
ρ(x) = d/dx∫_-∞^x/2∫_t_2^x-t_2 e^-e^-t_2 e^-t_2 e^-t_1 dt_1 dt_2 = -e^-x∫_-∞^-e^-x/2e^t/tdt.
Thus, we got the following result:
Let (R,H) be a rooted graph presented on Fig. <ref>.e) with a path of length ℓ≥ 4. Let
a_n = (np)^ℓ-2p [np + 2√(2n ln n p(1-p))(1 - ln[4πln n]/4ln n)],
b_n = (np)^ℓ-2p √(np(1-p)/2 ln n).
Then max_T X(T) - a_n/b_nd→η, where η has density -e^-xEi(-e^-x/2).
So, indeed, the limit distributions of the maximum statistics from d) and e) do not belong to the Gumbel family.
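The closed form above can be checked numerically (reading the upper integration limit as -e^{-x/2}). The sketch below compares -e^-x Ei(-e^{-x/2}) with a numerical derivative of the double-integral representation and verifies that the density integrates to one; the integration ranges are pragmatic truncations.

# Sketch: numerical check of the limiting density for case e) with m = 2 roots.
import numpy as np
from scipy.special import expi
from scipy.integrate import quad, dblquad

def rho(x):
    return -np.exp(-x) * expi(-np.exp(-x / 2.0))

print("total mass:", quad(rho, -10, 40)[0])      # should be close to 1

def F_double(x):
    # F(x) as the double integral over t_2 <= x/2, t_2 <= t_1 <= x - t_2
    integrand = lambda t1, t2: np.exp(-np.exp(-t2)) * np.exp(-t2) * np.exp(-t1)
    return dblquad(integrand, -15, x / 2.0, lambda t2: t2, lambda t2: x - t2)[0]

for x in (-1.0, 0.0, 1.5, 3.0):
    h = 1e-3
    numeric = (F_double(x + h) - F_double(x - h)) / (2 * h)
    print(f"x={x:4.1f}  dF/dx={numeric:.4f}  rho={rho(x):.4f}")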
The rest of the paper is organised as follows. In Section <ref> we recall and state several auxiliary claims about the random graph related to the binomial distribution that we use later in the proof. Section <ref> is devoted to the joint limit distribution of scaled maximum numbers of common neighbours. The main result is proved in Section <ref>. Section <ref> is devoted to a discussion of further questions.
§ PRELIMINARIES
When working with maximum numbers of extensions, we frequently use asymptotical expressions for tails of binomial distribution from <cit.>, that follow from the de Moivre–Laplace limit theorem. In particular, the de Moivre–Laplace limit theorem immediately implies
Fix ℓ∈ℕ and x > 0. Consider arbitrary ℓ vertices a_1, a_2, … ,a_ℓ in the random graph. Then
Pr(|deg(a_1, … ,a_ℓ) - np^ℓ| > √(2x np^ℓ(1-p^ℓ)ln n)) = (1 + o(1))/(n^x√(π x ln n)).
Let us denote for convenience Γ_ℓ = np^ℓ + √(2ℓ np^ℓ(1-p^ℓ) ln n). By the union bound, whp the number of common neighbours of every set of ℓ vertices is at most Γ_ℓ. Further in the work, in many places we restrict the probability space to those graphs in which this property is satisfied for all ℓ≤ k, where k is a predefined fixed integer. We call this subspace 𝒬_n (omitting the dependence on k in the notation since it is always clear from the context); this restriction does not affect the convergence of probabilities to 0 or 1.
We also use the main result from <cit.> about the limit distribution of the maximum number of common neighbours.
Let Δ_n, k^m (k, m ∈ℕ) be the m-th highest number of common neighbours of k vertices in G(n, p), where the maximum is taken over all possible k-tuples of distinct vertices. Let the probability of drawing an edge p = p(n) ∈ (0, 1) be such that
p^k ≫ln^3 n/n, 1 - p ≫√(lnln n/n) as n →∞.
Let
a_n, k = np^k + √(2kp^k(1-p^k)n ln n)( 1 - ln(k!)/2k ln n - ln[4π k ln n]/4k ln n), b_n, k = √(p^k(1-p^k)n/2k ln n).
Then Δ_n,k^m - a_n, k/b_n, k converges in distribution to a random variable with cdf e^-e^-x∑_j=0^m-1e^-jx/j!.
We also use the asymptotics of the probability that a fixed k-set U has more than a_n,k + xb_n,k common neighbours. Denoting this event by B_U(x), using the de Moivre–Laplace limit theorem, it is easy to see (the full proof can be found in <cit.>) that
Pr(B_U(x)) ∼k!/n^k e^-x as n →∞.
In Appendix, we prove the useful technical lemma which is stated below. It claims that the maximum numbers of common neighbours are achieved at non-overlapping sets. We use this lemma to show that the maximum number of extensions is achieved at those disjoint root classes that, in turn, admit maximum numbers of respective subextensions by common neighbours.
Let m_i, k_i ∈ℕ, i∈[r], r∈ℕ, and all k_i be distinct. Let U_i,j, i∈[r], j∈[m_i], be k_i-sets such that cardinalities of their common neighborhoods are maximum, i.e. for every i∈[r] deg(U_i,1) ≥…≥ deg(U_i, m_i) are cardinalities of m_i biggest common neighborhoods among all k_i-sets. Then whp all U_i,j are disjoint.
We move the proof to Appendix B since it is actually a generalisation of a particular case of this result proven (implicitly) in <cit.>, and we use exactly the same proof strategy.
§ JOINT DISTRIBUTION OF MAXIMA
The limit distribution of the scaled maximum number of extensions in Theorem <ref> is in fact entirely determined by the joint distribution of the maximum numbers of common neighbours of sets of vertices of respective sizes, which is studied in this section. In the first subsection, we find the joint distribution of Δ_n,k_i:=Δ^1_n,k_i, i∈[r] — the maximum cardinalities of common neighborhoods of k_i vertices for distinct k_1,…,k_r.
In the second subsection, using this result, we find the limit joint distribution of the first m_i largest numbers of common neighbours of k_i vertices, i ∈ [r].
§.§ Maximum neighborhoods
It is shown here that the scaled maximum numbers of common neighbours are almost independent. More precisely, the following generalisation of Theorem <ref> (for constant p) is proved:
Let some x_1, x_2, … , x_r ∈ℝ be fixed. Then
Pr(Δ_n, k_1 - a_n, k_1/b_n, k_1≤ x_1, Δ_n, k_2 - a_n, k_2/b_n, k_2≤ x_2, …, Δ_n, k_r - a_n, k_r/b_n, k_r≤ x_r) → e^-e^-x_1· e^-e^-x_2·…· e^-e^-x_r as n→∞,
where constants a_n, k_i, b_n, k_i are defined in (<ref>).
Denote by X_i = X_i(x_i), i ∈ [r], the number of sets of k_i vertices that have a “large” number of common neighbours, namely, more than a_n, k_i + b_n, k_i x_i. Then our goal is to bound Pr(X_1 = 0, X_2 = 0, …, X_r = 0).
Lower bound
Pr(X_1 = 0, X_2 = 0, …, X_r = 0) ≥ Pr(X_1 = 0) Pr(X_2 = 0) … Pr(X_r = 0)
is a consequence of <cit.> — an application of the well-known FKG-inequality <cit.>. Indeed, the properties of the absence of sets with a large number of common neighbours are decreasing functions of the edges of the random G(n, p). The limit of the right-hand side of this bound coincides with the limit distribution in Claim <ref> due to Theorem <ref>.
Upper bound is in fact similar to the proof of <cit.> and follows almost directly from <cit.>. Let us recall the requirements and the statement of this lemma.
Let us denote by T the set of all subsets of vertices in G(n, p) of one of the sizes k_1, k_2, …, k_r. We consider two families of events: {B_U} and {B̃_U} = {B_U ∩{G ∈𝒬_n}}, where U = {u_1, … ,u_k_i}, i∈[r], is an arbitrary set in T. Note that x_i is substituted into the definition of B_U=B_U(x_i) according to the size of U. Thus our aim is to bound Pr( ⋂_U ∈ TB_U) ≤ Pr( ⋂_U ∈ TB̃_U). To do this, we use the following key lemma from <cit.>.
Let (A_i)_i ∈ [d] be the set of events with non-zero probabilities. If sets (D_i ⊂ [d] \{i})_i∈[d] satisfy
Pr( ⋃_j ∈ [i-1] \ D_i A_j | A_i ) - Pr( ⋃_j ∈ [i-1] \ D_i A_j ) ≤φ ,
for some φ≥ 0 and all i∈[d], then
Pr(⋂_i ∈ [d]A_i) ≤∏_i∈[d] Pr(A_i) + φ( 1 - ∏_i ∈ [d] Pr ( A_i) ) + Δ ,
where Δ = Δ(A, D) = ∑_i∈[d] Pr( A_i ∩⋃_j ∈ [i-1]∪ D_i A_j) ∏_ℓ∈ [d] \ [i] Pr (A_ℓ).
It is useful to choose D_i to be the set of all j≠ i so that A_j strongly depends on A_i. We order all U∈ T, and let A_i=B̃_U for the ith set U. We also let j∈ D_i whenever the jth set of T has a non-empty intersection with the ith set from T. Then
∏ Pr (A_i)=∏_U∈ T Pr(B̃_U) = exp[ ∑_U∈ Tln (1 - Pr(B̃_U)) ] = exp[ ∑_i∈ [r] -λ_k_i + o(1) ],
where λ_k = ∑_U ⊂ [n], |U| = k Pr (B̃_U). In <cit.> it is proved that λ_k ∼ e^-x_k as n →∞. Thus, it suffices to verify that Δ=o(1) and φ=o(1).
Let us first prove that Δ=o(1). In the proof of Lemma <ref> it is shown that for arbitrary i, j ∈ [r] and an arbitrary C ∈ℝ
∑_U ∩ V ≠∅, |U| = k_i, |V| = k_j, U ≠ V Pr(deg(U) > a_n, k_i + C √(n/ln n), deg(V) > a_n, k_j + C √(n/ln n), G ∈𝒬_n) → 0 .
Choose C sufficiently small and get
Δ≤∑_U∈ T,V∈ T: V∩ U≠∅ Pr(B̃_U∩B̃_V)=o(1) .
It remains to prove that φ=o(1). For every U∈ T
Pr( ⋃_V ∩ U = ∅ B̃_V | B̃_U ) - Pr( ⋃_V ∩ U = ∅ B̃_V ) ≤
≤ Pr( ⋃_V ∩ U = ∅ {deg_G \ U(V) > a_n,k_i + x_i b_n, k_i - |U|} ) - Pr( ⋃_V ∩ U = ∅ B̃_V ) ,
where k_i=k_i(V)=|V| and x_i=x_i(V) is defined accordingly. So due to the union bound and the de Moivre–Laplace limit theorem we get
Pr( ⋃_V ∩ U = ∅ B̃_V | B̃_U ) - Pr( ⋃_V ∩ U = ∅ B̃_V ) ≤∑_V ∈ T Pr (deg(V) ∈ [-k_r, 0] + a_n,k_i + x_i b_n, k_i ) → 0
uniformly over i ∈ [d], implying that φ = o(1) and completing the proof.
§.§ First m_i maxima
For i∈[r] and j ∈[m_i], let ξ_i, j be the centered and normalised j-th maximum number of common neighbours of k_i vertices in G(n, p) with the scaling constants defined in (<ref>), i.e.
ξ_i, j = Δ_n, k_i^ j - a_n, k_i/b_n, k_i .
The purpose of this section is to find the limiting distribution of the random vector ξ comprising all s = ∑_i=1^r m_i random variables ξ_i, j, i∈[r], j ∈ [m_i].
For x∈ℝ^s we will denote its coordinates by x_i, j, i∈[r], j ∈ [m_i], for convenience. Clearly, it is sufficient to study the distribution of ξ on the set Y = {x ∈^s : ∀ i ∈ [r] x_i, m_i≤ x_i , m_i - 1≤…≤ x_i, 1}, since from the definition ξ_i, m_i≤ξ_i, m_i-1…≤ξ_i, 1 for every i ∈ [r]. Fix x∈ℝ^s. For i∈[r], set A(i) = {ξ_i, 1≤ x_i, 1, ξ_i, 2≤ x_i, 2, … , ξ_i, m_i≤ x_i, m_i}.
For i∈[r], t∈[m_i] and 1≤ℓ_1≤ℓ_2≤…≤ℓ_t-1≤ m_i, define
A(i; ℓ_1, …, ℓ_t-1)={ξ_i, 1∈ [x_i, ℓ_1, x_i, ℓ_1+1], …, ξ_i, t-1∈ [x_i, ℓ_t-1, x_i, ℓ_t-1+1], ξ_i, t≤ x_i, m_i}
— the event saying that each ξ_ij (except the smallest one) is between two consecutive coordinates of x. Clearly, A(i) is the disjoint union of all possible A(i; ℓ_1, …, ℓ_t-1). So, in order to find the distribution of ξ it is sufficient to find it on all Cartesian products of events A(i; ℓ_1, …, ℓ_t-1) over i∈[r]. As we will see later, in order to compute the density of the limit distribution of ξ, it is sufficient to find the measure of one “simple brick” D = D_1 ×…× D_r, where:
D_i = {ξ_i, 1∈ [x_i, 2, x_i, 1],ξ_i, 2∈ [x_i, 3, x_i, 2], … ,
ξ_i, m_i-1∈ [x_i, m_i, x_i, m_i-1], ξ_i, m_i≤ x_i, m_i} .
Let us also restrict the probability space only to those graphs in which the first m_i maxima numbers of common neighbours of k_i-sets are reached at non-overlapping sets over all i∈[r]. We denote this event as DisjRoots. From Lemma <ref> whp DisjRoots happens, so the limit of Pr(D_1 × D_2 ×…× D_r) is the same as the probability limit of D' = D_1 × D_2 ×…× D_r ∩ DisjRoots.
Now we consider the set of disjoint events D'(w), w∈ W, where W = (U_ij, i∈[r], j∈[m_i-1]) — the set of all tuples of disjoint sets U_i,j of size k_i, and
D'(w) = ⋂_i=1^r {∀ j∈[m_i-1] deg(U_i, j) - a_n, k_i/b_n, k_i∈ [x_i, j+1, x_i, j], max_V_i ∈ G/U, |V_i| = k_i deg(V_i) - a_n, k_i/b_n, k_i≤ x_i, m_i},
where U = ⨆_i∈[r], j∈[m_i-1] U_i,j. It is obvious that
∑_w Pr(D'(w)) - Pr(DisjRoots) ≤ Pr(D') ≤∑_w Pr(D'(w)),
so it is enough to estimate the sum of Pr(D'(w)) over w∈ W. The total number of vectors in W is
|W| = \binom{n}{k_1}·\binom{n - k_1}{k_1}·…·\binom{n - (m_1 - 2)k_1}{k_1}·\binom{n - (m_1 - 1)k_1}{k_2}·…·
·\binom{n - (m_1 - 1)k_1 - (m_2-2)k_2}{k_2}·…·\binom{n - ∑ (m_i-1)k_i + k_r}{k_r} = n^|U| (1+o(1))/((k_1!)^{m_1-1} (k_2!)^{m_2-1}… (k_r!)^{m_r-1}) .
Let us order pairs (i, j) lexicographically. Denote G_ij = G/⋃_(i',j') < (i,j)U_i',j'. Then we have for each w ∈ W:
D'(w) = {∀ i∈[r] ∀ j∈[m_i-1] deg_G_ij(U_i, j) - a_n, k_i + ϵ_i, j/b_n, k_i∈ [x_i, j+1, x_i, j],
∀ i∈[r] max_V_i ∈ G/U, |V_i| = k_i deg_G/U(V_i) - a_n, k_i + ϵ_i/b_n, k_i≤ x_i, m_i},
where ϵ_i,j and ϵ_i are random variables equal to the number of common neighbours of U_i,j and V_i respectively among the union of the previous ones in our enumeration {U_i, j}. It is clear that for all i∈[r], j∈[m_i], ϵ_i, ϵ_i, j < |R|=const. Using this and the consequence of the De Moivre–Laplace theorem (<ref>), we get that the probability limit is
lim_n→∞ Pr(D'(w)) = ∏_i=1^r ( (k_i)! (e^-x_i,2 - e^-x_i, 1)/n^k_i×…×(k_i)!(e^-x_i, m_i - e^-x_i, m_i-1)/n^k_i) ×
×lim_n→∞ Pr(max_V_1 ∈ G/U, |V_1| = k_1 deg_G/U(V_1) - a_n, k_1/b_n, k_1≤ x_1, m_1, …, max_V_r ∈ G/U, |V_r| = k_r deg_G/U(V_r) - a_n, k_r/b_n, k_r≤ x_r, m_r).
Using the probability limit for the last factor from Claim <ref> and the asymptotics on |W| (<ref>), we get
lim_n→∞ Pr(D) = lim_n→∞ Pr(D') = ∑_w ∈ Wlim_n→∞ Pr(D'(w)) =
= ∏_i=1^r ( (e^-x_i, 2 - e^-x_i, 1) · (e^-x_i, 3 - e^-x_i, 2) ·…· (e^-x_i, m_i - e^-x_i, m_i - 1) ) ·∏_i=1^r e^-e^-x_i, m_i := F(x) .
We denote by 𝒜 the set of all Cartesian products of A(i; ℓ_1, …, ℓ_t-1) over i ∈ [r]. In the same way as above, it is easy to see that the limit probability of the j-th set A_j = A(1; ℓ^1_1, …, ℓ_t_1-1^1)×…× A(r; ℓ_1^r, …, ℓ_t_r-1^r) ∈𝒜 is
T_j(x) := ∏_i=1^r (e^-x_i, ℓ_1^i+1 - e^-x_i, ℓ_1^i) · (e^-x_i, ℓ_2^i+1 - e^-x_i, ℓ_2^1) ·…· (e^-x_i, ℓ_(t_1-1)^i+1 - e^-x_i, ℓ_(t_1-1)^i) ·∏_i=1^r e^-e^-x_i, m_i .
It is easy to see that the density of limit distribution of ξ equals
p(x_1, …, x_s) = ∂^s/∂ x_1 …∂ x_s∑_j=1^|𝒜| T_j(x_1, …, x_s) = ∂^s/∂ x_1 …∂ x_s F(x_1, …, x_s) =
= ∂^s/∂ x_1 …∂ x_s∏_i=1^r (e^-x_i, 2 - e^-x_i, 1) (e^-x_i, 3 - e^-x_i, 2) ·…· (e^-x_i, m_i - e^-x_i, m_i - 1) · e^-e^-x_i, m_i .
Expanding all brackets and differentiating, we obtain
ξ converges in distribution to a random vector with an absolutely continuous distribution with pdf p(x_1, …, x_s) = ∏_i=1^r p_i(x_i, 1, x_i, 2, … x_i, m_i), where each
p_i(x_1, x_2, … ,x_m_i) = e^-x_1· e^-x_2·…· e^-x_m_i· e^-e^-x_m_i· I(x_1≥ x_2≥…≥ x_m_i) .
Note that Theorem <ref> and Theorem <ref> are particular cases of Claim <ref> for constant p.
§ PROOF OF THE MAIN RESULT
In this section we prove the main result of the paper, Theorem <ref>, by implementing the conditional maximisation method described in Introduction. Let us consider in G(n,p) an arbitrary ordered set of vertices T of cardinality |R| and its partition into root classes A_ij, i∈[r], j∈[m_i]. Let Y(T) be the number of (R, H)-extensions conditioned on numbers of common neighbours for all root classes A_ij. Thus
Y(T) = E ( . X_(R, H)(T) | _G(A_ij), i∈[r], j∈[m_i]) .
The general idea is to find the limit distribution of a scaled max_T Y(T) and then prove that the maximum number of extensions max_T X(T) is not much different from it and so converges to the same distribution. It is worth noting that we can not do the same as in <cit.> and directly apply Lemma <ref> since the first condition is not satisfied in our settings: the product of probabilities does not converge to the limit distribution of maxima. However, we state a more general lemma, which is sufficient for our purposes:
Let X=X(n) ∈ℝ^d, d = d(n), be a sequence of random vectors. Let a_n and b_n be two sequences of real constants, and let F be a continuous cdf. Let, for any x∈ℝ such that 0 < F(x) < 1,
* Pr(max_i ∈ [d] Y_i ≤ a_n + b_n x) → F(x),
* for any fixed ε > 0,
∑_i=1^d Pr(|X_i - Y_i| > ε b_n) = o(1) .
Then Pr(max_i ∈ [d] X_i ≤ a_n + b_nx) → F(x) for all x∈ℝ.
The proof of this lemma is similar to the proof of Lemma <ref>; it can be found in Appendix A. We verify the first requirement in Lemma <ref> with cdf defined in (<ref>) in Section 4.1. The second condition is verified using Janson inequality and a similar (but weaker) upper tail bound in Section 4.2 completing the proof of Theorem <ref>.
§.§ Convergence of the expected conditional number of extensions
Here we will heavily rely on Claim <ref>.
Consider an arbitrary set of vertices T of size |R| and its partition in accordance with W(H):
T = ⨆_i=1^r A_i, A_i = ⨆_j=1^m_i A_i,j, where |A_i,j| = k_i .
Then for Y(T) defined in (<ref>) we have:
Y(T) = p^f (n-h+s) · (n-h+s-1) ·…· (n-h+1) · E( S(T) | deg(A_i,j), i∈[r], j∈[m_i]) ,
where S(T) is the number of (R, H')-extensions of T in G(n, p), and H' is obtained from H by deleting all non-root vertices that are not adjacent to roots and also all edges between all the remaining non-root vertices.
Let us estimate the conditional expectation of S(T). From the definition of symmetric extensions, each vertex of this “first” level in H is connected to exactly one of the sets of roots corresponding to A_i,j in G(n,p). Note that, if U is the set of all common neighbours of A_1, 1 in G(n, p), then it may happen that some other A_i,j has common neighbours in U or that some roots from T belong to U. Then obviously
∏_i∈[r],j∈[m_i]\binom{deg(A_i,j) - |R| - g}{g_i,j}≤ E(S(T) | deg(A_i,j), i∈[r], j∈[m_i]) ≤∏_i∈[r],j∈[m_i]\binom{deg(A_i,j)}{g_i,j} .
Thus, assuming that all deg(A_i,j)→∞ as n→∞, we get that
Y(T)=p^f n^s ∏_i∈[r],j∈[m_i] deg(A_i,j)^g_i,j/g_i,j! (1+O(1/deg(A_i,j))) .
Denote ψ_i, j = deg(A_i, j) - a_n, k_i/b_n, k_i with constants a_n, k, b_n, k defined in (<ref>). Note that for every i∈[r], the first m_i maxima of ψ_i,j over A_i,j equal ξ_i, 1≥ξ_i,2≥…≥ξ_i,m_i, where ξ_i,j are defined in Section 3.2. Since a_n, k∼ n, b_n, k∼√(n/ln n), and whp ψ_i,j = O(ln n) (we further restrict the space of graphs to those in which this condition is satisfied, the convergence of probabilities does not change), then whp
Y(T) = a(n) + b(n) + o(n^s+g-1√(n/ln n)),
where
a(n) = p^f n^s/∏_i∈[r],j∈[m_i] g_i,j!∏_i=1^r a_n, k_i^∑_j=1^m_i g_ij∼p^f n^g+s-1/∏_i∈[r],j∈[m_i] g_i,j!( np^∑_i=1^r k_i ∑_j=1^m_i g_i,j + .
+ . √(2n ln n)(∑_i=1^r(∑_j=1^m_i g_ij) p^k_i (∑_j=1^m_i g_i,j - 1)√(k_ip^k_i(1-p^k_i))(1 - ln(k_i!)/2k_i ln n - ln[4π k_i ln n]/4k_i ln n))) = a_n ,
b(n) = p^fn^s/∏_i∈[r],j∈[m_i] g_i,j!∑_i=1^r [ a_n, k_i^∑_j=1^m_i g_i,j - 1( ∏_i' ≠ i^r a_n, k_i'^∑_j=1^m_i' g_i',j) b_n, k_i(∑_j=1^m_i g_i,jψ_i, j)]
∼p^fn^s+g-1/∏_i∈[r],j∈[m_i] g_i,j!√(n/2 ln n) p^∑_i=1^r k_i ∑_j=1^m_i g_i,j( ∑_i=1^r √(1-p^k_i/k_i p^k_i)∑_j=1^m_i g_i, jψ_i, j)
= b_n (∑_i=1^r √(1-p^k_i/k_i p^k_i)∑_j=1^m_i g_i, jψ_i, j) .
By Claim <ref> and Slutsky's theorem,
max_T Y(T) - a_n/b_n = ∑_i=1^r √(1-p^k_i/k_i p^k_i)∑_j=1^m_i g_i, jξ_i, j + o_P(1) d→η ,
where η has cdf defined in (<ref>). Note that the equality in (<ref>) holds true due to the descending order of g_i,j for each fixed i since ξ_i,1≥…≥ξ_i, m_i. It is also worth noting that whp the maximum of Y(T) coincides with the point-wise maximum (i.e. is achieved at A_i,j that have maximum numbers of common neighbours). Finally, Lemma <ref> together with (<ref>) imply the first requirement in Lemma <ref>.
The pdf of η can be found explicitly due to Claim <ref>. Note that in the case r=1, we may divide both parts of (<ref>) by √(1-p^k_1/k_1 p^k_1), avoiding the dependence of the limit distribution on p.
§.§ Deviation from the expected conditional number of extensions
Here, using Janson-type correlation inequalities, we check the condition (<ref>):
∑_T Pr(|X(T) - Y(T)| > ε b_n) = o(1) .
Obviously, it suffices to show that, uniformly over all root sets T in G(n,p) with |T|=|R|, the probability of such a deviation is o(1/n^|R|). We use the same notation for A_i,j as in the previous section. Due to Claim <ref> and the union bound, with probability o(1/n^|R|), for at least one of the constantly many sets A_i,j in the decomposition of T the number of common neighbours (A_i, j) differs from np^k_i by more than √(2|R|np^k_i(1-p^ k_i) ln n). Let 𝒮_i be the set of all integers that differ from np^k_i by at most √(2|R|np^k_i(1-p^ k_i) ln n). Then
Pr(|X(T) - Y(T)| > b_n ε) ≤max_s_i,j∈𝒮_i Pr(|X(T) - Y(T)| > b_n ε|(A_i, j) = s_i,j, i∈[r], j∈[m_i] ) + o(1/n^|R|).
Let us first get an upper tail bound using the inequality from <cit.>. For convenience we recall this inequality below:
[V. Rödl, A. Ruciński <cit.>]
Let Γ_p be a binomial random subset of a finite set Γ, and let ℱ be a family of subsets in Γ. Let Z=∑_F∈ℱ I(F⊂Γ_p) count the number of times when F∈ℱ appear as subsets of Γ_p. Let D be the maximum (over F) number of sets in ℱ that overlap with a single F∈ℱ. Then, for every t≥ 0,
Pr(Z ≥ EZ + t) ≤ (D + 1)exp[-t^2/(4(D + 1)( E Z + t/3))] .
Now we fix s_i,j∈𝒮_i, i∈[r], j∈[m_i], and also fix subsets S_i,j⊂[n]∖ T of sizes s_i,j. Assume that N(A_i,j) = S_i,j for all i∈[r], j∈[m_i]. In order to apply Claim <ref>, we let Γ be the set of all edges that have both end-points outside T. Let Z count the number of (R,H)-extensions of T. Then the family ℱ consists of sets of edges induced by sets of vertices of size O(n^h-|R|), and thus D=O(n^h-|R|-2). Recall that b_n = Θ(n^s+g-1√(n/ln n)) = Θ(n^h-|R|-1√(n/ln n)) by (<ref>). Therefore, from (<ref>) and the definition of S it follows that E(X(T) | (A_1,1), … , (A_r, m_r)) = Θ(n^h-|R|). Thus, using Claim <ref>:
Pr(X(T) - Y(T) > b(n)ε | (A_i, j)=s_i,j, i∈[r], j∈[m_i] ) ≤ n^O(1)exp[-Θ(n/ln n) ] .
To get the lower tail bound, we use Janson's inequality <cit.>. Since the expected number D of edge-crossing extensions is O(n^2(h-|R|) - 2), we get:
Pr(X(T) - Y(T) < -b(n) ε | (A_i, j)=s_i,j, i∈[r], j∈[m_i] ) ≤exp[-b_n^2 ε^2/(2D)] = exp[-Θ(n/ln n)] .
Combining (<ref>) and (<ref>), we finish the proof of (<ref>) and, thus, the proof of Theorem <ref> as well.
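As a quick numerical sanity check (ours, not part of the proof), the concentration used throughout — the number of common neighbours of a constant-size vertex set stays within O(√(n ln n)) of np^k — is easy to probe by simulation. The minimal Python sketch below also compares the maximum over all k-sets with the first-order prediction np^k + √(2k n p^k(1-p^k) ln n); this closed form is our reading of the order of magnitude of a_n,k and ignores the lower-order corrections appearing in (<ref>).

import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

def max_common_neighbours(n, p, k):
    # sample the adjacency matrix of G(n, p)
    upper = np.triu(rng.random((n, n)) < p, 1)
    adj = upper | upper.T
    best = 0
    for S in itertools.combinations(range(n), k):
        # size of the common neighbourhood of the k-set S
        best = max(best, int(np.all(adj[list(S), :], axis=0).sum()))
    return best

n, p, k = 300, 0.5, 2
observed = max_common_neighbours(n, p, k)
predicted = n * p**k + math.sqrt(2 * k * n * p**k * (1 - p**k) * math.log(n))
print(observed, round(predicted, 1))   # the two values should be close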
§ FURTHER QUESTIONS
We believe that our techniques can be used to prove the convergence of a rescaled maximum number of extensions even for non-symmetric (R,H), while it should be hard to find the limit distribution.
In particular, for probably the easiest non-symmetric (R, H), consisting of two roots v_1,v_2 and two adjacent non-roots u_1,u_2 such that u_1 is adjacent to both v_1,v_2 and u_2 is only adjacent to v_2 (see Fig. 2), we need a local limit theorem for vectors of dependent binomial random variables, which may be hard to eliminate.
Also, achieving a sufficient upper bound for Δ to apply Lemma <ref> could be technically very involved. Though we shall note that vertices of H that are not adjacent to R do not cause any additional difficulties.
Note that Bollobás <cit.>, Ivchenko <cit.> and Rodionov and Zhukovskii <cit.> also studied m-th maxima of cardinalities of common neighbourhoods. It is of interest to get similar results for arbitrary symmetric extensions, although this might not be so straightforward when r>1 (let us recall that r is the number of different cardinalities of root classes) or when r=1 and m>2.
Finally, our results can be generalised to p=p(n)=o(1) (but p>n^-ε for some small enough constant ε>0) when r=1. For larger r, the limit distribution that we get depends on p. So, for r>1 and p=o(1), the limit behaviour of the maximum number of extensions should be different.
§ ACKNOWLEDGEMENTS
Stepan Vakhrushev is supported by Russian Science Foundation, project 22-11-00131.
prob_method N. Alon, J.H. Spencer, The Probabilistic Method, Third Edition, John Wiley & Sons (2008).
arratia R. Arratia, L. Goldstein, L. Gordon, Two moments suffice for Poisson approximations: the Chen-Stein method, The Annals of Probability 17:1 (1989) 9–25.
moments P. Billingsley, Probability and measure, 3d Edition, Wiley (2012).
rand-proc-trian T. Bohman, A. Frieze, E. Lubetzky, Random triangle removal, Advances in Mathematics, 280 (2015) 379–438.
rand-proc-dynconc T. Bohman, P. Keevash, Dynamic concentration of the triangle-free process, Random Structures & Algorithms, 58 (2021) 221–293.
rand-proc-evol T. Bohman, P. Keevash, The early evolution of the H-free process, Inventiones mathematicae, 181 (2010) 291–336.
bolobas B. Bollobás, The distribution of the maximum degree of a random graph, Discrete Mathematics, 32 (1980) 201–203.
carleman K.L. Chung, A Course in Probability Theory, 2d ed, Academic Press, New York, (1974).
FT R.A. Fisher, L.H.C. Tippett, Limiting forms of the frequency distribution of the largest or smallest member of a sample, Mathematical Proceedings of the Cambridge Philosophical Society, 24 (1928) 180–190.
ftg B. Gnedenko, Sur La Distribution Limite Du Terme Maximum D'Une Serie Aleatoire, Annals of Mathematics, 44:3 (1943) 423–453.
main M. Isaev, I. Rodionov, R. Zhang, M. Zhukovskii, Extremal independence in discrete random systems, Annales de l'Institut Henri Poincaré (B) (to appear), preprint arXiv:2105.04917.
ivchenko G.I. Ivchenko, On asymptotic behaviour of the degrees of vertices in a random graph, Theory of Probability & Its Applications, 18:1 (1973) 195–203.
stein-chen S. Janson, Coupling and Poisson Approximation, Acta Applicandae Mathematicae, 34 (1994) 7–15.
janson_ineq S. Janson, T. Łuczak, A. Ruciński, Random graphs, Wiley (2000).
zero-one-when T. Łuczak, J. Spencer, When does the zero-one law hold?, Journal of the American Mathematical Society, 4 (1991) 451–468.
malinovsky Y. Malinovsky, A note on the distribution of the extreme degrees of a random graph via the Stein–Chen method, (2022) arXiv:2204.05881.
nad_mitov S. Nadarajah, K. Mitov, Asymptotics of maxima of discrete random variables, Extremes, 5:3 (2002) 287–294.
common I. Rodionov, M. Zhukovskii, The distribution of the maximum number of common neighbors in the random graph, European Journal of Combinatorics, 107: 103602 (2023).
rodl_rucinski V. Rödl, A. Ruciński, Random graphs with monochromatic triangles in every edge coloring, Random Structures & Algorithms, 5 (1994) 253–270.
zero-one-law S. Shelah, J. Spencer, Zero-one laws for sparse random graphs, Journal of the American Mathematical Society, 1 (1988) 97–115.
warnke M. Šileikis, L. Warnke, Counting extensions revisited, Random Structures & Algorithms, 61 (2022) 3–30.
spencer2 J.H. Spencer, Threshold functions for extension statements, Journal of Combinatorial Theory, Series A, 53 (1990) 286–305.
spencer J.H. Spencer, Counting extensions, Journal of Combinatorial Theory, Series A, 55 (1990) 247–255.
§ APPENDIX
§.§ A. Proof of Lemma <ref>
Let us denote A_i=A_i(x):={Y_i > a_n+b_nx}, B_i := {X_i > a_n+b_nx} for all i ∈ [d]. Note that it is sufficient to prove Lemma <ref> for all x∈ℝ such that 0<F(x)<1. Let us fix such an x∈ℝ. Find δ > 0 such that 0 < F(x-δ) ≤ F(x+δ) < 1. Let ε∈ (0, δ). We also denote A_i^ε := A_i(x+ε). The following inequalities hold:
Pr(⋃_i∈[d] A_i^ε) - Pr(⋃_i∈[d] B_i) ≤ Pr(⋃_i∈[d] A_i^ε\⋃_i∈[d] B_i) ≤∑_i∈[d] Pr(A_i^ε\ B_i) .
The condition (<ref>) implies ∑_i∈[d] Pr(A_i^ε\ B_i) = o(1), so
Pr(⋃_i∈[d] B_i) ≥ Pr(⋃_i∈[d] A_i^ε) - o(1) .
But from the first requirement in Lemma <ref>
1 - Pr(⋃_i∈[d] A_i^ε) → F(x+ε) .
Recalling that F is continuous and that the above holds for any ε∈ (0, δ), we conclude that
1- Pr(⋃_i∈[d] B_i) ≤ F(x) + o(1) .
The lower bound 1- Pr(∪_i∈[d] B_i) ≥ F(x) - o(1) is obtained similarly, using the events A_i^-ε:=A_i(x-ε) and the relation ∑_i∈[d] Pr(B_i \ A_i^-ε) = o(1) that follows directly from the condition (<ref>).
§.§ B. Proof of Lemma <ref>
Since all the considered parameters are constants, it is sufficient to prove that, for any positive integers k_1 ≥ k_2 ≥ k and m_1, m_2, whp the intersection of U_1, m_1 with U_2, m_2 does not have cardinality k. Let A:=A(k_1, k_2, k, m_1, m_2) denote the event that this intersection has cardinality exactly k; it remains to show that Pr(A)→0. Let us separately consider the case when the second set is a subset of the first set, i.e. k_1 > k_2 = k.
Let us estimate the probability of A by the union bound over all choices of two sets U_2⊂ U_1 for the roles of U_1, m_1 and U_2, m_2:
Pr(A) ≤\binom{n}{k_1-k_2}·\binom{n-k_1+k_2}{k_2}· Pr(U_1, m_1={1, …, k_1}, U_2, m_2={1, …, k_2}).
Fix ε > 0. By Theorem <ref> on the limit distribution of the maximum number of common neighbours, there exist a constant C = C(ε) and an index n_0 such that, for all n > n_0:
Pr(|Δ_k_1, n^m_1 - a_k_1,n| ≥ C ·√(n/ln n)) < ε / 4 ,
Pr(|Δ_k_2, n^m_2 - a_k_2,n| ≥ C ·√(n/ln n)) < ε / 4 .
Hence, for n > n_0:
Pr(A) ≤\binom{n}{k_1-k_2}·\binom{n-k_1+k_2}{k_2}· Pr( |(1, …, k_i) - a_k_i, n| < C √(n/ln n) for i=1,2 ) + ε/2.
We write the internal probability in the following simple way:
Pr( |(1, …, k_1) - a_k_1, n| < C √(n/ln n), |(1, …, k_2) - a_k_2, n| ≤ C √(n/ln n)) ≤
≤∑_X ⊂ [n]: ||X| - a_k_1, n|/√(n/ ln n)≤ C Pr( N(1, …, k_1) = X ) Pr(.|(1, …, k_2) - a_k_2, n | ≤ C √(n/ln n)| N(1, …, k_1) = X) .
By the triangle inequality, the conditional probability in (<ref>) is bounded from above by the probability that the number of neighbours of U_2 in [n] \ (X ∪ U_1) differs from a_k_2, n - |X| by no more than 2C√(n/ln n). By the de Moivre-Laplace limit theorem, the probability of the latter event approaches 0 as n →∞.
From (<ref>) and (<ref>) we get:
0 ≤ Pr(A) ≤ o[ \binom{n}{k_1-k_2}·\binom{n-k_1+k_2}{k_2}· Pr(|(1, …, k_1) - a_k_1, n| ≤ C √(n/ln n))] + ε/2 .
From (<ref>) and (<ref>) it follows that Pr(|(1, …, k_1) - a_k_1,n| ≤ C√(n/ln n)) = O(n^-k_1), implying that the first summand on the right-hand side of (<ref>) approaches 0 as n →∞. Due to the arbitrariness of ε, the proof is completed.
Now consider the case when none of the sets is nested in the other, i.e. 1 ≤ k < min(k_1, k_2). In <cit.>, this statement is proven in the particular case k_1 = k_2. Our proof is similar, and we will use the bounds from <cit.> to get our results as well.
First, let us narrow down the probability space to graphs with a “small” number of common neighbours:
Pr(A) ≤ Pr(A ∩{G(n,p) ∈𝒬_n}) + Pr(G(n,p) ∉𝒬_n).
As discussed in Section 2, the second term tends to 0. In what follows, we estimate only the joint probability. Fix ε > 0. From Theorem <ref> there exists a constant C = C(ε) such that starting from some n_0 ∈ℕ:
Pr(Δ_k_1, n^m_1 - a_k_1,n≤ -C ·√(n/ln n)) < ε / 4,
Pr(Δ_k_2, n^m_2 - a_k_2,n≤ -C ·√(n/ln n)) < ε / 4 .
Then similarly to the previous case:
Pr(A ∩{G(n,p) ∈𝒬_n}) ≤\binom{n}{k}\binom{n - k}{k_1 - k}\binom{n - k_1}{k_2 - k}×
× Pr((1, …, k_1) - a_k_1, n/√(n / ln n) > -C, (k_1 - k + 1, …, k_1 + k_2 - k) - a_k_2, n/√(n / ln n) > -C, G(n,p) ∈𝒬_n ) + ε/2 .
Hence, it suffices to prove that the fourth factor (probability of the event) is o(n^-(k_1 + k_2 - k)). Denote b_1 = a_k_1, n - C √(n/ln n), b_2 = a_k_2, n - C √(n/ln n). It is obvious from the definition of 𝒬_n that
Pr(([k_1]) > b_1, ([k_1+k_2]∖ [k_1] - k) > b_2, G(n,p) ∈𝒬_n ) ≤
≤∑_i Pr(ξ_n, p^k=i) Pr(ξ_i, p^k_1-k > b_1 - (k_2-k)) Pr(ξ_i, p^k_2-k > b_2 - (k_1-k)) +
+ Pr(ξ_n, p^k≤ np^k-√(2(k_1+k_2)p^k(1-p^k)n ln n )),
where the summation is over i∈(np^k-√(2(k_1+k_2)p^k(1-p^k)n ln n), Γ_k]. From Claim <ref> we get that the second term is n^-(k_1 + k_2)(1 + o(1))/2√((k_1 + k_2)πln n) = o(n^-(k_1+k_2)). Therefore, it suffices to estimate only the first sum.
By the de Moivre–Laplace limit theorem, uniformly over i:
Pr(ξ_n, p^k = i) = exp[-(np^k - i)^2/2np^k(1-p)^k]/√(2π n p^k(1-p^k))(1+ o(1)) .
By the de Moivre–Laplace limit theorem (here we skip the computations, which can be found in <cit.>):
Pr(ξ_i, p^k_1 - k > b_1 - (k_2 - k)) ≤√(1 - p^k_1-k)e^-(b_1 - i p^k_1-k)^2/2i p^k_1-k(1-p^k_1-k)(1+o(1))/√(2πln n)(√(2k_1(1-p^k_1)) - √(2k(p^k_1-k - p^k_1))) ,
and the same bound holds true with k_1 replaced with k_2 and b_1 replaced with b_2. From (<ref>) and (<ref>), we get that the first summand in right-hand side of (<ref>) is O(1)/√(n)ln n∑ e^-g(i)
, where
g(i) = (np^k - i)^2/2np^k(1-p^k) + (ip^k_1 - k - b_1)^2/2ip^k_1 - k(1-p^k_1 - k) + (ip^k_2 - k - b_2)^2/2ip^k_2 - k(1-p^k_2 - k) .
Denote i = np^k + x√(np^k(1-p^k)ln n), x ∈ (-√(2(k_1 + k_2)), √(2k)]. Then the first term in g(i) becomes x^2/2ln n.
After the replacement, we get:
g(i) = g̃_p(x) ln n + ĝ_p(x) lnln n(1 + o(1)),
where
g̃_p(x) = x^2/2 + x^2(p^k_1 - k - p^k_1)/2(1 - p^k_1 - k) + x^2(p^k_2 - k - p^k_2)/2(1 - p^k_2 - k) - 2√(2k_1)√((p^k_1 - k - p^k_1)(1 - p^k_1))x/2(1-p^k_1 - k) -
- 2√(2k_2)√((p^k_2 - k - p^k_2)(1 - p^k_2))x/2(1-p^k_2 - k) + 2k_1(1-p^k_1)/2(1-p^k_1 - k) + 2k_2(1-p^k_2)/2(1-p^k_2 - k),
and ĝ_p(x) is negative and bounded from below by a constant (in the same way as in <cit.>). It follows from the size of the summation segment that it suffices for us to show that g̃_p(x) ≥ k_1+k_2-k + ω(lnln n/ln n). We need the positive term ω(lnln n/ln n) to overcome the negative contribution of ĝ_p(x).
We set g̃_p(x)=1/2(g̃_1,p(x)+g̃_2,p(x)), where
g̃_j,p(x)=x^2(1+p^k_j-k - 2p^k_j )- 4√(2k_j)√((p^k_j-k - p^k_j)(1-p^k_j))x + 4k_j(1-p^k_j)/2(1-p^k_j-k), j=1,2.
In the same way as in <cit.>, we get that, for every j∈{1, 2}, g̃_j,p(x) ≥ 2k_j - k + ω(lnln n/ln n), completing the proof.
|
http://arxiv.org/abs/2307.00114v1
|
20230630195715
|
A Personalized Household Assistive Robot that Learns and Creates New Breakfast Options through Human-Robot Interaction
|
[
"Ali Ayub",
"Chrystopher L. Nehaniv",
"Kerstin Dautenhahn"
] |
cs.RO
|
[
"cs.RO",
"cs.AI",
"cs.HC"
] |
For robots to assist users with household tasks, they must first learn about the tasks from the users. Further, performing the same task every day, in the same way, can become boring for the robot's user(s); therefore, assistive robots must find creative ways to perform tasks in the household. In this paper, we present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users and then use the learned knowledge to set up a table for breakfast. The architecture can also use the learned knowledge to create new breakfast options over a longer period of time. The proposed cognitive architecture combines state-of-the-art perceptual learning algorithms, a computational implementation of cognitive models of memory encoding and learning, a task planner for picking and placing objects in the household, a graphical user interface (GUI) to interact with the user, and a novel approach for creating new breakfast options using the learned knowledge. The architecture is integrated with the Fetch mobile manipulator robot and validated as a proof-of-concept system evaluation in a large indoor environment with multiple kitchen objects. Experimental results demonstrate the effectiveness of our architecture in learning personalized breakfast options from the user and generating new breakfast options never learned by the robot.
§ INTRODUCTION
With a rapid increase in the aging population worldwide <cit.>, research is being conducted to develop autonomous robots that can assist older adults in their homes. These assistive robots are being designed for various roles, such as caretakers, cleaning robots, and home assistants <cit.>. To create robots that can assist users with household tasks, the robots will first need to learn the preferences of the users related to the assistive tasks. For example, for the task of setting up a table for breakfast, the robot must first learn the different kinds of breakfasts that the user likes. Further, after learning the user preferences, the robot must find creative ways to perform the assistive tasks, because performing the same task every day can become boring for the user. For example, setting up the same breakfast option for the user over multiple days could become boring and the user might want to try new things. Therefore, in this paper, our goal is to develop a computational architecture that can allow a household assistive robot to learn different breakfast options from its user, use the learned knowledge to set up a table for breakfast, and also create new breakfast options for the user.
For a household assistive robot to perform tasks, it needs the semantic knowledge of the household i.e. objects (e.g. bowl, spoon) and related contexts (e.g. kitchen). The robot must also be able to reason on the semantic knowledge to perform tasks using the objects in the household. Extensive research has been conducted in recent years to create semantic reasoning architectures for performing assistive tasks in household environments <cit.>. Most of these works use a pre-specified knowledge base to perform household tasks. However, in the real world, different users can have different preferences about the tasks that they need assistance with. Therefore, for such cases, we need to develop personalized household robots <cit.> that can learn about the tasks that the users need assistance with, from the users. Research has also been conducted on creativity for robots. Most research in this field has been on developing cognitive architectures for social robots to create new artistic drawings <cit.>, or for humanoid robots to perform creative dance moves <cit.>. However, these works are not directly applicable to household assistive robots for completing tasks in creative ways.
In this paper, we develop a cognitive architecture that allows a robot to learn different breakfast options using the objects in the household from its user, set up the learned breakfast options on a table upon request from the user, and create new breakfast options for the user over the long term. The architecture allows the robot to interact with its user using a graphical user interface (GUI) and learn different breakfast options. Inspired by the dual memory theory of mammalian memory <cit.>, the breakfast options taught by the user, grounded in the processed sensory data of the robot, are stored in the long-term episodic memory. The architecture also keeps track of different breakfasts eaten by the user over multiple days and stores them in short-term memory (STM). The architecture can access the learned knowledge from the episodic memory and plan lower-level actuator commands for the robot to set up a table for the learned breakfasts. The architecture can further reason on the knowledge stored in the episodic memory to generate a semantic knowledge graph which can be used to create new breakfast options. The user can ask the robot to set up a previously learned breakfast or create a new breakfast option through the GUI. We integrate the proposed architecture on the Fetch mobile manipulator robot <cit.> and test it in a large indoor space with 9 common kitchen objects. Experimental results confirm that the robot can accurately learn different breakfast options from the user and set them up on a table. The results also show that the robot can create various new breakfast options that were never observed by the robot in its experience in the household context.
§ RELATED WORK
Socially assistive robots have been developed in recent years that can be interactive meal partners for older adults in long-term care homes <cit.>. These robots, however, only interact with older adults to suggest different meal options and do not physically perform the task of setting up the table for a meal. Various cognitive architectures have been developed that can use the semantic knowledge of a household environment and physically perform tasks in the household, such as fetching an object, setting up a table for breakfast, cleaning a table <cit.>. Although these robots can perform different tasks in a household environment, they perform only a pre-programmed set of tasks, and they do not adapt to the preferences of their users. For example, the mobile manipulator robot in <cit.> can set up a table for only one type of breakfast. This can also get boring for the users if the robot sets up the same breakfast every single day over multiple weeks. In such cases, the robot must create new breakfast options for its users.
Research for developing creative robots has been limited to creating artistic drawings or dancing robots. For example, Augello et al. <cit.> develop a cognitive architecture for social robots that can create a new drawing while collaborating with a human. Infantino et al. <cit.> and Manfre et al. <cit.> develop cognitive architectures to enable creativity in humanoid robots so that they can dance in pleasant manners. These works, however, are not applicable to household assistive robots that can perform household tasks in creative ways. Research has also been conducted on developing cognitive architectures that can allow social robots to stimulate creativity in children <cit.>. These architectures, however, do not allow a robot to be creative but rather stimulate creativity in children.
With the advent of deep learning, generative adversarial networks (GANs) have been developed that can generate new data the model never learned <cit.>. These networks can learn general semantic representations about different household contexts (e.g. bedroom) from a large amount of training data, and then generate new images that were never seen by the model. One of the main limitations of these models is that they can generate many random images which do not belong to any context, such as creating random images that do not look like a bedroom context. Therefore, they cannot be applied to make assistive robots creative, as the robot would make many mistakes, which can hurt the trust of its user towards the robot <cit.>. Further, GANs also require a large amount of training data to learn, which might be infeasible in real-world situations where the robot learns from the supervision provided by its users. Real users (especially older adults) would be unwilling to provide hundreds and thousands of examples of a single task to teach the robot. In this paper, we use Gaussian processes <cit.> as generative models to create new breakfast options, as these models have been shown to work with limited data <cit.>.
§ CONTEXTUAL MEMORY SYSTEM FOR A CREATIVE ROBOT
Figure <ref> shows our cognitive architecture for a creative breakfast setting robot. Different computational modules in the architecture were integrated using ROS on the Fetch mobile manipulator robot. Note that all the modules are stand-alone, therefore they can be reused as blocks in different frameworks. These modules are described below:
§.§ Robot's Sensors
The Fetch mobile manipulator robot was used for this project <cit.>. Fetch consists of a mobile base and a 7 DOF arm. The robot also contains an RGB camera, a depth sensor and a Lidar sensor. These sensors can be used for 3D perception, slam mapping, and obstacle detection in the robot's environment. In our architecture, the mobile base, the 7 DOF arm, and all three sensors are used for perception, manipulation, mapping, and navigation in an indoor environment.
§.§ Perceptual System
The perceptual system of the architecture takes an RGB image and point cloud data as input from the robot's sensors, and parses this data into separate objects. We use the YOLOv2 object detector <cit.> for the detection of objects in the RGB images. The 2D bounding boxes from YOLO are converted into 3D coordinates using the point cloud data. We collected ∼5000 images of 9 household objects used in our experiments and trained the YOLO object detector on the collected data. The perceptual system, thus, parses the input images and outputs the object categories, 2D bounding boxes and 3D coordinates for all the objects in the image.
§.§ Memory Encoding
The data obtained from the robot's sensors or the perceptual system must be encoded into a low-dimensional feature space (also called a latent variable) before it can be used to reason about the entities in the world (e.g. objects in the household). In this paper, we encode the sensory inputs processed by the perceptual system using conceptual spaces <cit.>. In cognitive science, a Conceptual Space is a metric space in which entities are characterized by quality dimensions. Conceptual spaces have mostly been used in cognitive science for category learning, where the dimensions of a latent variable (LV) in a conceptual space represent the category features. In this paper, we use a conceptual space LV to represent different breakfast setups (e.g., {cereal, milk, bowl, spoon} together form one breakfast setup), where the features of the LV represent the collection of objects in the breakfast setup represented by the LV. Further, as each breakfast setup contains food items such as cereal, milk, etc., and utensils such as spoon, bowl, etc., we also encode this information about the objects in another LV. We term this LV a food-context LV to differentiate it from the object LV for the breakfast options. This information can help the architecture generate creative breakfast setups (Section <ref>).
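As a toy illustration only (the object vocabulary, the food/utensil split, and the exact encoding below are our assumptions, not details given in the paper), a breakfast setup can be encoded as a binary object LV, with the food/utensil distinction kept alongside it:

# Toy encoding of a breakfast setup into a binary object LV (hypothetical vocabulary).
OBJECTS = ["cereal", "milk", "yogurt", "bowl", "spoon", "cup"]
FOOD_IDX = [0, 1, 2]      # columns that are food items
UTENSIL_IDX = [3, 4, 5]   # columns that are utensils

def encode(setup):
    """Return the binary object LV; FOOD_IDX/UTENSIL_IDX stand in for the
    food-context information described in the text."""
    return [1 if obj in setup else 0 for obj in OBJECTS]

print(encode({"cereal", "milk", "bowl", "spoon"}))   # -> [1, 1, 0, 1, 1, 0]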
§.§ Short-Term Memory (STM)
Once an input image of a breakfast setup is encoded into a latent variable, it is stored in the short-term memory (STM) of the architecture. The size (k) of STM is set as a hyper-parameter to allow the architecture to store encoded images for a certain number of days. Once STM is full, data stored from earlier days is removed to make room for more data.
STM tracks the breakfasts eaten by the user over multiple days. Using the data stored in STM, the architecture can suggest breakfast options that the user has not eaten in previous days. Formally, let us consider n breakfast options stored in the episodic memory as LVs X={x_1, x_2, ..., x_n}. Over the course of k (the STM hyperparameter) days, the user eats different breakfast options, where M={m_1, m_2,..., m_n} represents the total number of times each of the n breakfast options was eaten by the user. From this set, the robot can find the breakfast options that were eaten the least over the k days, i.e., those whose count equals min M, and set one of them up on the table. If multiple breakfast options were eaten the least number of times, then the robot randomly chooses one of these breakfast options.
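A minimal sketch of this selection rule (illustrative only; the function and variable names are ours, not from the released system):

import random
from collections import Counter

def choose_breakfast(known_options, stm_history):
    """Pick one of the least-eaten known breakfasts over the STM window."""
    counts = Counter(stm_history)                      # m_i for each option
    eaten = [counts.get(option, 0) for option in known_options]
    least = min(eaten)                                 # min M over the k-day window
    candidates = [o for o, c in zip(known_options, eaten) if c == least]
    return random.choice(candidates)                   # random tie-break

options = ["cereal", "toast", "oatmeal"]
history = ["cereal", "toast", "cereal", "toast", "cereal"]   # last k = 5 days
print(choose_breakfast(options, history))              # -> "oatmeal"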
§.§ Episodic Memory
The episodic memory stores different breakfast options taught by the robot's user. As different users can have different breakfast preferences, it is not possible to store a general set of breakfast options. Therefore, the robot must learn about these preferences by interacting with the user.
In our architecture, a user can initiate a learning session using a GUI (details in Section <ref>) and provide examples of different breakfast setups. The robot captures the breakfast setups as images using its sensors. The perceptual system (Section <ref>) processes the training images which are then encoded into latent variables (Section <ref>). The encoded LVs (both object LVs and food-context LVs) are then stored in the episodic memory, which can be accessed later to set up a table for breakfast.
§.§ Creating New Breakfast Options
The user can also ask the robot to surprise them (see Figure <ref>) by creating a new breakfast option that the user never taught the robot i.e. such a breakfast option does not exist in the episodic memory. We define a creative breakfast as a new combination of food and utensil items that were never directly learned by the robot from the user. To achieve this, we use the object LVs stored in the episodic memory to find the mean μ and covariance matrix Σ for a Gaussian distribution in a Gaussian process. We generate a pseudo-LV[The sampled LVs are termed as pseudo-LVs because they are not real LVs learned from the user.] after sampling the Gaussian distribution. However, the pseudo-LV can be the same as one of the object LVs stored in the episodic memory i.e. it is not a new breakfast option. Therefore, if the pseudo-LV is the same as any of the object LVs in the episodic memory, we continue to resample the Gaussian distribution until we get a pseudo-LV that is different from the object LVs in the episodic memory.
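The sampling loop can be sketched as follows. This is a simplified illustration: we fit a single multivariate Gaussian to the stored object LVs and threshold the sample to a binary vector; the thresholding step and all names are our assumptions, and the actual system may parameterize its Gaussian process differently.

import numpy as np

rng = np.random.default_rng(3)

def sample_pseudo_lv(object_lvs, max_tries=1000):
    """Sample a candidate breakfast LV that differs from every stored LV."""
    mu = object_lvs.mean(axis=0)
    sigma = np.cov(object_lvs, rowvar=False) + 1e-6 * np.eye(object_lvs.shape[1])
    known = {tuple(row) for row in object_lvs.astype(int)}
    for _ in range(max_tries):
        sample = rng.multivariate_normal(mu, sigma)
        pseudo = tuple((sample > 0.5).astype(int))     # binarize (our assumption)
        if pseudo not in known:                        # resample until it is new
            return np.array(pseudo)
    return None

# toy example: columns = [cereal, milk, yogurt, bowl, spoon, cup]
lvs = np.array([[1, 1, 0, 1, 1, 0],
                [0, 1, 0, 0, 0, 1],
                [0, 0, 1, 0, 1, 0]])
print(sample_pseudo_lv(lvs))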
The new pseudo-LV, however, can be an invalid breakfast setup. For example, {cereal, milk, spoon} is an invalid breakfast setup as it does not contain any container (such as a bowl) to pour cereal and milk. To fix such cases, we find the conditional relationships among various objects that are used in different breakfast setups stored as LVs in the episodic memory. Using these conditional relationships we infer logic-based rules to generate a knowledge graph, which can be used to fix invalid breakfast setups.
We use the food-context LVs in the episodic memory to determine the dependency of different food items on a combination of other food items and utensils. To achieve this, let's consider n LVs in the episodic memory, and consider a food object represented by dimension i in the LVs. For each ith food item, we consider all the breakfast setups (say r LVs) where this food item exists. Among the r LVs, we first calculate the probability P(i|no_utensil), i.e. if the ith food item does not require a utensil to be present in a breakfast setup. If P(i|no_utensil)>0, there is at least one breakfast setup where the food item is not accompanied by a utensil, therefore the food item can be a part of a breakfast setup without a utensil. Otherwise, if P(i|no_utensil)=0, the food item requires at least one utensil. In this case, we go through all r LVs to find different combinations of utensils that the food item depends on, as a food item could depend on multiple utensils, e.g. cereal would depend on a spoon and a bowl. For this, let's consider that there are a total of m utensils present in the r LVs. For each jth utensil present in the r LVs, we find the conditional probability P(j|l) with all the other l={1,...,m} utensils in the r LVs. P(j|l) represents the probability that jth utensil exists given that lth utensil exists in the same LV. P(j|l) is determined as follows:
P(j|l) = ∑_q=1^r z_q^j 1[z_q^j z_q^l>0] / ∑_q=1^r z_q^l 1[z_q^l>0] ,
where z_q^l represents the value of lth utensil item in the qth food-context LV. If P(j|l)=1, utensil j must exist when utensil l exists in an LV accompanied by the ith food item. As a result, we get an m× m matrix representing the dependency of utensils on other utensils that accompany the ith food item in r LVs. Using this dependency matrix, we find all the utensil items that are independent of other utensils or that are interdependent with other utensils i.e. for two utensils j and l, P(j|l)=1 and P(l|j)=1. The resulting set represents combinations of different utensils that the ith food item depends on. These sets are then used to generate a logic-based knowledge graph based on is_required relationships (see Figure <ref> for an example). Note that the food item requires only one of the dependent utensil combinations to be present in the breakfast setup, not all the combinations. For example, milk must either be accompanied by a cup for drinking or {bowl, spoon} in a cereal breakfast. Figure <ref> shows a simple example of generating a logic-based knowledge graph from the learned breakfast options in memory.
After finding the dependencies on utensils, the same process is repeated to determine if a food item depends on other food items. After this process, we can find a combination of objects (foods or utensils) that each food item in the LVs depends on for valid breakfasts. We do not find a separate list of dependent objects for utensils as these items are only needed to accompany the food items in breakfast setups. Note that the knowledge graph is generated based on the breakfast options taught by the user, so the dependency rules encoded in the graph are personalized to the user. Experimental results in Section <ref> confirm this.
Using the logic-based knowledge graph, we can determine if a feature dimension in a pseudo-LV satisfies its dependency on other items. If a feature dimension is not accompanied by its dependent items, we manually add the dependent items in the pseudo-LV (see Figure <ref> for an example). Finally, after the dependency check, the pseudo-LV is decoded using the inverse of the procedure in Section <ref> to get the objects in the new breakfast option. The object names/labels are then passed on to the task planner.
§.§ Task Planner
The task planner gets the decoded breakfast option from the creativity module (Section <ref>), and plans lower-level actions to be taken by the robot to set up a table for breakfast. The task planner passes lower-level commands to the mobile base and the arm of the robot to move and fetch objects from the kitchen to the dining table.
§.§ Graphical User Interface
A simple graphical user interface (GUI) is integrated with the architecture to allow the robot to communicate with the user. The GUI allows the user to initiate a teaching session with the robot where the user can show the robot different breakfast setups on a table. The user physically places the set of objects in a breakfast setup on the table in front of the robot's camera (see Figure <ref>). The user can provide the name for the breakfast option by typing it in a textbox. The robot captures the breakfast data using the RGB camera and the depth sensor and then encodes and stores the breakfast option in the episodic memory (Section <ref>).
The GUI also allows the user to ask the robot to set up a table for breakfast. The user can type in the name of the breakfast that they want, ask the robot to set up the table for breakfast without typing any particular breakfast name or ask the robot to surprise them by creating a new breakfast option. After getting the input from the user, the architecture can use a combination of all the modules to allow the robot (Section <ref>) to set up the table for breakfast.
§ EXPERIMENTS
In this section, we first describe the experimental setup and the implementation details. We then describe two experiments to evaluate the performance of our architecture for learning different breakfast options from the user, setting them up on the table, and creating new breakfast options. For all the experiments reported in this section, the experimenters take the role of a user.
§.§ Experimental Setup
We use the Fetch robot <cit.> and its associated ROS packages for all the experiments. We performed experiments in a large indoor space where we set up the kitchen and the dining area with realistic household objects. The indoor space is mapped using the Lidar sensor on the Fetch robot and an existing SLAM algorithm available from Fetch Robotics. Navigation in the environment was achieved using ROS packages provided by Fetch Robotics. Common household items/objects belonging to 9 categories (see Table <ref> for a list of graspable objects) are placed on three tables in the kitchen. Out of the 9 objects, 3 (Banana, Bowl, and Spoon) were not graspable by the robot. Therefore, for breakfast setups that required these 3 objects, the user had to fetch the objects themselves. Manipulation of objects (pick and place) was achieved using ROS packages for gripper, arm, and torso control provided by Fetch Robotics.
The RGB camera and depth sensors on the Fetch robot were used for visual sensing of the environment. RGB images from the camera are passed through the perception module of the architecture which uses YOLOv2 <cit.> to detect and localize objects in the images (see Section <ref> for details).
For all the experiments (unless mentioned otherwise), the user (experimenter) first teaches the Fetch robot different breakfast options on the dining table using the GUI (Section <ref>). The robot learns the breakfast options and stores them in episodic memory. As the user would eat breakfast once every day, we can `simulate' multiple days by asking the robot to set up a table for breakfast multiple times a day. For the short-term memory (STM) in the architecture, we set the hyper-parameter k to 5 days. Examples of teaching breakfast options to the robot and testing the robot to set up known and new breakfast options are shown in the supplementary video.
§.§ Experiment 1: Setting Up Known Breakfast Options
In this experiment, we tested if the robot can learn breakfast options from the user and then set up the learned breakfast options when asked by the user. We taught the robot 7 different breakfast options as shown in Table <ref>. Figure <ref> shows examples of 2 out of 7 breakfast options learned by the robot. The robot was then asked to set up a table for breakfast 15 times. In each of the 15 turns (except for the 5th, 10th, and 15th run), the user typed the name of the breakfast in the GUI to ask the robot to set up the table for a particular breakfast. We randomly chose a breakfast option to be typed in each turn. For the 5th, 10th, and 15th turn, the user did not type any breakfast name, therefore the robot used the data stored in STM to choose the least eaten breakfast to set up. For each breakfast setup, the robot moved all the graspable objects in the breakfast setup from the kitchen to the dining table. On average, it took the robot ∼4 minutes to set up a breakfast option on the table.
Table <ref> shows the results of setting up 7 breakfast options learned by the robot. All the breakfast options were learned correctly by the robot, and there was no learning error. The robot was able to correctly set up breakfast options in 10 out of 15 runs. As each breakfast setup required multiple objects, failing to fetch even a single object would result in an incorrect breakfast setup. Most of the breakfast setup failures happened because of a single object in the breakfast setup (more details below). There were three runs when the robot was asked to set up a breakfast option using STM. The robot correctly chose one of the least-eaten breakfast options in all three runs.
Table <ref> shows the results of the experiment in 15 runs, with three different kinds of errors for each graspable object used in the 7 breakfast setups. The most common error type was the manipulation error (ME), which occurred for two reasons: (1) the motion planner could not find a path to reach the goal, or (2) the perceptual system provided an incorrect pose estimate to pick the object (perceptual error (PE)). There were no object detection failures during the experiment because even if the robot failed to detect an object, it moved its head up and down until it found the correct object. Therefore, all of the perceptual errors happened during the 3D pose estimation of objects. Finally, there was only one grasping error, for Orange. The robot's arm was not low enough, and because the orange is round, it was not captured by the robot's gripper. Objects with sharper edges, such as Milk, did not face this issue. These results confirm that our architecture can allow a robot to learn most breakfast options from the user and set up the learned breakfast options on a table.
§.§ Experiment 2: Creating New Breakfast Options
§.§.§ Experiment with a Robot
In this experiment, we tested the ability of our architecture to allow a robot to create new breakfast options that were never learned by the robot. The experimental setup was the same as in experiment 1. The robot was started with the same 7 breakfast options in the beginning as in experiment 1. After that, we tested the robot 5 times to create and set up new breakfast options.
Table <ref> shows the five new breakfast options created by the robot. All five breakfast setups were valid setups because each food object was accompanied by the correct set of utensils. Two out of five breakfast options generated by the Gaussian process were invalid. For example, breakfast option 2 had bowl missing, and breakfast option 5 had cup missing. However, these objects were added by the architecture using the logic-based rules encoded in the knowledge graph for the food items (Section <ref>). Finally, note that there was no learning error (LE) encountered for these breakfast options because they were not learned from any example provided by the user to the robot. The values for perceptual and manipulation errors were consistent with the previous experiments.
§.§.§ Simulated Experiments
To further evaluate the effectiveness of our breakfast creativity algorithm, we tested the architecture in simulation to create 50 breakfast options. Note that in this case the architecture was only asked to suggest the breakfast option and the robot did not physically set up the generated breakfast option on the table. Out of the 50 breakfast options, 27 were the same setups as the ones stored in the episodic memory and were thus discarded. Out of the other 23 options, 7 were invalid options generated by the Gaussian process. However, these invalid options were corrected using the logic-based knowledge graph for the food items. Overall, out of the 23 new breakfast options, 6 were duplicates, so there were 17 distinct new options. These results confirm that our architecture can allow the robot to create and set up new breakfast options that were not learned by the robot. Further, our architecture was able to create more than double the breakfast options (17) it had learned by interacting with the user (7). However, the robot cannot generate a significantly large number of distinct breakfast options when learning from a few examples.
We further test our approach on a larger scale with a total of 25 objects and an initial set of 20 breakfasts, and ask the creativity module to generate 200 breakfast options. Out of the 200 generated breakfasts, 65 were the same as the ones stored in the episodic memory and were therefore discarded. Of the remaining 135 breakfasts, 113 were invalid options, but they were corrected by the logic-based knowledge graph for the food items. Finally, out of the 135 new breakfast options, 36 were duplicates. Therefore, the architecture was able to generate 99 distinct new breakfast options from only 20 initial breakfast setups. These results confirm the scalability of our approach to larger datasets learned over the long term.
Finally, we tested our approach with some unconventional breakfast setups. For example, we added a breakfast setup {cereal, bowl}, as some users might eat cereal without any milk. Other examples of unconventional setups were {peanut_butter, bowl, spoon}, {yogurt, spoon}, etc. For this experiment, we had the same 25 objects as in the previous experiment and 12 breakfast setups, 6 of which were unconventional. The creativity module generated 50 breakfast options, with 25 out of 50 being distinct new options. Interestingly, we noticed that the creative breakfast setups followed the dependency of food items learned through the data of the initial breakfasts. For example, the creativity module generated setups such as {apple, cereal, bowl, spoon, yogurt, peanut_butter}, where cereal is not accompanied by milk. These results confirmed the ability of our architecture to personalize to its users' preferences even when creating new breakfast options. These results also show the effectiveness of our unique combination of data-driven learning, logic-based reasoning, and human-robot interaction.
§ CONCLUSIONS
This paper has presented an architecture for learning and setting up different breakfast options for the user. The architecture can also create new breakfast options that were never taught by the user. Extensive proof-of-concept system evaluations on a Fetch mobile manipulator robot demonstrate the ability of our architecture to allow a robot to accurately learn multiple breakfast options from the user and then set them up on a table upon request. The results also confirm the ability of the architecture to be able to track previously eaten breakfasts by the user to suggest new breakfasts, and even create multiple breakfast options that were never learned by the robot. We hope that this work will lead to designing more effective personalized household robots that can interact with, learn and provide long-term assistance to older adults in their own homes to support independent living.
§ LIMITATIONS AND FUTURE WORK
For all the experimental evaluations, the experimenter performed the role of the user. In the future, we hope to conduct a user study with real participants to investigate the usability of the system in real-world household environments. Further, the robot showed promising results with the chosen hyperparameter value for k in the STM. However, we hope to perform more experiments in the future to analyze the effect of this hyperparameter on the choice of breakfast options.
There were some objects that the robot struggled with when setting up different breakfasts, particularly because of the 3D pose estimation of objects. However, designing robust pose estimation and manipulation algorithms for complex household objects was out of the scope of this work. In the future, we hope to explore these limitations to scale up our approach to more realistic household environments.
|
http://arxiv.org/abs/2306.09279v2
|
20230615170001
|
Dynamical detection of a companion driving a spiral arm in a protoplanetary disk
|
[
"Chen Xie",
"Bin B. Ren",
"Ruobing Dong",
"Élodie Choquet",
"Arthur Vigan",
"Jean-François Gonzalez",
"Kevin Wagner",
"Taotao Fang",
"Maria Giulia Ubeira-Gabellini"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France; <[email protected]>
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Bd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France
Université Grenoble Alpes, CNRS, Institut de Planétologie et d'Astrophysique (IPAG), F-38000 Grenoble, France
Department of Physics & Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada
Univ Lyon, Univ Claude Bernard Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis-Laval, France
Steward Observatory, University of Arizona, USA
Department of Astronomy, Xiamen University, 1 Zengcuoan West Road, Xiamen, Fujian 361005, China
Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano MI, Italy
Radio and near-infrared observations have revealed dozens of protoplanetary disks that host spiral arm features.
Numerical simulations have shown that companions may excite spiral density waves in protoplanetary disks via companion–disk interaction. However, the lack of direct observational evidence for spiral-driving companions poses challenges to current theories of companion–disk interaction. Here we report multi-epoch observations of the binary system HD 100453 with the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) facility at the Very Large Telescope. By recovering the spiral features via robustly removing starlight contamination, we measure spiral motion across 4 yr to perform dynamical motion analyses. The spiral pattern motion is consistent with the orbital motion of the eccentric companion. With this first observational evidence of a companion driving a spiral arm among protoplanetary disks, we directly and dynamically confirm the long-standing theory on the origin of spiral features in protoplanetary disks. With the pattern motion of companion-driven spirals being independent of companion mass, here we establish a feasible way of searching for hidden spiral-arm-driving planets that are beyond the detection of existing ground-based high-contrast imagers.
Dynamical detection of a companion driving a spiral arm in a protoplanetary disk
0000-0002-6318-0104Chen Xie (谢晨)<ref>
0000-0003-1698-9696Bin B. Ren (任彬)Marie Skłodowska-Curie Fellow<ref>, <ref>
0000-0001-9290-7846Ruobing Dong (董若冰)<ref>
0000-0002-9173-0740Élodie Choquet<ref>
0000-0002-5902-7828Arthur Vigan<ref>
0000-0001-9423-6062Jean-François Gonzalez<ref>
0000-0002-4309-6343Kevin Wagner<ref>
0000-0002-2853-3808Taotao Fang (方陶陶)<ref>
0000-0002-5980-4287Maria Giulia Ubeira-Gabellini<ref>
Received July 31, 2023; accepted –
§ INTRODUCTION
The detection of spiral structures in protoplanetary disks has called for the understanding of spiral formation mechanisms <cit.>. Theoretical and hydrodynamical simulation studies have suggested that companion–disk interaction and disk gravitational instability (GI), together with other mechanisms such as vortex and shadowing <cit.>, are the most compelling approaches to contest for explaining the origin of spirals. To test spiral formation mechanisms, hydrodynamical simulations <cit.> and multi-epoch imaging studies <cit.> have been employed to associate spiral configuration or motion with the formation mechanism.
Although motion studies can distinguish between the companion-driven and GI-induced mechanisms, there has been no observationally direct dynamical evidence of the co-motion of a companion and the spiral that it drives. This calls for the verification of the basic assumption in associating the companion–disk interaction theory with spiral motion: the co-motion of companion and spiral <cit.>. It is thus of importance to push beyond identifying companions in spiral systems <cit.> by directly validating the motion measurement approach. Therefore, we need to apply motion studies to spiral systems with known companion(s).
In addition to validating the formation and motion mechanism of companion-driven spirals, an application to known companion–spiral system(s) can also formally establish the existence of these currently hidden spiral-driving planets. Such planets are the most compelling targets for confirmation with direct imaging using state-of-the-art telescopes (e.g., VLT/ERIS, JWST) in the current era, where targeted imaging approaches, rather than blind searches, are necessary to efficiently populate the family of directly imaged exoplanets. To establish the motion pattern in systems with known spirals and companions, multi-epoch imaging using identical instrument and observation modes can be ideal for minimizing instrument differences and data reduction bias.
HD 100453 is a binary system <cit.> at a distance of 103.8 ± 0.2 pc <cit.> with an age of 6.5 ± 0.5 Myr <cit.>. The primary star HD 100453 A (hereafter HD 100453) is a young Herbig A9Ve star with a mass of 1.70±0.09 M_⊙ <cit.>. The protoplanetary disk around the primary star was directly imaged at near-infrared (NIR) and submillimeter wavelengths, showing
a cavity, a ring, and two spiral arms from inside out
<cit.>. NIR interferometric observations revealed the presence of the inner disk <cit.> inside the cavity that is misaligned with and shadows the outer spiral disk <cit.>.
The secondary star HD 100453 B (hereafter the companion) is an early M star with a mass of 0.2 ± 0.04 M_⊙ <cit.>, located at a projected distance of around 1.05″ (109 au).
The HD 100453 system offers a particularly decisive test of co-motion between spirals and companions that might be driving them. Numerical simulations suggest that such a companion can truncate the disk and excite two spiral arms in the remaining disk as observed in the NIR <cit.>. The companion-driven origin of the spiral arms was supported by one of the observed spiral arms in the ^12CO line that connects to the companion position by assuming a coplanar orbit <cit.>. However, the potential mutual inclination between the companion orbit and the spiral disk <cit.> raised the possibility of a projection effect. With HD 100453 being the exemplary configuration of a companion–spiral system, we study the motions of the spiral(s) and the companion here.
§ OBSERVATIONS AND DATA REDUCTION
The HD 100453 system was observed with the Spectro-Polarimetric High-contrast Exoplanet REsearch <cit.> instrument at the Very Large Telescope (VLT) using the InfraRed Dual-band Imager and Spectrograph <cit.>. For spiral motion analysis, we retrieved three total intensity observations in K1-band (λ= 2.11 μm and Δλ = 0.10 μm) in April 2015 and April 2019, with a time span of 4.0 years. The observation and the pre-processing of the data are summarized in Appendix <ref>. Throughout the paper the observation of the disk on 08 April 2019 is only used for uncertainty estimation because its integration time is shorter than the observation on 07 April 2019 (see Table <ref>).
We processed the calibrated data by applying reference-star differential imaging (RDI) using the reference images assembled from <cit.>. We constructed a coronagraphic model of stellar signals and speckles via data imputation using sequential nonnegative matrix factorization <cit.> (see Appendix <ref> for a description of the detailed procedures). Combining the two techniques, RDI-DIsNMF is optimized for the direct imaging of circumstellar disks in total intensity by minimizing self-subtraction and overfitting that has plagued previous methods. We first generated the speckle features (i.e., NMF components of the stellar coronagraphic model) based on the disk-free reference library from <cit.> using NMF from <cit.>, then used these features to remove the speckles in HD 100453 observations.
To avoid the overfitting problem for RDI <cit.> that can change the morphology of spirals and bias spiral motion measurement, we masked out regions that host disk signals in HD 100453 data, and modeled the rest of the region using the NMF components, and then imputed the signals in disk hosting regions <cit.>. With RDI-DIsNMF using well-chosen reference images,
we were able to accurately recover the disk morphology with theoretically minimum post-processing artifacts, which is an essential requirement for accurate measurement of the pattern motion of spiral features.
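For readers unfamiliar with the data-imputation idea, the sketch below outlines it in a heavily simplified form (our illustration only: it uses an off-the-shelf scikit-learn NMF instead of the sequential DIsNMF component construction, a plain non-negative least-squares fit for the coefficients, and hypothetical variable names). The stellar/speckle model is fitted only on pixels outside the disk mask and then evaluated everywhere, so the disk region is imputed rather than overfitted.

import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

def rdi_data_imputation(target, references, disk_mask, n_components=5):
    """target: (npix,) science frame; references: (nref, npix) disk-free frames;
    disk_mask: boolean (npix,), True where the disk signal lives."""
    nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    nmf.fit(np.clip(references, 0, None))
    comps = nmf.components_                       # (n_components, npix) speckle features
    good = ~disk_mask                             # fit the stellar model outside the disk
    coef, _ = nnls(comps[:, good].T, np.clip(target[good], 0, None))
    stellar_model = coef @ comps                  # evaluated (imputed) over the mask too
    return target - stellar_model                 # residual: the recovered disk signal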
§ DYNAMICAL ANALYSIS
§.§ Pattern motion of the spiral arm S1
Using RDI-DIsNMF, we recovered the disk around HD 100453 at 2.11 μm in total intensity (Fig. <ref>). The disk morphology is consistent with that in the polarimetric image <cit.>. In particular, we confidently recovered two spiral arms, S1 and S2. S1 is the primary arm that has CO gas connected to the projected position of the companion <cit.>. To measure the positions of the spiral arms, we first needed to correct the viewing geometry. We deprojected each disk image to face-on views (i.e., the disk plane) adopting an inclination of 33.81^∘ and a position angle (PA) of 144.35^∘ <cit.>. To correct for disk flaring, we assumed that the disk scale height (h) follows h = 0.22 × r^1.04, where r is the radial separation in au <cit.>. Each disk image was then r^2-scaled to enhance the disk features at large radii. Finally, we transformed them into polar coordinates for the measurement of spiral arm locations. We determined the local maxima of the arm at each azimuthal angle in 1^∘ steps by performing Gaussian profile fitting (see Appendix <ref> for a detailed description).
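The per-azimuth peak measurement can be sketched as follows (a minimal illustration assuming the image has already been deprojected, flaring-corrected, r^2-scaled, and regridded onto a polar map; the radial search window and all names are ours):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amp, r0, sigma):
    return amp * np.exp(-0.5 * ((r - r0) / sigma) ** 2)

def trace_spiral(polar_map, radii, r_window):
    """Radial peak of the arm at each azimuth (rows: 1-degree azimuth steps)."""
    sel = (radii > r_window[0]) & (radii < r_window[1])
    peaks = []
    for row in polar_map:
        r_sel, f_sel = radii[sel], row[sel]
        p0 = [f_sel.max(), r_sel[np.argmax(f_sel)], 2.0]
        try:
            popt, _ = curve_fit(gaussian, r_sel, f_sel, p0=p0, maxfev=2000)
            peaks.append(popt[1])                 # local maximum of the spiral arm
        except RuntimeError:
            peaks.append(np.nan)                  # no reliable fit at this azimuth
    return np.array(peaks)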
The local maxima of spiral arm S1 are presented in Fig. <ref>, which describes the morphology of S1. An offset over a large range of 210^∘ – 260^∘ is visible between the two epochs of 2015 and 2019, and follows the rotational direction of the disk.
This offset also appears when simply comparing disk images between 2015 and 2019 (see Fig. <ref>). At locations ≲210^∘ we are limited by the signal-to-noise ratio (S/N) of the disk, where the spiral arm is barely detected. At locations ≳260^∘, the spiral arm starts to merge with the ring-like structure, which increases the uncertainty of the local maxima of the spiral arm. Because possible systematics such as the image misalignment caused by the instrument have been properly corrected during the image alignment (Appendix <ref>), we conclude that the offset is caused by the pattern motion of the S1 arm in 2015–2019.
Following <cit.>, we fitted fifth-degree polynomials to the spiral arm S1 in the two epochs and simultaneously obtained their morphological parameters and the pattern motion of the spiral arms. The fitting result is shown in Fig. <ref>. In principle, different parts of the spiral have different pattern speeds if driven by a companion on an eccentric orbit. However, our data do not permit a radius-dependent assessment, and a single-value pattern speed is measured as a compromise. The speed of the pattern motion for spiral arm S1 is 0.88^∘± 0.07^∘ yr^-1 in the counterclockwise direction, assuming the companion-driven scenario in which the pattern speed is a constant for S1. The uncertainty estimation is described in Appendix <ref>. The deprojection will affect the spiral location determination, and subsequently the spiral motion measurement. However, the disk flaring of HD 100453 only has a limited impact on the velocity of the spiral motion (see Appendix <ref>). Throughout the paper we present the velocity of the spiral motion based on the best-fit model (h = 0.22 × r^1.04) from <cit.> to correct for the disk flaring in the deprojection.
We examined the possibility of the gravitational instability (GI) scenario, in which each part of the arms rotates at its local Keplerian velocity. Based on the fitted morphological parameters of the S1 arm in 2015, we predicted the location of the S1 arm in 2019, adopting a stellar mass of 1.7 M_⊙ for the central star. The predicted locations of GI-induced spiral arms deviate from the observed arm locations in 2019 (Fig. <ref>). The local Keplerian motions at 25 to 35 au are 3.75° yr^-1 to 2.27° yr^-1, or 15.01° to 9.06° in 4 yr. The local Keplerian motions are too large to explain the observation (∼3.5^∘ in 4 years). We conclude that the spiral arm S1 is not triggered by gravitational instability. Together with MWC 758 <cit.> and SAO 206462 <cit.>, HD 100453 is the third spiral disk whose spirals disfavor a GI origin based on pattern motion measurements.
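The quoted Keplerian rates can be checked with a few lines of Python, assuming circular Keplerian motion around a 1.7 M_⊙ star and Kepler's third law in solar units:

import numpy as np

def keplerian_rate(r_au, m_star=1.7):
    period_yr = np.sqrt(r_au**3 / m_star)    # Kepler's third law (au, yr, M_sun)
    return 360.0 / period_yr                 # deg per year

for r in (25.0, 35.0):
    rate = keplerian_rate(r)
    print(f"r = {r:4.1f} au: {rate:.2f} deg/yr, {4 * rate:.1f} deg in 4 yr")
# -> about 3.75 deg/yr (15.0 deg) at 25 au and 2.27 deg/yr (9.1 deg) at 35 au,
#    far larger than the observed ~3.5 deg offset in 4 yr.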
§.§ Motion of the stellar companion
The companion HD 100453 B has over 15 years of astrometric data. We adopt astrometric data from <cit.>, <cit.>, and <cit.> (see Table <ref>). A linear fit to the PAs of the companion shows that the companion has an angular velocity of 0.384° ± 0.019° yr^-1 in the sky plane between 2003 and 2019 (Appendix <ref>). Based on the PAs only in 2015 and 2019, we also obtain an angular velocity of 0.40° ± 0.07° yr^-1 for the companion, which is consistent with our linear fit. Hence, we adopt the fitting result as the angular velocity of the companion because it contains more independent measurements. To obtain the companion motion in the disk plane, we deprojected the angular velocity of the companion from the sky plane to the disk plane. In the deprojection, we adopt the disk inclination, the disk position angle, and the companion position angle to be 33.81°, 144.35°, and 133.2°, respectively. We obtain the angular velocity of the companion in the disk plane to be 0.455° ± 0.023° yr^-1 between 2015 and 2019.
We also estimate the probability density distribution of the companion angular velocity in the disk plane (Appendix <ref>). We calculate the companion angular velocity based on the companion orbital parameters adopted from the orbit fitting results in <cit.>. The derived probability density distribution has a Gaussian profile; the companion has an angular velocity of 0.457^+0.023_-0.023° yr^-1 (1σ credible interval) in the disk plane in 2019. The angular velocity derived from the orbital parameters is consistent with the direct linear fit to the companion PAs. This consistent angular velocity and its Gaussian distribution are expected because the companion motion between 2003 and 2019 is well constrained by the astrometry data, whose measurement uncertainties follow Gaussian noise.
In general, the spiral pattern motion should be in the range of the slowest and fastest companion orbital frequency in the scenario of an eccentric perturber <cit.>. From the posterior probability distribution of the orbital parameters obtained by <cit.>, we derived the minimum and maximum values of the companion orbital motion.
Although the companion motion at its 2019 location is slower than the spiral motion measured in 2019, the maximum value of the companion orbital motion is still larger than the measured spiral motion (see Fig. <ref>). This suggests that a physical interaction (i.e., tidal interaction) between the companion and the disk can exist, as proposed by the numerical simulation in <cit.>.
Based on the consistency in motion measurements, we conclude that the known companion HD 100453 B drives the spiral arm S1. This is the first detection of a companion driving a spiral arm among protoplanetary disks. In light of our result, the previously observed CO gas extending from the S1 arm in <cit.> is also dynamically connected to the companion, rather than moving independently, and the apparently static connection arises from projection effects due to the relative inclination between the disk and the companion orbit.
§ DISCUSSION
§.§ Pattern motion of the spiral arm S2
HD 100453 was classified as a Group I disk that has an outer disk flaring <cit.>. The bottom of the S2 spiral arm shown in the NIR polarimetric image suggests that the disk thickness is nonnegligible and that the S2 arm is located on the near side of the disk <cit.>. NIR observations probe the scattered light from the disk surface. For a disk with flaring, the angle between the disk surface on the near side of the disk (i.e., S2) and the sky plane is larger than the disk inclination of 33.8°. We define the disk inclination to be the angle between the flat disk midplane and the sky plane. Large viewing angles (i.e., ≳30.0°) prevent us from restoring the correct face-on morphology via the deprojection <cit.>. Unlike S2, S1 is located on the far side of the disk, where the inclination of the disk surface is smaller than the disk inclination. Thus, it is more accurate to restore the S1 arm back to a face-on morphology than the S2 arm. Therefore, we did not provide a motion measurement for the S2 arm.
§.§ Other spiral-triggering mechanisms
Spiral features can also be induced by a flyby <cit.>. However, a recent search for potential stellar flybys with Gaia DR3 did not identify recent close-in on-sky flyby candidates around HD 100453 <cit.>. Thus, this scenario is unlikely to cause the spiral feature in this system.
The disk of HD 100453 seen in the NIR contains two shadows (Fig. <ref>) created by the inner disk <cit.>. <cit.> explored the possibility of shadow-triggered spirals via the pressure decrease <cit.>. However, spiral arms triggered by shadows should follow the local Keplerian motion, as GI-induced spirals do. Therefore, we excluded the shadows as the origin of the spiral features in HD 100453.
§.§ Feasibility of locating spiral-driving planets
In protoplanetary and transitional disks, the occurrence rate and orbital distribution of the embedded planet population remain to be established due to a limited number of confirmed proto-planets.
The current high-contrast imagers with low spectral resolution (i.e., R ∼ 100) at NIR wavelengths cannot easily discriminate between the scattered light from the dusty disk and that of the embedded planet <cit.>. Hα surveys for the signal of planetary accretion have mostly resulted in nondetections of planets <cit.>, possibly caused by high extinction or periodic accretion if a forming planet is present. In summary, current instruments with conventional techniques are inefficient in the search for planets embedded in disks.
Our dynamical motion analysis of the HD 100453 system validated the approach of mapping spiral arm motion to locate hidden giant planets in protoplanetary disks, first proposed by <cit.>. Although we only investigated a nonplanetary companion in this specific study, the pattern speed of companion-driven spirals depends on the location of the companion instead of its mass. Furthermore, the sensitivity of our motion measurement directly depends on the time span (see Eq. (<ref>)).
The uncertainty of the motion measurement decreases with the increase in the time span (t) between two epochs. Typical 1σ uncertainties for the motion measurements based on two epochs (if t=5 yr) of SPHERE observations in total intensity and polarized light are about 0.05° yr^-1 (Appendix <ref>) and 0.03° yr^-1 <cit.>, respectively. Therefore, spiral motions driven by a planet can be detected and distinguished from local Keplerian motions (GI scenario) within a feasible time of a few years.
From the posterior probability distribution of orbital parameters obtained by <cit.>, we derived the companion orbits that can dynamically drive the spiral arm (see Fig. <ref>). The corresponding probability distribution of orbital parameters is shown in Fig. <ref>. Our dynamical analysis opens a new and feasible window to probe the orbit distribution of planets in the spiral disks that currently are difficult to study via conventional techniques of direct imaging and spectroscopy. The measured spiral motion can determine the range of the planet eccentricity <cit.>. In combination with the planet mass estimated from the morphology of the spiral arms <cit.> or simply using mass upper limits from direct imaging, we can infer the formation and migration of planets at the early stage.
§ CONCLUSION
We present multi-epoch observations of the HD 100453 system and perform a dynamical analysis for the spiral motion in 4 yr. The measured pattern motion of the spiral arm S1 disfavors the GI origin. More importantly, the orbital motion of companion HD 100453 B can explain the spiral pattern motion in the scenario of the eccentric perturber. It is the first dynamical detection of a companion driving a spiral arm among protoplanetary disks.
Companion–disk interaction is a long-standing theory that could naturally explain the origin of spiral features in disks.
For the first time, our dynamical analyses directly confirm that the companion–disk interaction can indeed induce spiral arms in disks, supporting that it could also be the formation mechanism for other spiral systems without detected companions. Our dynamical detection also validates our method to be a feasible way of searching for and locating hidden spiral-arm-driving planets that are best targets for dedicated direct imaging explorations <cit.> with upcoming state-of-the-art high-contrast imagers.
We thank Dr. Faustine Cantalloube for the beneficial discussion about the low wind effect. The disk images of HD 100453 are based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 095.C-0389(A) and 0103.C-0847(A). We thank all the principal investigators and their collaborators who prepared and performed the observations with SPHERE. Without their efforts, we would not be able to build the master reference library to enable our RDI technique. B.B.R. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant PROTOPLANETS No. 101002188), and from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101103114. R.D. acknowledges financial support provided by the Natural Sciences and Engineering Research Council of Canada through a Discovery Grant, as well as the Alfred P. Sloan Foundation through a Sloan Research Fellowship. E.C. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme (ESCAPE, grant agreement No 101044152). A.V. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 757561). J.-F.G. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 823823 (DUSTBUSTERS) and from the ANR (Agence Nationale de la Recherche) of France under contract number ANR-16- CE31-0013 (Planet-Forming-Disks), and thanks the LABEX Lyon Institute of Origins (ANR-10-LABX-0066) for its financial support within the Plan France 2030 of the French government operated by the ANR. T.F. acknowledges supports from the National Key R&D Program of China No. 2017YFA0402600, from NSFC grants No. 11890692, 12133008, and 12221003, and from the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A04.
aa
§ SPHERE/IRDIS OBSERVATIONS AND DATA REDUCTION
SPHERE/IRDIS has performed multiple K-band observations of the HD 100453 system in 2015, 2016, and 2019. However, the observing conditions were extremely bad for the two observations in 2016 (see Table <ref>), resulting in a strong low wind effect <cit.> that affects the disk morphology. Therefore, we excluded the 2016 data from the dynamical analysis. Although IRDIS also has an H-band observation in 2016, it is essential to perform the motion measurement based on observations at the same wavelength because different wavelengths trace dust grains with different sizes, which are possibly located at different places in the disk. Thus, the K-band observations in 2015 and 2019 are the only available data for HD 100453 to perform the dynamical analysis.
All the observations used the apodized pupil Lyot coronagraph in its configuration <cit.>, with a mask diameter of 185 mas and a pixel scale of 12.25 mas. IRDIS in its dual-band imaging <cit.> mode produces simultaneous images at two nearby wavelengths (K1: 2.110 μm and K2: 2.251 μm). While K-band observations are affected by the thermal background emission, none of the observations contained sky background calibrations to correct for it. We therefore used DIsNMF to model and generate synthetic sky background images based on all the available sky background images in the SPHERE archive, similar to the technique described in <cit.>. Each observation was then processed using the [<https://github.com/avigan/SPHERE>, version 1.4.2] pipeline <cit.> to correct for the sky background based on our NMF sky model, flat field, and bad pixels. The pipeline then generated a calibrated and roughly aligned (offsets ≲1 pix) data cube for each observation.
In the K1 band, our DIsNMF approach can successfully model and then remove the thermal background. However, we did not cleanly remove all the thermal background emissions in K2 because of the stronger background variation at longer wavelengths (i.e., K2) and limited calibration images in the SPHERE archive for modeling. To avoid potential influence from the residual of the background in K2, we only use K1 data to measure the spiral motion in this work.
§.§ Image alignment
To determine the star center behind the coronagraph, SPHERE generates satellite spots on coronagraph images by introducing a 2D periodic modulation on the high-order deformable mirror <cit.>, obtaining star center images. SPHERE usually uses satellite spots in the first and last images of an observation to locate the star center behind the coronagraph. During the entire science observation of HD 100453, SPHERE relies on the differential tip-tilt sensor control to maintain the star at the same position behind the coronagraphic mask. However, the differential tip-tilt sensor loop runs at 1 Hz, so some residual jitter of the images can occur at a faster rate, thereby inducing a small shift (typically ≲1 pix).
The misalignment of images within each epoch of observation and between different observations has a direct impact on the motion measurement. We performed the fine alignment of all the science images in two steps: the alignment of star center images from different epochs, and the alignment of science images to their corresponding star center images. We found that the star center images from the three epochs were already properly aligned by the pipeline, with offsets of less than 0.05 pix, by measuring the intersection of the four satellite spots. Therefore, no additional alignment was required for the star center images from different epochs.
To align science images within each epoch of observation, we used the position of the companion.
The high signal-to-noise ratio (S/N∼588) of the companion enables the fine alignment of the science images after the pre-processing of the pipeline. In the star center image, the positions of the companion and the primary star are known.
Since we knew the field rotation between the star center image and a given science image, we rotated the star center image to create a reference position of the companion when the given science image was taken. The positions of the reference companion and the companion in the science image were then determined by 2D Gaussian fitting because the companion has no known disk. The derived position offset is the offset between the star center image and a given science image. Once we obtained all the offsets, we performed image shifts to create a finely aligned and calibrated science data cube for each epoch.
We examined the offset of the companion position in the two observations of 2019 after the companion alignment and post-processing described in Appendix <ref>. The obtained offset is less than 0.06 pixels, which is the residual offset in the star center images. In summary, our image alignment can accurately align all the science images to a common reference, thus avoiding false positives caused by the instrument in the motion measurement.
§.§ Bad frame exclusion
Bad frame exclusion does not alter the true morphology of the disk. The aim is to obtain a higher S/N of the disk by increasing the disk signal (i.e., including more images) and reducing the distortion from bad frames (i.e., reducing instrumental residuals). Failed adaptive optics (AO) corrections leave the host star outside the coronagraph; such frames should be excluded. Bad AO corrections result in strong stellar-light leakage around the inner working angle (IWA) and clear spider patterns. Such spider patterns were not easily removed by RDI, and hence left strong residuals in the disk image that altered the disk morphology. Therefore, we also excluded the images with strong spider patterns, mainly in the 2015 observation. In total, we excluded about 77%, 14%, and 19% of the science images for the observations in 2015, and on 07 and 08 April 2019, respectively. Thanks to the high disk surface brightness, we had enough disk signal to perform the dynamical analyses after the bad frame exclusion.
§ STELLAR EMISSION SUBTRACTION
We removed the stellar contribution using RDI-DIsNMF to map the disk. Although other techniques based on angular differential imaging <cit.> are common approaches to remove stellar point spread function (PSF), it usually produces nonphysical artifacts when applied to disks <cit.>. For example, ADI has the self-subtraction effect that lowers the throughput of the disk and more importantly alters the disk morphology. Because ADI builds the PSF reference based on the science data itself, it may contain some of the astrophysical signals in the PSF model. Unlike ADI, RDI builds the PSF references from the companion-free and disk-free reference images. Therefore, RDI naturally avoided the self-subtraction effect. A commonly used PSF reconstruction technique is principal component analysis <cit.>. In data processing, however, PCA removes the mean of the image, and thus creates nonphysical negative regions around a strong astrophysical signal (i.e., a bright disk), calling for forward modeling to properly recover these signals <cit.>. In comparison, NMF does not remove the mean of the image <cit.>, which can thus avoid creating negative regions around bright sources in data pre- and post-processing. Furthermore, the recent development of the NMF algorithm by <cit.> introduces the data imputation concept, which ignores the disk region to avoid the overfitting problem when reconstructing the PSF model.
§.§ Initial PSF Selection
We followed the method described in <cit.> to perform RDI. The key step in RDI is assembling a proper PSF reference library. We created the master PSF reference library by using all the public archival data in K1 taken with IRDIS under the same coronagraphic settings.
The pre-processing of all the archival data was performed using the pipeline.
Bad reference stars that contain astrophysical sources were excluded by the visual inspection of the residual images after the reductions of ADI and then RDI. After assembling a master reference library for K1 data, we down-selected 200 best-matched reference images for each science image in each observation of HD 100453. For each observation, we combined the down-selected reference images and formed a single library with nonredundant references. As a result, the final sizes of reference libraries were 2802, 2646, and 1816 for observations in April 2015, and 07 and 08 April 2019, respectively.
To remove the stellar PSF we used NMF to create components of the PSF model from the reference library. After that, the PSF model was then reconstructed for each science image using the data imputation in DIsNMF after masking the source region (i.e., the disk and the companion). Finally, the residual science cube after the PSF subtraction was derotated and mean combined to form the residual image in K1.
§.§ Final PSF Selection
The disk of HD 100453 is bright enough to affect the down-selection of the reference images using the mean square error described in <cit.>. Consequently, the selected reference images tend to have certain levels of wind-driven halo <cit.> that mimic bright and extended disk features. As a result, poorly matched PSF references lead to oversubtraction that slightly affects the disk morphology. Better-matched PSF references can be selected if the disk contribution is significantly reduced.
For the final PSF selection, we adopted an iterative process in which we first removed the disk obtained with RDI-DIsNMF from the science data, then re-selected a better matching library of PSF references with the disk-removed science images. Only the reference library that is selected for PSF modeling was updated during each iteration. The original science images remain unchanged in each RDI-DIsNMF subtraction.
For the HD 100453 exposures in this study, the disk-removed science images are sufficiently clear of disk signals to converge on matching PSF references after a maximum of five iterations. In Fig. <ref> we show the residual images of the disk-removed science data after the reduction of RDI-DIsNMF. No disk signal is left over, indicating a good recovery of the disk flux. Because we directly subtracted the disk image to obtain the disk-removed science images, the noise pattern was changed in the disk region (≲0.45″).
We performed a final RDI-DIsNMF subtraction to obtain the disk image for pattern motion analysis (see Fig. <ref>). Throughout the paper, the observation of the disk on 08 April 2019 is only used for uncertainty estimation (see Appendix <ref>) because the integration time of the second epoch observation in 2019 is shorter than the first one (see Table <ref>).
§ DETERMINING THE LOCATION OF THE SPIRAL ARM
To determine the local maxima of the spiral arm in polar coordinates, we performed two Gaussian profile fits with an additional constant, totaling seven free parameters. By doing so, we can simultaneously account for the existence of a ring-like structure near the IWA and that of a spiral. The additional constant was adopted in the model to account for the overall disk emission.
The regions with radii of ≲14 pixels and ≳35 pixels were masked out to avoid the noisy regions close to the IWA (∼8 pixels) and those without disk emission, respectively. At each azimuthal angle (in 1^∘ steps) in the polar images, we performed a fit to obtain the location of the spiral arm and present it in Figs. <ref> and <ref>. We visually inspected all fitting results to ensure the correctness of the fitting (i.e., that the data are reasonably represented by the model).
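A sketch of the per-azimuth fit is given below: two Gaussian profiles (ring plus spiral) and an additive constant, i.e., seven free parameters. The initial guesses and the exact mask limits are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def two_gauss(r, a1, mu1, s1, a2, mu2, s2, c):
    return (a1 * np.exp(-0.5 * ((r - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((r - mu2) / s2) ** 2) + c)

def spiral_radius(r_pix, profile, p0=(1.0, 18.0, 2.0, 1.0, 27.0, 3.0, 0.0)):
    # Fit one azimuthal column of the polar image; return the spiral-arm radius.
    keep = (r_pix > 14) & (r_pix < 35)                # mask the IWA and the empty outer region
    popt, _ = curve_fit(two_gauss, r_pix[keep], profile[keep], p0=p0, maxfev=10000)
    return popt[4]                                    # mu2: centroid of the spiral component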
§ UNCERTAINTY ESTIMATION
Precise motion measurements require accurate recovery of disk morphology. While RDI-DIsNMF can avoid the self-subtraction effect and it is theoretically expected to mitigate the overfitting of stellar PSFs, it may still slightly alter the disk morphology during PSF modeling since we cannot guarantee a perfect match of the speckles between a target image and its corresponding selected PSF references.
To account for this potential PSF mismatch effect, it is necessary to introduce additional uncertainty in spiral motion measurements. With the two K1 observations in 2019 obtained on different nights, the temporal separation is too small (1 day apart) to obtain spiral motion measurement. However, these two observations in 2019 are ideal in offering a unique opportunity to examine the unknown uncertainties in our dynamical analysis, especially in quantifying the change in disk morphology, and thus its impact on spiral motion caused by RDI-DIsNMF.
By measuring the motion of the spiral arms in the two observations in 2019, we can estimate the motion caused by our post-processing method instead of the real spiral motion. Given the time span of ∼1 day, the real spiral motion is ∼0°. We performed the identical motion measurement procedure as in Sect. <ref>, and obtained a motion of 0.02° for the S1 arm in the two 2019 observations shown in Fig. <ref>. Nevertheless, it is possible that the selected PSFs do not necessarily return such uncertainties for all the epochs studied here. Therefore, we conservatively consider an additional uncertainty of σ_RDI = 0.1° for the RDI-DIsNMF method.
In our analysis, the total uncertainty in our pattern speed analysis is
σ =√(σ^2_fit +σ^2_north +σ^2_RDI) t^-1,
where σ_ fit, σ_ north, and σ_ RDI are uncertainties caused by the measurement of the spiral locations, true north uncertainty of SPHERE, and our post-processing method, respectively. The time span between two epochs is represented by t, which is 4.0 years.
The uncertainty of the spiral S1 locations returns a fitting uncertainty (σ_ fit) of 0.192°. We adopt the true north uncertainty of SPHERE to be 0.08° in all epochs <cit.>. The uncertainty caused by the post-processing method is estimated to be 0.1° per epoch using the 2019 observations. For the motion measurement on two epochs, σ_ north is √(2 × (0.08°)^2) = 0.113° and σ_ RDI is √(2 × (0.1°)^2) = 0.142°.
Based on Equation (<ref>), the final 1σ uncertainty on the pattern speed of the S1 arm is 0.066° yr^-1.
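As a quick worked check of the total-uncertainty equation above (all angles in degrees):

import numpy as np
sigma_fit, sigma_north, sigma_rdi, dt = 0.192, 0.113, 0.142, 4.0
sigma = np.sqrt(sigma_fit**2 + sigma_north**2 + sigma_rdi**2) / dt
print(f"{sigma:.3f} deg/yr")    # ~0.066 deg/yr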
§ EFFECT OF DISK FLARING ON SPIRAL MOTION MEASUREMENT
Throughout the paper, we present the velocity of the spiral motion based on the best-fit model (h = 0.22 × r^1.04) from <cit.> to correct for the disk flaring in the deprojection. The deprojection will affect the spiral location determination, and subsequently the spiral motion measurement. To study the effect of the disk flaring on spiral motion measurement, we adopted a reasonable range of parameters for correcting the disk flaring and performed new motion measurements, as described in Sect. <ref>. The effect of the disk flaring on spiral motion measurement is shown in Fig. <ref>.
In the case of HD 100453, the velocity of the spiral motion decreases with the increase in the disk flaring, ranging from 1.0° yr^-1 to 0.5° yr^-1. This velocity range still favors the companion-driven scenario and disfavors the GI scenario. Given that different disk flarings only have a minor impact on the spiral motion and do not change our conclusion, we only present the motion measurement based on the best-fit model of disk flaring from <cit.>.
§ COMPANION ORBITAL FITTING
We performed a linear fit to the position angles of HD 100453 B from 2003 to 2019. The astrometric data are listed in Table <ref> and were adopted from <cit.>, <cit.>, and <cit.>. The orbital period of the companion is about 800 years, which is significantly longer than the time span of our astrometric data. For the ∼20-year temporal separation studied here, the linear fit is therefore sufficient for obtaining the angular velocity of the companion. Figure <ref> shows the position angles of HD 100453 B and our linear fitting result. The slope corresponds to the measured angular velocity of the companion in the sky plane, which is 0.384°±0.019° yr^-1 in the counterclockwise direction. Using only the two astrometry data sets in 2015 and 2019, we obtained an angular velocity of 0.40°±0.07° yr^-1 in the sky plane, which validates the choice of a linear fit.
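A minimal sketch of this weighted linear fit is shown below; the epochs, position angles, and uncertainties are placeholders, not the values from the table.

import numpy as np

epochs = np.array([2003.4, 2015.3, 2016.2, 2017.2, 2019.3])   # hypothetical epochs
pa_deg = np.array([126.9, 131.5, 131.9, 132.2, 133.2])        # hypothetical PAs (deg)
pa_err = np.array([0.5, 0.1, 0.1, 0.1, 0.1])                  # hypothetical 1-sigma errors

# Weighted least-squares line; the slope is the sky-plane angular velocity.
slope, intercept = np.polyfit(epochs, pa_deg, 1, w=1.0 / pa_err)
print(f"angular velocity = {slope:.3f} deg/yr")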
§ COMPANION ORBITAL MOTION
The companion orbit can be described by the orbital elements. The radial separation (r) between the companion and the primary star is
r( ν) =a( 1-e^2) /( 1+ecosν) ,
where ν, a, and e are the true anomaly, semimajor axis, and eccentricity, respectively.
We introduced u to represent the angle between the radial direction of the companion (r) and the intersection between the orbit and the sky planes. So the angle u is
u=π -( ν +ω),
where ω is the argument of periastron.
The projected radial separation of the companion in the sky plane (r_ proj) is
r_proj( ν) = [ a( 1-e^2) /( 1+ecosν) ] √(cos^2 u+sin^2 ucos^2 i) ,
where i is the inclination between the orbital plane and the sky plane. Equation (<ref>) shows the projection of the radial separation in Equation (<ref>) onto the sky plane. For a given combination of orbital elements and the projected radial separation (r_ proj), we can obtain their corresponding true anomaly (ν) by solving Equation (<ref>) numerically.
The specific angular momentum
h⃗ =r⃗ ×ṙ⃗̇,
or
h=rV_⊥,
where V_⊥ is the velocity of the companion in the orbit and the symbol ⊥ denotes the direction that is perpendicular to the outward radial from the primary to the companion. The orbit equation that defines the separation between the primary and the companion is
r=h^2/( μ( 1+ecosν)) ,
where h is the specific angular momentum. Substituting Equation (<ref>) into Equation (<ref>), we obtain
h=√(μ a( 1-e^2) ).
The definition of angular velocity is
u̇ =V_⊥/r,
from which we can obtain the angular velocity of the companion at a given position in the orbital plane,
u̇ =√(μ( 1+ecosν) /r^3) ,
where μ is the gravitational parameter. In our case, the gravitational parameter is a constant as
μ =G( m_1+m_2) ,
where G, m_1, and m_2 are the gravitational constant, the mass of the primary (1.7 M_⊙), and the mass of the companion (0.2 M_⊙), respectively.
We use V_proj to represent the projection of V_⊥ in the sky plane,
V_proj=V_⊥√(sin^2 u+cos^2 ucos^2 i) .
The component of the velocity in the sky plane that is perpendicular to the projected radial separation (r_proj), denoted V_⊥,proj, is
V_⊥,proj=V_projcos i/( √(sin^2 u+cos^2 ucos^2 i)√(cos^2 u+sin^2 ucos^2 i)) .
By substituting Equation (<ref>) into Equation (<ref>), we obtain
V_⊥,proj=V_⊥cos i/√(cos^2 u+sin^2 ucos^2 i) .
The projected angular velocity of the companion in the sky plane is
u̇_proj =V_⊥,proj/r_proj,
which can be rewritten as
u̇_proj =u̇cos i/( cos^2 u+sin^2 ucos^2 i)
by substituting Equation (<ref>), Equation (<ref>), and Equation (<ref>) into Equation (<ref>). Equation (<ref>) shows the projection of the angular velocity of the companion onto the sky plane.
We adopted the orbits of the companion from <cit.>, which were derived from the astrometric fit to the companion positions listed in Table <ref>. For a given combination of the orbital elements, we first use Equation (<ref>) to numerically derive the true anomaly of the companion for a given orbit, adopting the separation r_proj of 1.046″ in 2019. Based on Equation (<ref>), Equation (<ref>), Equation (<ref>), and Equation (<ref>), we can derive the angular velocity of the companion in the sky plane using only the orbital elements (i.e., a,e,ν,ω,i). Finally, we deprojected the angular velocity of the companion from the sky plane to the disk plane, adopting the disk inclination, the disk position angle, and the companion position angle as 33.81°, 144.35°, and 133.2°, respectively. The derived current angular velocity of the companion is 0.457^+0.023_-0.023° yr^-1. In Fig. <ref>, we present the calculated maximum velocity of the companion in the disk plane, which is 0.488^+0.355_-0.154° yr^-1. The uncertainties are the (16th, 84th) percentiles in Bayesian statistics.
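For a single orbital solution, the chain described above can be sketched in Python as follows. The inputs are assumed to be in au, years, and solar masses, the root bracketing covers only one branch of the orbit, and the total mass value is an illustrative assumption.

import numpy as np
from scipy.optimize import brentq

def proj_sep(nu, a, e, omega, inc):
    u = np.pi - (nu + omega)
    r = a * (1 - e**2) / (1 + e * np.cos(nu))
    return r * np.sqrt(np.cos(u)**2 + np.sin(u)**2 * np.cos(inc)**2)

def sky_plane_rate(a, e, omega, inc, r_proj_obs, m_total=1.9):
    mu = 4.0 * np.pi**2 * m_total                     # G(m1 + m2) in au^3 yr^-2
    # True anomaly at the observed projected separation (single-branch example).
    nu = brentq(lambda x: proj_sep(x, a, e, omega, inc) - r_proj_obs,
                1e-3, np.pi - 1e-3)
    r = a * (1 - e**2) / (1 + e * np.cos(nu))
    u = np.pi - (nu + omega)
    udot = np.sqrt(mu * (1 + e * np.cos(nu)) / r**3)  # rad/yr in the orbital plane
    udot_proj = udot * np.cos(inc) / (np.cos(u)**2 + np.sin(u)**2 * np.cos(inc)**2)
    return np.degrees(udot_proj)                      # deg/yr in the sky plane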
We also validated the posterior probability distribution of orbital parameters by using <cit.> and the same astrometric data shown in Table <ref>. Based on the new orbital parameters, we obtained the current angular velocity and the maximum velocity of the companion to be 0.419^+0.035_-0.031° yr^-1 and 0.560^+0.691_-0.233° yr^-1, respectively, which is consistent with the results derived from <cit.>.
§ ORBITAL PARAMETERS OF THE SPIRAL-DRIVING COMPANION
In general, the spiral pattern motion should be in the range of the slowest and fastest companion orbital frequency in the scenario of an eccentric perturber <cit.>. From the posterior probability distribution of orbital parameters obtained by <cit.>, we derived the orbital parameters that satisfy Eq. 12 in <cit.>. In this case, the maximum orbital velocity of the companion is greater than the minimum spiral motion (0.54° yr^-1). The corresponding distribution of orbital parameters is shown in Fig. <ref>.
|
http://arxiv.org/abs/2306.02677v1
|
20230605081144
|
A Privacy-Preserving Federated Learning Approach for Kernel methods
|
[
"Anika Hannemann",
"Ali Burak Ünal",
"Arjhun Swaminathan",
"Erik Buchmann",
"Mete Akgün"
] |
cs.LG
|
[
"cs.LG",
"cs.CR",
"I.2; I.2; K.6.5; E.3"
] |
[
Wei Hu
July 31, 2023
=================
It is challenging to implement Kernel methods if the data sources are distributed and cannot be joined at a trusted third party for privacy reasons. It is even more challenging if the use case rules out privacy-preserving approaches that introduce noise.
An example of such a use case is machine learning on clinical data. To realize exact privacy-preserving computation of kernel methods, we propose FLAKE, a Federated Learning Approach for KErnel methods on horizontally distributed data. With FLAKE, the data sources mask their data so that a centralized instance can compute a Gram matrix without compromising privacy.
The Gram matrix allows many kernel matrices to be calculated, which can be used to train kernel-based machine learning algorithms such as Support Vector Machines. We prove that FLAKE prevents an adversary from learning the input data or the number of input features under a semi-honest threat model. Experiments on clinical and synthetic data confirm that FLAKE outperforms comparable methods in accuracy and efficiency. The time needed to mask the data and to compute the Gram matrix is several orders of magnitude less than the time needed to train a Support Vector Machine. Thus, FLAKE can be applied to many use cases.
§ INTRODUCTION
Kernel methods are a prominent class of machine learning algorithms. However, in many real-world scenarios, kernel methods such as Support Vector Machines (SVM) cannot be readily applied, because the data sources are inherently distributed, but the data is private and cannot be shared freely.
Consider a machine learning scenario, where a Kernel method on medical data is to be used to develop effective treatments, or to identify risk factors for certain diseases.
The input data is collected from multiple hospitals, and it carries sensible medical information that must be kept private. In this scenario it is impossible to apply noise, because neither the patient nor the physician can accept stochastic results. The delay due to processing strong cryptography on a large data set in multiple rounds of a Secure-Multiparty Computation Protocol is also unacceptable.
Existing work in the fields of Secure Multiparty Computation <cit.> and Privacy-Aware Federated Learning <cit.> can be categorized into three approaches based on (1) encryption, (2) differential privacy, or (3) randomized masking <cit.>. The first two either apply strong cryptography or add noise to private data, which is a severe restriction for many use cases.
In this paper, we focus on the third approach using randomized masking.
We present FLAKE, our Federated Learning Approach for KErnel methods. FLAKE computes the Gram matrix over distributed data sources that store horizontally partitioned data. The Gram matrix allows various kernel matrices to be computed and kernel-based machine learning algorithms to be trained as if the training took place on centralized data. Examples of such algorithms include Support Vector Machines, Gaussian processes, kernel k-means, and more. To ensure privacy, FLAKE masks the input data at the sources. FLAKE ensures that the resulting Gram matrix is exact. In order to update the Gram matrix, only a fraction of the values needs to be re-calculated. Thus, inference and updates are inexpensive operations.
We make three contributions:
* We introduce the FLAKE protocol, which allows a function party to privately compute a Gram matrix on masked input data from multiple input parties.
* We prove that both the input data and the number of features is kept private, unless function party and input parties collude and share unmasked data.
* We evaluate FLAKE by experiments with medical and synthetic data.
Our formal analysis and our experiments confirm that FLAKE has the potential to open up new fields of application for kernel-based methods on horizontally partitioned data that must be kept private but must be analyzed with an exact approach.
Paper structure:
Section <ref> reviews related work, followed by a description of FLAKE in Section <ref>. Section <ref> analyzes the privacy properties of the protocol. Section <ref> contains the experimental evaluation.
Finally, Section <ref> concludes.
§ RELATED WORK
§.§ Kernel-based Methods
Kernel-based machine learning algorithms have a well-established mathematical background. They are among the well-performing machine learning algorithms and are widely utilized in various applications <cit.>. They can learn non-linear patterns in the data efficiently thanks to the kernel trick: the data is represented by a set of pairwise similarity comparisons, the kernel values, instead of explicitly mapping them into higher dimensions, where linear classification can be done. To compute these kernel values, one can use several different kernel functions such as linear, polynomial, and radial basis function (RBF). Both polynomial and RBF kernels can be computed by using the kernel matrix of linear kernel, which is the Gram matrix. The Gram matrix is a positive semi-definite matrix and its entries indicate the dot product of the corresponding samples' feature vectors.
Therefore, we can formulate both kernels such that they are computable by using the entries of the Gram matrix. For instance, the polynomial kernel can be written as k(x,y) = (x^Ty + v)^p, where v ≥ 0 is a trade-off parameter and p ∈ℕ is the degree of the polynomial. Similarly, the RBF kernel can be formulated as k(x,y) = exp(-(x^Tx - 2 x^Ty + y^Ty)/(2 σ^2)), where σ > 0 is the similarity adjustment parameter. In FLAKE, we will benefit from this observation to compute the desired kernel matrices from the Gram matrix.
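The observation is easy to state in code: given only the Gram matrix G = XX^T, both kernel matrices follow element-wise. A minimal numpy sketch, where the parameter values are examples:

import numpy as np

def poly_kernel(G, v=1.0, p=3):
    return (G + v) ** p

def rbf_kernel(G, sigma=1.0):
    d = np.diag(G)                                   # squared norms x_i^T x_i
    sq_dist = d[:, None] - 2.0 * G + d[None, :]      # ||x_i - x_j||^2 from G alone
    return np.exp(-sq_dist / (2.0 * sigma**2))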
§.§ Federated Learning
Introduced by <cit.>, Federated Learning (FL) allows users to reap the benefits of modeling on rich yet sensitive data stored on distributed nodes. In conventional machine learning, a model ℳ is trained on the centralized data 𝒟_cent. However, due to privacy concerns, the data is not allowed to leave the nodes. FL addresses this problem. Participating nodes 𝒩_1, ..., 𝒩_n in FL aim to collaboratively train the model ℳ without revealing their data to other nodes. In FL, every node 𝒩_i trains a local model ℳ_i on its respective data set 𝒟_i and subsequently shares the model parameters with a central server. The central server then aggregates the received model parameters to obtain a global model ℳ_fed with an accuracy of acc_fed. As more data is collected, the process is repeated, with each node updating its local model and forwarding the updated parameters to the central server. Thus, the data does not leave its origin at any time during the computation. At some point in the iteration of FL, if | acc_fed - acc_cent|≤δ, where acc_cent is the accuracy of the model trained on the centralized data, then the Federated Learning framework is said to have δ-accuracy loss. The goal in FL is to have less accuracy loss while maintaining efficiency and the data's privacy.
The privacy of the aggregated models can be ensured in different ways.
Approaches based on encryption (1) like homomorphic encryption (HE) <cit.> aim to protect the privacy of aggregated models by encrypting individual models, but HE is computationally heavy and limited in functionality. Another cryptographic approach is secure multi-party computation (SMC) <cit.>, which allows multiple parties to jointly compute on private data without revealing it, but SMC still requires significant execution time due to communication overhead.
FL studies utilizing methods based on differential privacy (2) (DP) protect the privacy of the aggregated model by adding noise to the individual models, making it impossible to restore the original model or to infer information about a data point's membership. However, this usually involves a cutback in accuracy <cit.>.
The randomized masking approach (3) for FL was used by <cit.> who propose a geometric perturbation approach to preserve data privacy in classification tasks by hiding content while maintaining dot product and Euclidean distance relationships. To provide even stronger security, <cit.> utilize a random linear transformation scheme that requires the data owner to send perturbed data to the service provider for training SVM classifiers. Lin also applies perturbation for clustering tasks using a randomized kernel matrix to hide dot product and distance information <cit.>. Another randomization technique using Bloom filters enables outsourcing of mining association rules while protecting business intelligence and customer privacy, but only supports approximate reconstruction of mined frequent item sets by the data owner <cit.>. <cit.> introduce random kernels where the original data gets transformed using random linear transformation. However, due to the nature of approximation and introduction of noise, they all suffer from performance loss to provide privacy.
<cit.> provides an exact protocol and is, therefore, the closest study to our approach. Here, the data sources first have to communicate with each other to mask their data. Then they send these masked samples to the cloud so that it can compute the desired kernel-based machine learning algorithm. However, due to the encoding technique utilized in ESCAPED, one has to run the protocol from scratch whenever there is new data in any party that needs to be integrated into the model or a new party becomes involved in the computation.
§ FLAKE
This section explains FLAKE, our privacy-aware Federated Learning Approach for KErnel methods.
§.§ Scenario Definition
We assume a multi-party scenario consisting of multiple input parties (Alice, Bob, Charlie for simplicity) and one function party. Alice, Bob and Charlie hold sensitive data that is horizontally partitioned, i.e., each input party stores the same schema with different training data. The function party performs Federated Learning iteratively on a (possibly large) set of input-data chunks.
We consider a fully untrusted setting where the input data must not leave their origin. Formally, we assume an arbitrary subset of semi-honest input parties and a semi-honest function party, where no party colludes with another one. Note that this leaves aside extreme data distributions or all-zero cases where properties of the training data of one or more input parties can be guessed, or where only one input party exists.
Therefore, FLAKE needs to deal with four requirements:
Privacy: The function party or an input party cannot learn the data of another input party, and
the number of features is kept private from the function party.
Accuracy: The accuracy of the federated model must be as good as that of the centralized one.
Updatability: It must be possible to update the model with new data.
Efficiency: Communication costs and execution time must be feasible for our scenario.
§.§ The FLAKE Protocol
FLAKE computes the Gram matrix of samples from different input parties to enable the training and testing of kernel-based machine learning algorithms. This takes place in three stages: Distribution of Seed, Masking and Training, and Inference and Updating.
Distribution of Seed
FLAKE relies on a Public Key Infrastructure, which delivers each input party the public signing keys for all other input parties. To initiate the process, one input party is randomly selected as the leader and generates a random seed. The leader then shares this seed with the other input parties using public-key encryption and digital signatures. The function party is a natural choice for the task of the aggregator, which transmits encrypted messages between input parties, but cannot decrypt or modify these messages. We assume a trusted third party for the distribution of public keys. This is a common assumption in frameworks for privacy-preserving federated learning <cit.>.
Masking and Training
The objective of this stage is to let the function party compute a Gram matrix without learning the data from the input parties (Requirement Privacy).
The Gram matrix G of the data matrices A, B, C provided by Alice, Bob, and Charlie is the matrix of all possible inner products AB^T, AC^T, CA^T,....
For better understanding, we introduce the private calculation of AB^T given A ∈ℝ^n_A× f and B ∈ℝ^n_B × f where f > 1. The following protocol reveals AB^T to the function party while hiding the input data and the number of features:
First, Alice and Bob calculate a random full-rank matrix N ∈ℝ^k × f for some k> f, based on a shared seed. Throughout all input parties and training iterations, N remains constant. Since rank(N)=f, there exists a non-unique matrix L∈ℝ^f × k such that LN = I_f× f, which can be computed using the singular value decomposition (SVD) of N; the canonical choice is the Moore-Penrose pseudoinverse. SVD allows us to write N as N=USV^T with U, V being orthogonal matrices and S being a diagonal matrix. A left inverse of N can then be determined from the SVD as L=N^+=VS^+U^T. Here, S^+ can be derived by transposing S and taking the multiplicative inverse of every nonzero entry.
Now, Alice computes an independent left inverse L_A such that L_AN=I, and Bob computes L_B such that L_BN=I. Then, Alice masks her data as A'=AL_A(NN^T)^1/2∈ℝ^n_A × k, while Bob masks his data accordingly as B'=BL_B(NN^T)^1/2∈ℝ^n_B × k.
Figure <ref> illustrates this.
A' and B' are forwarded to the function party; this reveals only n_A and n_B, respectively, and the Gram matrix of A and B once A'B'^T is computed. The function party computes the dot product AB^T, as shown in Figure <ref>. Then, the function party can compute the desired kernel matrix using the Gram matrix and perform training and testing of the designated kernel-based machine learning algorithm. The remaining entries of the Gram matrix are masked analogously. When dealing with more than three parties, the Gram matrix has to be extended correspondingly.
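A toy end-to-end sketch of the masking step is given below, assuming the shared seed has already been distributed and using numpy only; the dimensions, seeds, and the particular construction of a non-unique left inverse are illustrative. The final assertion is the function party's view: it recovers AB^T exactly without ever seeing A or B.

import numpy as np

def sqrtm_sym(S):
    # Symmetric matrix square root via eigendecomposition.
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

def random_left_inverse(N, rng):
    # One non-unique left inverse: the pseudoinverse plus a random term that
    # vanishes when multiplied by N, so each party's L_P differs.
    L = np.linalg.pinv(N)
    Z = rng.standard_normal((N.shape[1], N.shape[0]))
    return L + Z @ (np.eye(N.shape[0]) - N @ L)

f, k, n_a, n_b = 5, 8, 20, 30
N = np.random.default_rng(42).standard_normal((k, f))       # common mask from the shared seed
root = sqrtm_sym(N @ N.T)                                    # (N N^T)^(1/2)

A = np.random.default_rng(1).standard_normal((n_a, f))       # Alice's private data
B = np.random.default_rng(2).standard_normal((n_b, f))       # Bob's private data
A_masked = A @ random_left_inverse(N, np.random.default_rng(11)) @ root
B_masked = B @ random_left_inverse(N, np.random.default_rng(22)) @ root

assert np.allclose(A_masked @ B_masked.T, A @ B.T)           # function party's view: AB^T is exact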
Inference and Updating
To integrate new data without having to rebuild the model from scratch (Requirement Updatability), FLAKE provides a protocol for inference and updating the Gram matrix.
We can distinguish two cases: First, one of the input parties may have received new input data. Second, a new input party shall be integrated into the computation.
For simplicity, we again explain our protocol with three parties Alice, Bob and Charlie with their respective data sets A, B, C.
Assume C has new data X that must be integrated into the Gram matrix shown in Table <ref>. X is the data set to be used for updating the model. To extend the Gram matrix with the new values from C, the function party only needs to have the entries in the dashed rectangles. Party C uses the aforementioned masking and sending approaches for this purpose.
Now assume that a new input party needs to be added. In this case, the function party must calculate the values in the continuous rectangles in Table <ref>. The remaining new entries can be computed locally by C. In both cases, updating the Gram matrix means that the function party has to calculate only a small set of new values. The vast majority of values need to be calculated just once, and a large share of the calculation effort remains at the input parties.
Note that X can be also a test data set.
When a party wants to leave the consortium, the function party deletes all random components coming from this party and all Gram matrix entries that were calculated using these random components. This is important for compliance with legal regulations such as the General Data Protection Regulation (GDPR) <cit.>. It can be seen as an application of machine unlearning. In current FL methods, it is unclear and difficult how to eliminate a party's contribution from the collaboratively trained ML model.
§ ANALYSIS OF PRIVACY PROPERTIES
§.§ Privacy Definition
We consider the semi-honest (or honest-but-curious) adversary model.
In a multi-party scenario, a semi-honest adversary <cit.> corrupts an arbitrary subset of the parties involved. The corrupted parties follow the multi-party protocol as specified, i.e., the output of the protocol is correct. The corrupted parties try to learn private data from the messages they receive from uncorrupted parties. At the end of the protocol, the corrupted parties are allowed to share their information.
FLAKE consists of a function party and a number of input parties. From Requirement Privacy it follows that FLAKE needs to ensure two privacy properties: (i) the data of uncorrupted input parties must be kept private from any corrupted input party or the function party, and (ii) a corrupted function party must not be able to learn the number of features.
If the function party and all input parties operate honestly, privacy properties (i) and (ii) are ensured. If all input parties have been corrupted by a semi-honest adversary, privacy cannot be ensured.
Between these extreme cases, we distinguish three cases for further analyses:
(1) A subset of the input parties is corrupted by a semi-honest adversary.
(2) The function party is corrupted by a semi-honest adversary.
(3) The function and a subset of input parties are corrupted by a semi-honest adversary.
Recall that we do not consider extreme scenarios. In particular, we exclude data distributions where the number of features or the training data of one or more input parties can be guessed, and protocols with only one input party. However, to make guessing harder, the input parties generate a unique matrix L in each iteration. Therefore, the function party cannot determine whether an input party updated its data in a subsequent iteration. Also, all-zero rows are not allowed; these are usually discarded as part of preprocessing anyway.
§.§ Privacy Analysis
Before we begin analysing the privacy of the protocol, we shall establish its correctness, which is unaffected by the existence of a semi-honest adversary.
Without loss of generality, we assume there are two input parties Alice and Bob with individual left inverses L_A and L_B of a common mask matrix N, whose outputs are A'=AL_A(NN^T)^1/2 and B'=BL_B(NN^T)^1/2. Then, the correctness of the protocol follows as below.
A'B'^T =AL_A(NN^T)^1/2(BL_B(NN^T)^1/2)^T,
=AL_A(NN^T)^1/2(NN^T)^1/2L_B^TB^T,
=AL_A(NN^T)L_B^TB^T,
=A(L_AN)(L_BN)^TB^T,
=AB^T = (BA^T)^T.
Analogously, correctness follows for AA^T and BB^T.
We analyze Case (1) first. Since the input parties already know the number of features, we only have to prove Property (i), i.e.,
that a corrupted input party cannot learn the data of uncorrupted input parties.
FLAKE is secure against a semi-honest adversary who corrupts a subset of the input parties.
Let S_U be the set of all input parties involved in the computation. While executing the FLAKE protocol, an input party P ∈ S_U has access only to the common mask N, the common seed used to generate N, and the left inverse L_P of N generated by P. At no point in the FLAKE protocol does the input party P get either the masked data of other input parties or the Gram matrix computed using the masked data of all input parties. Thus, a semi-honest adversary corrupting a subset of input parties S_C ⊂ S_U cannot learn the data of the non-corrupted input parties S_H ⊂ S_U, where S_C ∩ S_H = ∅.
FLAKE is, therefore, secure against the semi-honest adversary corrupting a subset of input parties. Because a semi-honest adversary follows the protocol, the data provided by the corrupted input parties do not affect the result of the computation.
Regarding Case (2), we need to prove that FLAKE does not allow a semi-honest function party to learn (i) input data nor (ii) the number of features.
FLAKE is secure against a semi-honest adversary who corrupts the function party.
A semi-honest function party is only the receiver of the masked data from the input parties, and follows the protocol as intended. Without loss of generality, let there be two input parties Alice and Bob with input data A ∈ℝ^n_A × f and B ∈ℝ^n_B × f, respectively, where n_x is the number of samples in the corresponding party and f is the number of features. The semi-honest function party receives the masked input matrices of them, which are A'=AL_A(NN^T)^1/2∈ℝ^n_A × k and B'=BL_B(NN^T)^1/2∈ℝ^n_B × k where k > f. Then, it computes A'B'^T = AB^T ∈ℝ^n_A × n_B, A'A'^T = AA^T ∈ℝ^n_A × n_A and B'B'^T = BB^T ∈ℝ^n_B × n_B. The data that the function party has access to then includes
(a) A' and analogously, B'.
(b) AB^T=(BA^T)^T, AA^T and analogously BB^T.
Regarding (a), it is trivial that A' does not reveal the number of features of A. We now show that A' is not produced by a unique matrix A. Given an orthogonal matrix O ∈ℝ^f× f with f>1, for Ã=AO and L_Ã=O^TL_A, we have A'=ÃL_Ã(NN^T)^1/2. Further, since we require that no sample consists entirely of zeros, the function party cannot deduce anything about A from A'.
Regarding (b), the matrices that produce these Gram matrices are not unique, since for any orthogonal matrix O ∈ℝ^f× f where f>1, labeling Ã=AO and B̃=BO, we have
ÃÃ^T=AA^T, B̃B̃^T=BB^T, ÃB̃^T=AB^T.
In consequence, the function party only learns the singular values and singular vectors of the matrices, i.e., it can find U and S from the singular value decomposition A=USV^T by eigen-decomposing AA^T. However, these values are insufficient to solve for A, since we can generate a countless number of different orthogonal matrices <cit.>. The function party learns neither (i) the input data nor (ii) the number of features.
Although the function party obtains the Gram matrix, it cannot deduce the samples used to compute this Gram matrix, which was shown by <cit.>. Details can be found in the supplementary material.
Without loss of generality, let there be two input parties Alice and Bob with input data A ∈ℝ^n_A × f and B ∈ℝ^n_B × f, respectively, where n_x is the number of samples in the corresponding party and f is the number of features. The semi-honest function party receives their masked input matrices, which are A'=AL_A(NN^T)^1/2∈ℝ^n_A × k and B'=BL_B(NN^T)^1/2∈ℝ^n_B × k where k > f. Then, it computes A'B'^T = AB^T ∈ℝ^n_A × n_B, A'A'^T = AA^T ∈ℝ^n_A × n_A and B'B'^T = BB^T ∈ℝ^n_B × n_B. At no point in this computation is the number of features f revealed to the semi-honest function party.
Even though the number of features of input data A and B is not revealed and cannot be learned by the semi-honest function party, let us assume that it has the knowledge of f only and no other side information. Under these circumstances, the semi-honest function party still cannot deduce unique matrices satisfying AA^T, BB^T and AB^T. Let O ∈ℝ^f× f be an orthogonal matrix where f>1. Then, we can generate Ã=AO and B̃=BO such that:
ÃÃ^T=AA^T, B̃B̃^T=BB^T, ÃB̃^T=AB^T.
Considering that we can generate a countless number of different orthogonal matrices <cit.>, which makes brute force infeasible, even if the number of features is somehow learned, the semi-honest function party cannot obtain the input matrices A and B.
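The ambiguity argued above is easy to verify numerically: any orthogonal O applied to both inputs leaves all three Gram blocks unchanged. A short numpy illustration with arbitrary example dimensions:

import numpy as np

rng = np.random.default_rng(0)
f = 6
A, B = rng.standard_normal((10, f)), rng.standard_normal((12, f))
O, _ = np.linalg.qr(rng.standard_normal((f, f)))    # random orthogonal matrix
A_t, B_t = A @ O, B @ O

assert np.allclose(A_t @ A_t.T, A @ A.T)
assert np.allclose(B_t @ B_t.T, B @ B.T)
assert np.allclose(A_t @ B_t.T, A @ B.T)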
Case (3) means that not only the function party, but also a subset of the input parties has been corrupted by a semi-honest adversary. In this case, since the adversary knows N, the privacy of the data of the other parties is compromised since for data from a non-corrupt party Charlie of the form C'=CL_C(NN^T)^1/2, the adversary can obtain C by multiplying the data with (NN^T)^1/2L^T.
§ EXPERIMENTS
§.§ Implementation
In this section, we evaluate the performance of FLAKE and provide a run-time analysis.
We experiment with three clinical data sets which contain medical records and, thus, have strong privacy concerns <cit.>. All of them are suitable for classification tasks. For the run-time analysis, we experimented with a synthetic data set with {500, 1000, 2000, 4000, 8000} data points (dp) for each input party.
Details about their statistics can be found in the supplementary material.
Before starting with the run-time experiments, we want to compare FLAKE to other methods for randomization-based kernel computation on horizontally shared data. For this purpose, we implemented a 5-fold cross-validation with FLAKE, ESCAPED <cit.>, PPSVM <cit.>, RSVM <cit.>, and a naive SVM classifier in Python. Our experiments show that FLAKE, ESCAPED, and the naive classifier produce the same results, as they are exact solutions. Because of the introduced stochasticity, RSVM and PPSVM have a performance almost as good as the naive classifier, but they are not exact. Furthermore, the overhead associated with the various methods was measured for a single node and 1000 data points. The overhead for all methods was found to be extremely low, to the point of being negligible. Therefore, the subsequent experiments primarily focus on scaling up the number of data points and input parties for FLAKE and ESCAPED, the two exact methods. For further details see the supplementary material.
We implemented FLAKE for a scenario with three input parties and one function party.
To mimic the network communication between input parties and the function party, we implemented each party as an isolated process that communicates with the others via TCP connections. Our four data sets are divided into three disjoint partitions. Each partition is assigned to an input party. Each input party then masks its data according to the FLAKE protocol and splits the masked data into chunks.
After that, each input party compresses the chunks with zlib's Deflate algorithm and forwards the compressed chunks to the function party. The function party decompresses the chunks, computes the Gram matrix and a polynomial kernel.
Finally, a SVM is trained with a 5-fold cross-validation. A grid search optimizes the corresponding hyperparameters C ∈{2^-4, ..., 2^10} (misclassification penalty) and p ∈{1, ..., 5} (degree).
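For concreteness, a minimal sketch of this training step is given below. It assumes the function party has already assembled the exact Gram matrix gram (pairwise inner products of all pooled samples) and the labels y; the helper name, the kernel parameterization (⟨x,x'⟩+1)^p, and the use of scikit-learn's precomputed-kernel SVC are illustrative choices and not necessarily those of the FLAKE implementation.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_from_gram(gram, y):
    """Grid search over C in {2^-4,...,2^10} and degree p in {1,...,5} with 5-fold CV."""
    best_model, best_params, best_score = None, None, -np.inf
    for p in range(1, 6):
        K_poly = (gram + 1.0) ** p  # polynomial kernel built from the exact Gram matrix
        search = GridSearchCV(
            SVC(kernel="precomputed"),
            param_grid={"C": [2.0 ** e for e in range(-4, 11)]},
            cv=5,
        )
        search.fit(K_poly, y)
        if search.best_score_ > best_score:
            best_model = search.best_estimator_
            best_params = {"p": p, **search.best_params_}
            best_score = search.best_score_
    return best_model, best_params, best_score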
All experiments were executed on a host with an AMD 7713 CPU at 2.0 GHz and 512 GB of memory, which is a typical stand-alone server configuration for a small datacenter. We used a single-threaded implementation. We repeated each experiment 10 times.
§.§ Run-time Analysis
We want to confirm that training time, masking time, communication time, gram-computation time and update time do not limit the applicability of FLAKE.
As known from literature, SVMs typically do not scale readily to very large data sets. In a centralized scenario, it is the training time for the SVM that limits the size of the input data.
We declare success if we can show that the run-times of the stages of FLAKE in a federated scenario are negligible compared to the stages required for the federated training of an SVM without masking.
Training Time
The training takes place at the function party.
Figure <ref> shows the training time for varying numbers of dp in our synthetic data set. As expected, training takes longest for the data set with 8000 dp, at 516.62 (± 2.45) seconds on average. Recall that 8000 dp means that each of our three input parties sends a masked data set of this size to the function party.
Masking Time
To find out how much masking burdens the input parties, we ran a series of experiments, again with the synthetic data set. We varied the number of dp and measured the time for masking. Figure <ref> reports the masking time measured for one input party. Even with 8000 dp per input party, the execution takes less than 0.003 (± 0.0001) seconds on average. This masking time is negligible compared to the time to train the SVM model and does not restrict the applicability of FLAKE.
Communication Time
Because our implementation runs on a data-center host, we estimate the communication time needed to send masked data from the input parties to the function party. The communication time T can be estimated as shown in Equation <ref>:
T = Datasize/Bandwidth + Latency× (1+ Packetloss)
Our largest data set consists of 8000 points, which adds up to a Datasize of 1.31 MB for each input party. A typical VPN has a Bandwidth of 1.25 MBps, with an average Latency of 0.1 s and a Packetloss of 2% <cit.>. For this set of parameters, the estimated communication time is 1.05 seconds. Without Latency and Packetloss, it is 1.048 seconds. Recall that our experiments are executed on a single data-center host, i.e., the actual data transfer takes place as inter-process communication in the main memory of the host and requires virtually no time.
Gram-Computation Time
We also measured the time the function party needs to compute the Gram matrix from the masked data of the input parties. Figure <ref> shows that the computation time increases slightly more than linearly with the size of the data set, with no outliers. For 8000 dp, it took 0.99 (± 0.0083) seconds on average to compute the Gram matrix. Again, 8000 dp means the function party receives 3×8000 masked data points from our three input parties. In summary, we have confirmed that the Gram-computation time does not contribute much to the total computation time.
Update Time
Having shown that the time required to mask the data, send them to the function party, and compute the Gram matrix is several orders of magnitude below the time to train the model, we now consider updating the model.
To mimic a typical Federated Learning use case, where the training data grows due to dynamic data collection after the initial training, the data sets were updated with additional data in multiple training iterations.
§.§ Discussion
Many privacy-preserving machine learning methods ensure privacy by adding stochasticity, which decreases the result quality (privacy ∼ utility trade-off) <cit.>. In contrast, the function party in FLAKE obtains an exact Gram matrix (Requirement Accuracy), that can be used to compute any desired kernel matrix and later train any kernel-based machine learning algorithm, as if it was centralized data.
ESCAPED, which provides an accurate solution as well, requires more communication between the parties, which results in longer execution times <cit.>. As shown in section <ref>, FLAKE is more efficient due to fewer communication rounds. Also, FLAKE allows input parties to update the Gram matrix with new samples independently of the previous samples. ESCAPED does not support updating the Gram matrix with new samples; instead, the Gram matrix must be recomputed using all the samples held by the input parties. Overall, FLAKE has several advantages over preceding work using the randomized masking approach.
§ CONCLUSION
Federated learning is an essential aspect of distributed machine learning, particularly when data privacy is a primary concern. However, when implementing both Federated Learning and privacy-preserving methods, the quality of model training can suffer as a result. In this work, we have proposed FLAKE, a Federated Learning Approach for KErnel methods, as a solution to that challenge. Our approach allows for the efficient and private computation of the Gram matrix from data that is distributed on multiple sources, enabling the training of kernel-based machine learning algorithms without any trade-offs in utility.
Initially, four requirements were formulated, and we have shown that FLAKE satisfies them: Privacy, Accuracy, Updatability and Efficiency. We showed that FLAKE is both correct and private with regard to the considered threat models. We conducted various experiments on benchmark data sets to show that FLAKE matches the accuracy and correctness of centralized models. Besides conducting experiments on well-known data sets, we also replicated the experiments of <cit.> on HIV V3 Loop Sequence data. While other privacy-preserving techniques can be computationally expensive, FLAKE is quite efficient: an analysis of FLAKE and comparable approaches shows that FLAKE is less computationally expensive. In order to expand the capabilities of the framework, additional common machine learning operations could be incorporated in future work. Also, the masking and processing of vertically shared data could be included in FLAKE.
We believe that FLAKE has the potential to improve healthcare outcomes and reduce costs while addressing the privacy concerns associated with machine learning on clinical data. We also think that it may find many use cases in other application domains that handle sensitive, distributed data.
|
http://arxiv.org/abs/2306.07822v1
|
20230613145348
|
Non-coplanar Long Wavelength Magnetism and Charge Order in the Kagome-based Weyl Semimetal Mn$_{3}$Sn
|
[
"Y. Chen",
"J. Gaudet",
"G. G. Marcus",
"T. Nomoto",
"T. Chen",
"T. Tomita",
"M. Ikhlas",
"Y. Zhao",
"W. C. Chen",
"J. Strempfer",
"R. Arita",
"S. Nakatsuji",
"C. Broholm"
] |
cond-mat.str-el
|
[
"cond-mat.str-el"
] |
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Department of Physics, University of California, Berkeley, CA 94720, USA
Material Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
Department of Materials Science and Eng., University of Maryland, College Park, MD 20742-2115
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Research Center for Advanced Science and Technology, University of Tokyo, 4-6-1 Komaba Meguro-ku, Tokyo 153-8904, Japan
Institute for Solid State Physics (ISSP), University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Institute for Solid State Physics (ISSP), University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Institute for Solid State Physics (ISSP), University of Tokyo, Kashiwa, Chiba 277-8581, Japan
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
Department of Materials Science and Eng., University of Maryland, College Park, MD 20742-2115
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
Advanced Photon Source, Argonne National Laboratory, Illinois 60439, USA
Research Center for Advanced Science and Technology, University of Tokyo, 4-6-1 Komaba Meguro-ku, Tokyo 153-8904, Japan
RIKEN Center for Emergent Matter Science, 2-1 Hirosawa Wako Saitama 351-0198, Japan
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Institute for Solid State Physics (ISSP), University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Department of Physics, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, University of Tokyo, Bunkyo-ku, Tokyo 113-8654, Japan
Canadian Institute for Advanced Research, Toronto, M5G 1Z7, ON, Canada
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
Department of Materials Science and Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
The magnetism of the kagome-based Weyl semi-metal Mn3Sn is explored using neutron and X-ray scattering. A co-planar anti-chiral k=0 magnetic order develops below T_N = 445 K, with an ordered moment of 2.1(1)μ_B/Mn and a correlation length exceeding 350 nm at T=300 K. For T<T_ inc=285 K, this structure is replaced by an incommensurate non-coplanar structure composed of a transverse polarized helimagnet with wave vector k_χ=k_χĉ and a longitudinally polarized spin density wave with k_β=k_βĉ. Very interestingly, charge density waves with wave vectors 2k_β, 2k_χ, and k_β+k_χ accompany the magnetic order. While k_β(T) and k_χ(T) vary differently with temperature, their trajectories intersect and lock for 200 K<T<240 K. For T<100 K, k_β=0.08446(1)c^*≈1/12c^* while k_χ=0.1039(4)c^*≈5/48c^* for T<25 K. The Q-dependence of inelastic neutron scattering for ħω <10 meV reflects k_β≈ k_χ for all T<300 K. While a single Γ-point mode at Δ=4.5 meV is observed in the commensurate phase, there are modes at Δ_1=5.0(5) meV,Δ_2=7.0(5) meV, and Δ_3=8.5(5) meV in the lower symmetry modulated phase.
Non-coplanar Long Wavelength Magnetism and Charge Order
in the Kagome-based Weyl Semimetal Mn3Sn
C. Broholm
July 31, 2023
=================================================================================================
Dirac band crossings near the chemical potential of a centro-symmetric semi-metal produce a rich interplay between magnetic order and itinerant electrons. By lifting Kramers degeneracy, magnetic ordering can give rise to anomalous quantum transport properties through the Berry curvature of the resulting Weyl points <cit.>. Less attention has been paid to the converse impacts of relativistic electrons on magnetic interactions and the resulting magnetic orders. Generically one may expect nesting instabilities for spin and charge with characteristic wave vectors defined by the spacing between Dirac or Weyl points <cit.>. Besides, recent intensive studies have proven that kagome metals provide fertile grounds to reveal novel quantum phases due to the interplay and competition between electronic topology, electron localization and frustrated noncollinear, noncoplanar magnetism <cit.>.
In this letter, we report a complex incommensurate modulation of spin, charge and lattice that develops for T<285 K in the topological semi-metal Mn_3Sn. Using synchrotron x-ray and polarized neutron diffraction, the coexistence of two distinct fundamental wave vectors associated with in- and out-of-plane spin components and their evolution to rational fractions of c^* at low temperatures is demonstrated along with their Fermi-surface nesting instability. Using inelastic magnetic neutron scattering, we show that the triplet of spin wave excitations expected for an isotropic triangular based AFM is split into three distinct modes in the low T modulated phase.
Very high quality stoichiometric single crystals of Mn3Sn were obtained by the Bridgman-Stockbarger method <cit.> as detailed in the supplemental material (SM). To resolve the vectorial character of the previously detected magnetic order <cit.>, we used neutrons polarized along the wave vector transfer (HF) and perpendicular to the scattering plane (VF), resolving the spin-flip (SF) and the non-spin-flip (NSF) scattering cross sections (see table in Fig. <ref>). Fig. <ref>(a) shows the intensity in each polarization channel for a rocking scan through the (100) Bragg peak at room temperature in the commensurate k=0 state. A comparison of the NSF channels (blue and green data sets) shows (100) is an allowed nuclear Bragg peak with magnetic scattering from spins polarized along the in-plane b direction. The lack of VF SF scattering (red data) shows the absence of c-polarized magnetism at 300 K. Fig. <ref>(b) shows scans along the (10ℓ) direction in the incommensurate phase for T=250 K and T=100 K. HF data are shown for ℓ<0 and VF data for ℓ>0. The (100) peak now has equal intensity in the two NSF channels, which signifies the absence of k=0 magnetic order. The magnetic diffraction is now found at |ℓ|>0, which indicates a spin structure that is modulated along c. The ℓ<0 peaks within the HF data are exclusively in the SF channel, so they are purely magnetic. The VF data in the ℓ>0 region show this magnetic order is non-co-planar with moments both within the basal plane (green) and along c (red), each with distinct temperature-dependent wave vectors that we denote as k_χ and k_β, respectively.
The T-dependence of the neutron diffraction cross-section in Fig. <ref>(d) reveals two distinct phase transitions. At k=0, magnetic scattering is superimposed upon the nuclear scattering for 285 K<T<420 K. The intensity grows continuously upon cooling below T=420 K as for a second-order phase transition. As k=0 diffraction vanishes, the intensity of diffraction at k_χ and k_β grows. We also see the onset of magnetic Bragg peaks defined by a third ordering vector 2k_β+k_χ (green data points in Fig. <ref>(d)) that prove both k_χ and k_β magnetic components form within the same domain.
To determine the temperature dependence of the wave vectors k_χ and k_β with enhanced resolution, we also employed synchrotron x-ray scattering. Magnetic diffraction patterns were acquired along the (00ℓ) direction close to the Q = (102) Bragg peak for temperatures between 10 K and 270 K. The data are presented in the lower panel of Fig. <ref>(c). While k_χ and k_β are well separated at the onset of incommensurate magnetic order (T=285 K), they approach each other upon cooling and merge into a single diffraction peak for 200 K<T<230 K. Upon further cooling below 200 K, however, the peak splits into two again, now with k_χ>k_β. The peak assignment is accomplished by comparison to polarized magnetic neutron diffraction data (solid points in Fig. <ref>(c)). This complex interplay between the wave vectors for the in-plane (k_χ) and out-of-plane (k_β) polarized magnetic order is a clear indication of the coexistence of the corresponding magnetic order parameters at the atomic scale. Upon further cooling, k_β and k_χ stabilize to values that are indistinguishable from the rational fractions k_β = 1/12c^* and k_χ = 5/48c^*, respectively.
The upper panel of Fig. <ref>(c), shows the presence of 2^ nd harmonic and interference peaks with wave vectors 2k_χ, 2k_β and k_χ+k_β, respectively. Because the analogous peaks are absent in our neutron diffraction measurements yet four orders of magnitude stronger than the magnetic x-ray diffraction peaks, we conclude they arise from a charge density modulation analogous to that found in Cr metal <cit.>. The presence of the k_χ+k_β peak provides evidence of atomic scale coexistence of the magnetic and charge density waves. The appearance of the charge density wave in this magnetic phase is highly interesting and also seen in other kagome antiferromagnet such as FeGe <cit.>.
For detailed quantitative information about the commensurate and incommensurate phases of Mn3Sn, we refine the polarization-resolved intensity of scans along c^* through magnetic peaks accessible with 14.7 meV neutrons in the (h0ℓ) and (hhℓ) scattering planes. For the k = 0 commensurate phase, which exists for T∈ [285,445] K (Fig. <ref>(d)), we collected room temperature rocking scans for all four polarization channels at the accessible k = 0 Bragg peak. The second-order critical behavior shown in Fig. <ref>(d) ensures a single irreducible representation (IR) description of the k=0 spin structure. The only IR that allows for the absence of magnetic scattering at (111) and the existence of a (110) peak within the SF and NSF channels of all polarization configurations is Γ_9 <cit.>. The absence of magnetic scattering at Q=(002) (inset of Fig. <ref>(b)) allows a description in terms of a pure anti-chiral antiferromagnetic spin structure, as the small uniform magnetization of 0.007 μ_B/Mn <cit.> does not produce significant diffraction. Thus, the parameters to be refined within Γ_9 are the magnetic moment size on the Mn ions (M_χ) and a uniform rotation of all spins about the c-axis. However, in a multi-domain sample with a macroscopic 6-fold axis, even polarized neutron diffraction is insensitive to this angular variable <cit.>. We therefore acquired polarized neutron diffraction data in a 2 T field, which exceeds the coercive field inferred from magnetization data. The field was applied along b and a, which are perpendicular to the (h0ℓ) and (hhℓ) scattering planes, respectively. The refinement of the 0 T and the 2 T data can be gauged from Fig. <ref>(a,b). The best fit is obtained with M_χ = 2.1(1)μ_B/Mn and with all spins parallel to the edges of the equilateral triangles that form the kagome lattices. With the field applied along either the a or b direction, the anti-chiral order of Mn3Sn is rotated by 90° relative to that of Mn3Ge <cit.>, indicating an easy axis along the a direction.
To determine the long wavelength modulated spin structure of Mn3Sn, we acquired polarized diffraction data at T=250 K.
HF, VF, NSF and SF Bragg diffraction cross sections were obtained by integrating the corresponding intensity of (h0ℓ) and (hhℓ) scans through magnetic Bragg peaks of the form G±k_χ and G±k_β using 14.7 meV neutrons. Here G is a nuclear Bragg peak, k_χ=(0,0,±0.092(1)), and k_β=(0,0,±0.104(1)). The data can be described by the Γ_6 IR of the 'little group' G_k_χ/β. The 6 basis vectors of Γ_6 are defined in the supplementary material. χ_x, χ_y, f_x, f_y lie within the basal plane while β_1, β_2 describe the out-of-plane spin modulation. For G = (100), (200), (300), and (110), the k_χ and the k_β peaks respectively appear in the NSF and SF channel of the VF polarization configuration (see for example Fig. <ref>(b)). Thus the k_χ component of the spin structure is associated with χ_x, χ_y, f_x, f_y while the k_β component must be described by β_1 and β_2. Considering the multi-domain nature of the order, we obtain the following constraints on the amplitudes of the basis vectors at T=250 K:
|χ_x|^2+|χ_y|^2 = 1.68(4)^2
|f_x|=|f_y| = 0.00(7)
|β_1|^2+|β_2|^2 = 1.50(6)^2.
A spin structure consistent with the polarized neutron diffraction refinement is depicted in Fig. <ref>(c). For this structure, both χ_x and β_1 are real while χ_y = iχ_x and β_2 = iβ_1. This yields a helical in-plane anti-chiral order with a moment of 2.36(6)μ_B/Mn superimposed on an out-of-plane modulation with an amplitude of 2.12(8)μ_B/Mn. A comparison between the T = 250 K observed and calculated neutron structure factors is in Fig. <ref>(d).
To understand the origin of this non-coplanar modulated phase, we examine the associated soft modes through cold neutron spectroscopy with wave vector transfer near the (110) zone center. Fig. <ref>(a,b) show the excitation spectrum in the anti-chiral commensurate phase for wave vector transfer Q=(11ℓ), |ℓ|<0.4. As for Mn3Ge, the spectrum has a gapless mode visible only very close to (110) and a 5.5 meV resonance that extends to wave vector transfer |ℓ|<0.15. The presence of the resonance in THz Faraday rotation spectroscopy on Mn_3+xSn_1-x (x=0.13, 0.47) thin films<cit.> confirms it has spectral weight at the zone center. We associate the gapless mode with the in-plane Goldstone mode that arises because the anti-chiral magnetic order breaks the rotational degree of freedom of Mn-spins on the kagome lattice (Fig. <ref>(a)). There may also be contributions to the scattering from acoustic phonons. We associate the 5.5 meV resonance with a doublet of out-of-plane polarized spin waves β_1 and β_2, which are degenerate when the in-plane anisotropy is negligible<cit.>. The co-planar anti-chiral magnetic structure of the 𝐤=0 phase in Mn3X can be described by localized magnetic moments with nearest-neighbor antiferromagnetic and Dzyaloshinskii-Moriya (DM) interactions. The DM vectors of the in-plane nearest neighbor interactions are pinned along the 𝐜 axis by crystal symmetry and determine the chirality of the antiferromagnetic structure.
Upon cooling into the incommensurate state, there is a chromium-like increase in the c-axis resistivity <cit.>. This indicates the removal of part of the Fermi surface as a gap opens on nested parts of the Fermi surface. C-polarized magnetism with wave vector k_β along with a charge density wave at 2 k_β are the associated amplitude-modulated electronic density waves. While the co-planar anti-chiral order converts to a transverse spiral with wave vector k_χ, its amplitude is unchanged across the transition and does not vary in space within our magnetic refinement. This suggests that out-of-plane rather than in-plane spin fluctuations condense at the incommensurate phase transition so that spectral weight is drawn from the β_1 and β_2 modes at 5.5 meV to form the 𝐤_β Bragg peaks. Within the non-coplanar modulated phase of Mn3Sn at 2.6 K, Fig. <ref>(e,f) shows there are three dominant components at Δ_1=4.5(5) meV,Δ_2=7.0(5) meV and Δ_3=8.5(5) meV. Fig. <ref>(c) reveals that the scattering intensity at ħω=4.5 meV continuously shifts from 𝐤=0 to 𝐤=𝐤_χ,β upon cooling. While Δ_1 thus appears to develop from the out-of-plane high T mode, the temperature dependence of the local susceptibility χ^''(ω) in Fig. <ref>(d) shows the Δ_2 and Δ_3 modes only appear in the incommensurate phase.
Both spin space anisotropy and the inevitable spatial inhomogeneity of the multi-k incommensurate state may contribute to this complex low T excitation spectrum.
To examine whether Fermi-surface nesting drives the incommensurate phase transition, the density functional electronic band structure was calculated and is shown in Fig. <ref>. The Fermi surfaces contain large flat sheets that extend perpendicular to the c-axis. Large areas of these Fermi surfaces are connected by k_β and k_χ as shown in Fig. 4(c). Noticeably, one of the nested Fermi surfaces, marked α in Fig. <ref>(b), contains Weyl nodes at room temperature, so that a change in band topology can be anticipated as the bands hybridize while spin and charge density waves are formed. This is indicated by the suppression of both the anomalous Hall and Nernst effects <cit.>, as required by the fact that the incommensurate phase with k_χ≠ 0 cannot define a direction within the basal plane. The different spatial modulations observed for 𝐤_β and 𝐤_χ, which resemble observations in the kagome semi-metal YMn_6Sn_6 <cit.>, may reflect the complex inter-band nature of the nesting.
While aspects of the low energy magnetism in the commensurate phases of Mn_3Sn<cit.> and Mn_3Ge<cit.> can be described by models of interacting local moments<cit.>, the incommensurate charge and spin density wave phase of Mn_3Sn that we have reported manifest a complex interplay between conduction electrons, magnetism and band topology in these materials.
This work was supported as part of the Institute for Quantum Matter, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0019331. J.G. acknowledges support from the NSERC Postdoctoral Fellowship Program and C.B. acknowledges support from the Gordon and Betty Moore Foundation GBMF9456. Work at the University of Tokyo was supported by JST-Mirai Program (JPMJMI20A1), JST-CREST (JPMJCR18T3), New Energy and Industrial Technology Development Organization (NEDO). A portion of this research used resources at the High Flux Isotope Reactor and Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. Access to MACS was provided by the Center for High-Resolution Neutron Scattering, a partnership between the National Institute of Standards and Technology and the National Science Foundation under Agreement No. DMR-1508249.
|
http://arxiv.org/abs/2306.12006v2
|
20230621040510
|
Learning Homogenization for Elliptic Operators
|
[
"Kaushik Bhattacharya",
"Nikola Kovachki",
"Aakila Rajan",
"Andrew M. Stuart",
"Margaret Trautner"
] |
math.NA
|
[
"math.NA",
"cs.LG",
"cs.NA",
"35B27, 35J47, 74H15"
] |
Learning Homogenization for Elliptic Operators
July 31, 2023
============================================================================
Multiscale partial differential equations (PDEs) arise in various applications, and several schemes have been developed to solve them efficiently. Homogenization theory is a powerful methodology that eliminates the small-scale dependence, resulting in simplified equations that are computationally tractable. In the field of continuum mechanics, homogenization is crucial for deriving constitutive laws that incorporate microscale physics in order to formulate balance laws for the macroscopic quantities of interest. However, obtaining homogenized constitutive laws is often challenging as they do not in general have an analytic form and can exhibit phenomena not present on the microscale. In response, data-driven learning of the constitutive law has been proposed as appropriate for this task. However, a major challenge in data-driven learning approaches for this problem has remained unexplored: the impact of discontinuities and corner interfaces in the underlying material. These discontinuities in the coefficients affect the smoothness of the solutions of the underlying equations. Given the prevalence of discontinuous materials in continuum mechanics applications, it is important to address the challenge of learning in this context; in particular, to develop underpinning theory that establishes the reliability of data-driven methods in this scientific domain. The paper addresses this unexplored challenge by investigating the learnability of homogenized constitutive laws for elliptic operators in the presence of such complexities. Approximation theory is presented, and numerical experiments are performed which validate the theory for the solution operator defined by the
cell problem arising in homogenization for elliptic PDEs.
§ INTRODUCTION
Homogenization theory is a well-established
methodology that aims to eliminate fast-scale dependence in partial differential equations (PDEs) to obtain homogenized PDEs which produce a good approximate solution of the problem with fast scales while being more computationally tractable. In continuum mechanics, this methodology is of great practical importance as the constitutive laws derived from physical principles are governed by material behavior at small scales, but the quantities of interest are often relevant on larger scales. These homogenized constitutive laws often do not have a
closed analytic form and may have new features not present in the microscale laws. Consequently, there has been a recent surge of interest in employing data-driven methods to learn homogenized constitutive laws.
The goal of this paper is to study the learnability of homogenized
constitutive laws in the context of one of the canonical model problems of homogenization, namely the divergence form elliptic PDE. One significant challenge in applications of homogenization in material
science arises from the presence of discontinuities and corner interfaces in the underlying material. This leads to a lack of smoothness in the coefficients and solutions of the associated equations, a phenomenon extensively studied in numerical methods for PDEs. Addressing this challenge in the context of learning remains largely unexplored and is the focus of our work. We develop underlying theory and provide accompanying numerical studies to address learnability in this context.
In Subsection <ref> we establish the mathematical framework and notation for the problem of interest, state the three main
contributions of the paper, and overview the contents of each section
of the paper. In Subsection <ref> we provide a detailed literature review. Subsection <ref> states the stability estimates that are key for the approximation theory developed in the paper and discusses the remainder of the paper in the context of these estimates.
§.§ Problem Formulation
We consider the following linear multiscale elliptic equation on a bounded domain Ω⊂:
-∇·(A^∇ u^) = f x ∈Ω,
u^ = 0 x ∈Ω.
Here A^(x) = A(x/) for A(·) which is 1-periodic and positive definite: A: →^d × d_ sym, ≻ 0. Our focus is on linking
this multiscale problem to the homogenized form of equation (<ref>), which is
-∇·(A∇ u ) = f x ∈Ω,
u = 0 x ∈Ω,
where A is given by
A = ∫_(A(y) + A(y) ∇χ(y)^T) ,
and χ:→ solves the cell problem
-∇_y ·(∇_y χ A) = ∇_y · A, χ is 1-periodic.
For 0 < ≪ 1, the solution u^ of (<ref>) is approximated by the solution u of (<ref>), and the error converges to zero as → 0 in various
topologies <cit.>.
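As an illustrative aside (a standard computation, not specific to this paper), the cell problem can be solved in closed form in one dimension, where A is a scalar-valued, 1-periodic coefficient bounded between α and β. Integrating -d/dy(A(y)(1+χ'(y))) = 0 once gives A(y)(1+χ'(y)) = c for a constant c, so χ'(y) = c/A(y) - 1. Periodicity of χ forces ∫_0^1 χ'(y) dy = 0, hence c = (∫_0^1 A(y)^-1 dy)^-1, and therefore
A = ∫_0^1 A(y)(1+χ'(y)) dy = (∫_0^1 A(y)^-1 dy)^-1,
the harmonic mean of A. This recovers the explicit one-dimensional formula for piecewise constant coefficients mentioned below and illustrates why A can differ markedly from the naive average of A.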
We assume f ∈ L^2(Ω; ) in equation (<ref>) is independent of the microscale variable , and that
A_L^∞ := sup_y ∈ |A(y)|_F <∞
where |·|_F is the Frobenius norm.
Hence A ∈ L^∞(;) and A^∈ L^∞(Ω;). Similarly, for A ∈ L^2(;), we define
A_L^2^2 := ∫_ |A(y)|_F^2 .
We also, for given β≥α>0,
define the following subset
of 1-periodic, positive-definite matrix fields in
L^∞(;) by
= {A ∈ L^∞(;) : ∀ (y,ξ) ∈×, α |ξ|^2 ≤⟨ξ, A(y) ξ⟩≤β |ξ|^2}.
Finally, we often work in the Sobolev space H^1
restricted to spatially
mean-zero periodic functions, denoted
Ḣ^1: = {v ∈ W^1,2() | v is 1-periodic, ∫_ v = 0};
the norm on this space is defined by
g_Ḣ^1 := ∇ g _L^2.
Numerically solving (<ref>) is far more computationally expensive than solving the homogenized equation (<ref>), motivating the wish to find the homogenized
coefficient A defining equation (<ref>). The difficult part of obtaining equation (<ref>) is solving the cell problem (<ref>). Although explicit solutions exist in the one-dimensional setting for piecewise constant A <cit.> and in the two-dimensional setting where A is a layered material <cit.>, in general a closed form solution is not available and the cell problem must be solved numerically. Note that in general the right hand side ∇_y· A of the cell problem can only be defined weakly for A∉ C^1(, ), a commonly occurring situation in applications such as those arising from porous medium flow, or in vector-valued generalizations of the setting here to elasticity, rendering the numerical solution non-trivial. For this reason, it is potentially valuable to approximate the solution map
G: A ↦χ,
defined by the cell problem, using a map defined by a neural operator.
More generally it is foundational to the broader program of learning
homogenized constitutive models from data to thoroughly study this issue
for the divergence form elliptic equation as the insights gained will be important for
understanding the learning of more complex parameterized homogenized models, such as
those arising in nonlinear elasticity, viscoelasticity, and plasticity.
The full map from A to the homogenized tensor A is expressed by A ↦χ↦A, and one could instead learn the map
F: A ↦A.
Since the map χ↦A is simply a linear integration of ∇χ, we focus on the approximation of A ↦χ and state equivalent results for the map A ↦A that emerge as consequences of the approximation of χ. In this paper we
make the following contributions:
* We state and prove universal approximation theorems for the maps G defined by (<ref>) and (<ref>) and F defined by (<ref>), (<ref>) and (<ref>), in various topologies and for a pair of different neural operator architectures.
* We provide explicit examples of microstructures which satisfy the hypotheses
of our theorems.
* We provide numerical experiments to demonstrate the ability of neural operators to approximate the solution map on four different classes of material parameters A.
In Subsection <ref> we provide an overview of the literature, followed in Subsection <ref> by
a discussion of stability estimates for (<ref>), with respect to variations in A; these are
at the heart of the analysis of universal approximation.
The main body of the text then commences with Section <ref>, which characterizes the
microstructures of interest to us in the context of continuum mechanics.
Section <ref> states universal approximation theorems for G(·) and F(·),
using the Fourier neural operator and a more general neural operator.
In Section <ref> we give numerical experiments illustrating the approximation of map G defined by (<ref>) on microstructures of interest in continuum mechanics. Details of the
stability estimates, the proofs of universal approximation theorems, and properties of
the microstructures are given in Appendices <ref>, <ref> and <ref>
respectively.
§.§ Literature Review
Homogenization aims to derive macroscopic equations that describe the effective
properties and behavior of solutions to problems at larger scales given a system that exhibits
behaviour at (possibly multiple) smaller scales. Although it is developed for the various cases of random,
statistically stationary, and periodic small-scale structures, we work here entirely in the periodic setting.
The underlying assumption of periodic homogenization theory is that the coefficient is periodic in the small-scale variable, and that the scale separation is large compared to the macroscopic scales of interest.
Convergence of the solution of the multiscale problem to
the homogenized solution is well-studied; see
<cit.>.
We refer to the texts <cit.>
for more comprehensive citations to the literature.
Homogenization has found extensive application in the
setting of continuum mechanics <cit.> where,
for many multiscale materials, the scale-separation assumption is
natural.
In this work, we are motivated in part by learning constitutive
models for solid materials, where crystalline microstructure renders the material parameters
discontinuous and may include corner interfaces.
This difficulty has been explored extensively in the context of numerical methods for
PDEs, particularly with adaptive finite element
methods <cit.>.
There is a significant body of work on the approximation theory associated with
parametrically dependent solutions of PDEs, including viewing these solution as
a map between the function space of the parameter and the function space of the
solution, especially for problems possessing holomorphic regularity <cit.>. This work could potentially
be used to study the cell problem for homogenization that is our focus here. However,
there has been recent interest in taking a data-driven approach to solving PDEs
via machine learning because of its flexibility and ease of implementation.
A particular approach to learning solutions to PDEs is operator learning, a machine learning
methodology where the map to be learned is viewed as an operator acting
between infinite-dimensional function spaces rather than between finite-dimensional spaces <cit.>.
Determining whether, and then when, operator learning models have advantages over classical numerical methods
in solving PDEs remains an active area of research <cit.>.
The paper <cit.> makes a contribution to this area, in the
context of the divergence form elliptic PDE and the map from coefficient to solution
when the coefficient is analytic over its domain; the authors prove that ϵ error is
achievable for a DeepONet <cit.> of size only polylogarithmic in ϵ,
leveraging the exponential convergence of spectral collocation methods for boundary value problems with
analytic solutions.
However, in the setting of learning homogenized constitutive laws in material science, discontinuous
coefficients form a natural focus and indeed form the focus of this paper. A few
characteristics make operator learning a promising option in this context. First,
machine learning has been groundbreaking in application settings with no
clear underlying equations, such as computer vision and language models
<cit.>. In constitutive modeling, though
the microscale constitutive laws are known, the homogenized equations are
generally unknown and can incorporate dependencies that are not present on
the microscale, such as history dependence, anisotropy, and slip-stick
behavior <cit.>.
Thus, constitutive models lie in a partially equation-free
setting where data-driven methods could be useful. Second, machine learned models
as surrogates for expensive computation can be valuable when the cost of producing
data and training the model can be amortized over many forward uses of the trained model.
Since the same materials are often used for fabrication over long time periods,
this can be a setting where the upfront cost of data production and model training
is justified.
Other work has already begun to explore the use of data-driven methods for
constitutive modeling; a general review of the problem and its challenges, in the context of
constitutive modeling of composite materials, may be found in <cit.>. Several works use the popular framework of physics-informed machine learning to approach the problem <cit.>. In <cit.>, physical constraints are enforced on the network architecture while learning nonlinear elastic constitutive laws. In <cit.>, the model is given access to additional problem-specific physical knowledge. Similarly, the work of <cit.> predicts the Cholesky factor of the tangent stiffness matrix from which the stress may be calculated; this method enforces certain physical criteria. The paper
<cit.> studies approximation error and uncertainty quantification for this learning problem. In <cit.>, a derivative-free approach is taken to learning homogenized solutions where regularity of the material coefficient is assumed. The work of <cit.> illustrates the potential of operator learning methodology to model constitutive laws with history dependence, such as those that arise in crystal plasticity. Finally, a number of further works demonstrate empirically the potential of learning constitutive models, including <cit.>.
However, the underlying theory behind operator learning for constitutive models lags behind its empirical application. In <cit.>, approximation theories are developed to justify the use of a recurrent Markovian architecture that performs well in application settings with history dependence. This architecture is further explored in <cit.> with more complex microstructures. Universal approximation results are a first step in developing theory for learning because they guarantee that there exists an ϵ-approximate operator
within the operator approximation class, which is consistent with an assumed true model underlying the data <cit.>.
In addition to universal approximation, further insight may be gained by seeking to quantify the data or model size required to obtain a given level of accuracy; the papers
<cit.> also contain work in this direction, as do the papers
<cit.>, which build on the analysis developed in
<cit.> referred to above. In our work
we leverage two existing universal approximation theorems for neural operators, one from <cit.> for general neural operators (NOs) and one from <cit.> for Fourier neural operators (FNOs), a particular practically useful
architecture from within the NO class. We take two different approaches to proving approximation theorems based on separate PDE solution stability results in pursuit of a more robust understanding of the learning problem. Since the state of the field is in its
infancy, it is valuable to have different approaches to these
analysis problems. Finally, we perform numerical experiments on various microstructures to understand the practical effects of non-smooth PDE coefficients in learning solutions.
We highlight the fact that in this paper we do not tackle issues related to the non-convex
optimization problem at the heart of training neural networks; we simply use state of the art stochastic
gradient descent for training, noting that theory explaining its excellent empirical behaviour is lacking.
Throughout this paper we focus on equation (<ref>), which describes a conductivity
equation in a heterogeneous medium; a natural generalization of interest is to the constitutive law of linear elasticity, in which the solution is vector-valued and the coefficient is a fourth order tensor. Though it is a linear elliptic equation, we echo the sentiment of Blanc and Le Bris <cit.> with their warning “do not underestimate the difficulty of equation (<ref>).” There are many effects to be understood in this setting, and resolving learning challenges is a key step towards understanding similar questions for the learning of parametric
dependence in more complex homogenized constitutive laws
where machine-learning may prove particularly useful.
§.§ Stability Estimates
At the heart of universal approximation theorems is stability of the solution map
(<ref>); in particular continuity of the map for certain classes of A. In this subsection, we present three key stability results that are used to prove the approximation theorems in Section <ref>. The proofs of the following stability estimates may all be found in Appendix <ref>.
A first strike at the stability of the solution map (<ref>) is a modification of
the classic L^∞/H^1 Lipschitz continuity result for dependence of the
solution of elliptic PDEs on the coefficient; here generalization is necessary because the coefficient
also appears on the right-hand side of the equation defining G(·):
propositionPropstabinfty
Consider the cell problem
defined by equation (<ref>). The following hold:
* If A ∈, then (<ref>) has a unique solution χ∈Ḣ^1(;) and
χ_Ḣ^1(;)≤√(d)β/α.
* For and solutions to the cell problem in equation (<ref>) associated with coefficients ,∈, respectively, it follows that
- _Ḣ^1(;)≤√(d)/α(1 + β/α)A^(1)-A^(2)_L^∞(;).
However, this perturbation result is insufficient for approximation theory because the space L^∞ is not separable and approximation is hardly possible in such spaces <cit.>. While one may define the problem on a separable subspace of L^∞, see Lemma <ref>, such spaces are not particularly useful in applications to micromechanics. This is because many of the models for realistic microstructures include functions which can be discontinuous on an uncountable set of points in the domain and such functions cannot be contained in any separable subspace of L^∞; see Lemma <ref>.
Instead, we require continuity from L^q to Ḣ^1 for some q ∈ [2,∞). To this end, we provide two additional stability results. The first stability result gives continuity, but not Lipschitz continuity, from L^2 to Ḣ^1. The second stability result gives Lipschitz continuity from L^q to Ḣ^1 some q ∈ (2,∞).
propositionPropstabLtwo
Endow with the L^2(;) induced topology and let K ⊂ be a closed set. Define the mapping G: K →Ḣ^1(;) by A ↦χ as given by (<ref>). Then there exists a bounded continuous mapping
∈ C(L^2(; ); Ḣ^1(;))
such that (A) = G(A) for any A ∈ K.
The preceding L^2 continuity proposition is used to prove the approximation results for the FNO in Theorems <ref> and <ref>. For the second approximation theorem, we use the following proposition on Lipschitz continuity from L^q to Ḣ^1.
propositionPropstabLq
For A ∈ in (<ref>), there exists q_0 satisfying 2< q_0 < ∞ such that for all q satisfying q_0 < q ≤∞, the solution map A ↦χ of (<ref>) is Lipschitz-continuous as a map from L^q(;) to Ḣ^1(; ).
Explicit upper bounds for q_0 in Proposition <ref> exist and are discussed in Remark <ref>.
Proposition <ref> is used to prove the approximation theorem for the NO in Theorems <ref> and <ref>.
§ MICROSTRUCTURES
The main application area of this work is constitutive modeling. In this section we describe various classes of microstructures that our theory covers. In particular, we describe four classes of microstructures in two dimensions:
* Smooth microstructures generated via truncated, rescaled log-normal random fields.
* Discontinuous microstructures with smooth interfaces generated by Lipschitz star-shaped inclusions.
* Discontinuous microstructures with square inclusions.
* Voronoi crystal microstructures.
Visualizations of examples of these microstructures may be found in
Figure <ref>. Proofs that the non-smooth classes of microstructures satisfy the assumptions of the theorems in Section <ref> may be found in Appendix <ref>.
Smooth Microstructures
The smooth microstructures are generated by exponentiating a rescaled Gaussian random field.
A is symmetric and coercive everywhere in the domain with a bounded eigenvalue ratio. Furthermore, the smooth function A and its derivatives are Lipschitz. Our theory is developed specifically to analyze non-smooth microstructures, so this example is used mainly as a point of comparison.
Star-Shaped Inclusions
For the star-shaped inclusion microstructure, A is taken to be constant inside and outside the star-shaped boundary.
The boundary function is smooth and Lipschitz in each of its derivatives. A is positive and coercive in both regions with a bounded eigenvalue ratio. This microstructure introduces discontinuities, but the boundary remains smooth.
Square Inclusions
The square inclusion microstructures are composed of two materials; one constant A inside the square inclusion, and another constant A outside the square inclusion. Since we assume periodicity, without loss of generality the square inclusion is centered. The size of the square inclusion within the cell is varied between samples as are the constant values of A. This microstructure builds on the complexity of the star inclusion microstructure by adding corners to the inclusion boundary.
Voronoi Interfaces
The Voronoi crystal microstructures are generated by assuming a random Voronoi tessellation and letting A be piecewise-constant taking a single value on each Voronoi cell. The number of cells, values of A on the cells, and locations of the cell centers may all be varied. This is the most complex microstructure among our examples and is a primary motivation for this work as Voronoi tessellations are a common model for crystal structure in materials.
§ UNIVERSAL APPROXIMATION RESULTS
In this section we state the four approximation theorems for learning solution operators to the cell problem. Theorems <ref> and <ref> concern learning the map A →χ in equation (<ref>), and Theorems <ref> and <ref> concern learning the map A →A described by the combination of equations (<ref>) and (<ref>). Theorems <ref> and <ref> are specific to learning a Fourier neural operator (FNO), which is a subclass of the general neural operator described by Theorems <ref> and <ref>. The proofs of the theorems in this section may be found in Appendix <ref>.
§.§ Definitions of Neural Operators
First, we define a general neural operator (NO) and the Fourier neural operator (FNO). The definitions are largely taken from <cit.>, and we refer to this work for a more in-depth understanding of these operators. In this work, we restrict the domain to the torus.
Let 𝒜 and 𝒰 be two Banach spaces of real vector-valued functions
over domain . Assume input functions a ∈𝒜 are ^d_a-valued while the output functions u ∈𝒰 are ^d_u-valued. The neural operator architecture 𝒢_θ:𝒜→𝒰 is
𝒢_θ = 𝒬∘𝖫_T-1∘…∘𝖫_0∘𝒫,
v_t+1 = 𝖫_t v_t =σ_t(W_tv_t + 𝒦_tv_t + b_t), t=0,1, …, T-1
with v_0 = 𝒫(a), u=𝒬(v_T) and 𝒢_θ (a) = u. Here, 𝒫: ^d_a→^d_v_0 is a local lifting map, 𝒬:^d_v_T→^d_u is a local projection map
and the σ_t are fixed nonlinear activation functions acting locally as maps ^d_v_t+1→^d_v_t+1 in each layer (with all of 𝒫, 𝒬 and the σ_t viewed
as operators acting pointwise, or pointwise almost everywhere, over the domain ), W_t ∈^d_v_t+1× d_v_t are matrices, 𝒦_t: {v_t: →^d_v_t}→{v_t+1:→^d_v_t+1} are integral kernel operators and b_t: →^d_v_t+1 are bias functions. For any m ∈ℕ_0, the activation functions σ_t are restricted to the set of continuous → maps which make real-valued, feed-forward neural networks dense in C^m() on compact sets for any fixed network depth. We note that all globally Lipschitz, non-polynomial, C^m() functions belong to this class. The integral kernel operators 𝒦_t are defined as
(𝒦_t v_t)(x) = ∫_κ_t (x,y) v_t(y) dy
with standard multi-layered perceptrons (MLP) κ_t : ×→^d_v_t+1× d_v_t. We denote by θ the collection of parameters that specify 𝒢_θ, which include the weights W_t, biases b_t, parameters of the kernels κ_t, and the parameters describing the lifting and projection maps 𝒫 and 𝒬 (usually also MLPs).
The FNO is a subclass of the NO.
The FNO inherits the structure and definition of the NO in Definition <ref>, together with some specific design choices. We fix d_v_t = d_v for all t, where d_v is referred to as the number of channels, or model width, of the FNO. We fix σ_t=σ to be a globally Lipschitz, non-polynomial, C^∞ function.[In this work in all numerical experiments we use the GeLU activation function as in <cit.>.] Finally, the kernel operators 𝒦_t are parameterized in the Fourier domain in the following manner. Let
ψ_k(x) = e^2π i ⟨ k, x ⟩, x ∈, k ∈ℤ^d,
denote the Fourier basis for L^2(;ℂ) where i = √(-1) is the imaginary unit.
Then, for each t, the kernel operator 𝒦_t is parameterized by
(𝒦_t v_t)_l (x) = ∑_k ∈ℤ^d, |k| ≤ k_max(∑_j = 1^d_v P_lj^k ⟨ (v_t)_j, ψ_k⟩_L^2(;ℂ)) ψ_k(x).
Here, l=1,…,d_v and each P^k ∈ℂ^d_v× d_v constitute the learnable parameters of the integral operator.
From the definition of the FNO, we note that parameterizing the kernels in the Fourier domain allows for efficient computation using the FFT. We refer to <cit.> for additional details.
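To make the spectral parameterization above concrete, the following is a minimal PyTorch sketch of a single two-dimensional Fourier layer; the class name, random initialization, per-axis mode truncation, and use of the real FFT are illustrative assumptions and not a description of the implementation used in the experiments reported below.

import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    # One Fourier layer: FFT -> keep low-frequency modes (up to k_max per axis)
    # -> per-mode linear mix by complex matrices P^k -> inverse FFT.
    def __init__(self, channels, k_max):
        super().__init__()
        self.k_max = k_max
        scale = 1.0 / (channels * channels)
        # complex weights, one (channels x channels) matrix per retained Fourier mode
        self.w_pos = nn.Parameter(scale * torch.randn(channels, channels, k_max, k_max, dtype=torch.cfloat))
        self.w_neg = nn.Parameter(scale * torch.randn(channels, channels, k_max, k_max, dtype=torch.cfloat))

    def forward(self, v):                       # v: (batch, channels, H, W)
        v_hat = torch.fft.rfft2(v)              # (batch, channels, H, W//2 + 1), complex
        out = torch.zeros_like(v_hat)
        k = self.k_max
        out[..., :k, :k] = torch.einsum("bixy,ioxy->boxy", v_hat[..., :k, :k], self.w_pos)
        out[..., -k:, :k] = torch.einsum("bixy,ioxy->boxy", v_hat[..., -k:, :k], self.w_neg)
        return torch.fft.irfft2(out, s=v.shape[-2:])

In a full FNO, each layer additionally applies the pointwise linear term W_t v_t and a bias before the activation, as in Definition <ref>.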
§.§ Main Theorems
The first two theorems guarantee the existence of an FNO approximating the maps A ↦χ and A ↦A and are based on the stability estimate for continuity from L^2 →Ḣ^1 obtained in Proposition <ref>.
theoremthmLtwochi
Let K ⊂ and define the mapping G : K →Ḣ^1 (;) by A ↦χ as given by (<ref>). Then, for any ϵ > 0 and K compact in L^2(;), there exists an FNO Ψ : K →Ḣ^1(;) such that
sup_A ∈ KG(A) - Ψ(A)_Ḣ^1 < ϵ.
theoremthmLtwoAbar
Let K ⊂ and define the mapping F : K → by A ↦A̅ as given by (<ref>). Then, for any ϵ > 0 and K compact in L^2(;) there exists an FNO Φ: K → such that
sup_A ∈ K |F(A) - Φ(A)|_F < ϵ.
The remaining two theorems guarantee the existence of a general neural operator approximating the maps A ↦χ and A ↦A. Although the FNO is a subclass of general neural operators, from a theoretical perspective, the stability estimates used to obtain the results for general neural operators are more concrete. Indeed, the following two theorems are based on the stability estimate of Proposition <ref>, which obtains Lipschitz continuity from L^q →Ḣ^1.
theoremthmLqchi
Let K ⊂ and define the mapping G : K →Ḣ^1(;) by A ↦χ as given by (<ref>). Let q_0 be as in Proposition <ref>. Then, for any q satisfying q_0 < q < ∞ and for any K compact in L^q(;^d × d), it holds that for any ϵ > 0, there exists a neural operator Ψ: K →Ḣ^1(;) such that
sup_A ∈ KG(A) - Ψ(A) _Ḣ^1 < ϵ.
theoremthmLqAbar
Let K ⊂ and define the mapping F : K → by A ↦A̅ as given by (<ref>). Let q_0 be as in Proposition <ref>. Then, for any q satisfying q_0 < q < ∞ and for any K compact in L^q(;^d × d), it holds that for any ϵ >0, there exists a neural operator Φ : K → such that
sup_A ∈ K |F(A) - Φ(A)|_F < ϵ.
Although the statements of Theorems <ref> and <ref> seem almost identical to those of Theorems <ref> and <ref>, the proofs are quite different as they rely on stability estimates obtained through entirely different methods. Additionally, they depend on different universal approximation theorems. The FNO results use the universal approximation theorem for FNOs in <cit.>, while the NO results use the universal approximation theorem for NOs in <cit.>.
The above approximation results can also be formulated to hold, on average, over any probability measure with a finite second moment that is supported on . In particular, if we let μ be such a probability measure then there exists an FNO or a neural operator Ψ such that
𝔼_A ∼μG(A) - Ψ(A)_Ḣ^1 < ϵ.
This follows by simply exchanging the appropriate results from <cit.> or <cit.> in the respective proofs. We do not carry out the full details here. While this allows approximation over the non-compact set , the error can only be controlled on average instead of uniformly. Such results might find applications in situations where the microstructures are modeled stochastically.
Note that
χ^(1) - χ^(2)^2_H^1 = ∑_l=1^d χ^(1)_l - χ^(2)_l^2_L^2 + ∇χ^(1)_l - ∇χ^(2)_l ^2_L^2
hence, by Poincaré inequality, there is a constant C = C() > 0 such that
χ^(1) - χ^(2)_H^1≤ C √(d)min ( A^(1)_L^∞/α^(1)α^(2) + 1/α^(2), A^(2)_L^∞/α^(1)α^(2) + 1/α^(1) ) A^(1) - A^(2)_L^∞
as desired.
§ NUMERICAL EXPERIMENTS
In this section we perform numerical experiments to illustrate the
fact that it is possible to find good operator approximations of the
homogenization map (<ref>)
in practice. We focus on use of the FNO and note that, while
Theorems <ref> and <ref> assert the
existence of desirable operator approximations, they are not constructive and do not come equipped with error estimates.
We find approximations using standard empirical loss minimization
techniques and quantify the complexity with respect to data and
parametric size of the approximations.
To verify that Theorems <ref> and <ref> apply, we have to show that the subset of coefficient functions employed are compact. Lemma <ref> applies to show compactness for the square inclusion and Voronoi crystal interfaces of Examples 3 and 4. Lemma <ref> applies to show compactness for the star-shaped inclusions of Example 2. The smooth microstructure example serves as a comparison case
for examining the impact of discontinuous coefficients on the learning accuracy.
The experiments are all conducted using an FNO with a fixed number
T=4 of hidden layers. The two remaining parameters to vary are the
channel width d_v and the number of Fourier modes k_max.
We make the following observations based on the numerical experiments:
* The effective A tensors computed from the model predicted solutions exhibit very low error.
* The error in the learned χ is significantly higher along discontinuous material boundaries and corner interfaces, as expected. However, the FNO operator approximation is able to approximate the solution with reasonable relative error even for the most complex case of a set of input functions with varying Voronoi geometry and varying microstructural properties within domains.
* In comparison with the smooth microstructure case, learning the map for the Voronoi microstructure requires substantially
more data to avoid training a model which plateaus at a poor
level of accuracy.
* When compared with the smooth microstructure case, error for the Voronoi microstructure decreases more slowly with respect to increasing model width, but shows more favourable response with
respect to increasing the number of Fourier modes.
We first describe implementation details of each of the microstructures in Subsection <ref>. Then we show outcomes of the numerical experiments in Subsection <ref>, which we discuss in Subsection <ref>.
§.§ Microstructure Implementation
Smooth Microstructures
The smooth microstructures are generated by exponentiating a rescaled approximation of a Gaussian random field. We consider
material tensor A(x) given by
A(x) = [ λ_1 0; 0 λ_2 ].
The random field used to generate the diagonal entries of A(x) is
constructed as follows:
λ_i(x) = ∑_k_1,k_2 = 1^4 ξ^(1)_k_1,k_2sin(2π k_1 x_1)cos(2π k_2 x_2) + ξ^(2)_k_1,k_2cos(2π k_1 x_1)sin(2π k_2 x_2),
λ_i(x) = exp(λ_i(x)/max_x' ∈ [0,1]^2 |λ_i(x')|),
where ξ^(j)_k_1,k_2 are i.i.d. standard Gaussian random variables. Due to the rescaling and exponentiation, the ratio of the eigenvalues of A(x) is at most e^2.
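To make the construction concrete, the following sketch (our own illustration, not the code used to generate the data) samples one diagonal entry λ_i on a uniform grid of the unit square; the grid resolution, the random seed, and the use of a discrete grid maximum in the rescaling step are assumptions made only for this example.

import numpy as np

def sample_smooth_entry(n=128, seed=0):
    # truncated random Fourier series for lambda_i(x), as in the display above
    rng = np.random.default_rng(seed)
    x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, n, endpoint=False),
                         np.linspace(0.0, 1.0, n, endpoint=False), indexing="ij")
    lam = np.zeros((n, n))
    for k1 in range(1, 5):
        for k2 in range(1, 5):
            xi1, xi2 = rng.standard_normal(2)
            lam += xi1 * np.sin(2 * np.pi * k1 * x1) * np.cos(2 * np.pi * k2 * x2)
            lam += xi2 * np.cos(2 * np.pi * k1 * x1) * np.sin(2 * np.pi * k2 * x2)
    # rescale by the (grid) maximum and exponentiate; eigenvalue ratio is at most e^2
    return np.exp(lam / np.abs(lam).max())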
Star-Shaped Inclusions
The star-shaped inclusions are generated by defining a random Lipschitz polar boundary function as
r(θ) = 𝖺 + 𝖻∑_k = 1^5 ξ_k sin(k θ)
where ξ_k are i.i.d. normal random variables, and 𝖺 and 𝖻 are constants.
Then A(x) is constant inside and outside the boundary. We randomly sample eigenvalues for A on each domain
via λ_i = 0.1 + ζ_i where ζ_i are uniform random variables on [0,1]. Three components of
the eigenvectors are i.i.d. normal random variables, and the fourth component enforces orthogonality to guarantee symmetry of
A. In this manner, A is symmetric and coercive and has a bounded eigenvalue ratio.
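A minimal sketch of the inclusion geometry is given below (again our own illustration); the constants 𝖺 and 𝖻, the inclusion center at (0.5, 0.5), and the grid resolution are values assumed only for this example, since the text does not fix them.

import numpy as np

def star_inclusion_mask(n=128, a=0.25, b=0.02, seed=0):
    # indicator of the region bounded by r(theta) = a + b * sum_k xi_k sin(k theta)
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(5)
    x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, n, endpoint=False),
                         np.linspace(0.0, 1.0, n, endpoint=False), indexing="ij")
    theta = np.arctan2(x2 - 0.5, x1 - 0.5)
    rho = np.hypot(x1 - 0.5, x2 - 0.5)
    r_theta = a + b * sum(xi[k - 1] * np.sin(k * theta) for k in range(1, 6))
    return rho <= r_theta  # True inside the inclusion, False outside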
Square Inclusions
The radius of the square is randomly generated via
r = 𝖺 + 𝖻ζ
where ζ is a uniform random variable on [0,1] and 𝖺 and 𝖻 are positive constants. The values of A on
each of the constant domains are chosen in the same manner as in the star-shaped inclusion case.
Voronoi Interfaces
The Voronoi crystal microstructure has A = aI where a is constant on each Voronoi cell and is chosen uniformly at random in the same manner as for the star inclusions. Voronoi tessellations are a common model for crystal structure in materials. In one Voronoi example, we fix the geometry for all data, and in a second Voronoi example we vary the geometry by randomly sampling five cell centers from a uniform distribution on the unit square.
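The following sketch (illustrative only) assembles such a scalar Voronoi coefficient by nearest-center lookup on a uniform grid; for brevity it ignores the periodic identification of the torus when measuring distances, and the grid size, number of cells, and seed are assumptions of this example.

import numpy as np

def voronoi_coefficient(n=128, n_cells=5, seed=0):
    # piecewise-constant a(x): each grid point takes the value of its nearest cell center
    rng = np.random.default_rng(seed)
    centers = rng.uniform(size=(n_cells, 2))
    values = 0.1 + rng.uniform(size=n_cells)          # lambda = 0.1 + Uniform[0,1]
    grid = np.stack(np.meshgrid(np.linspace(0, 1, n, endpoint=False),
                                np.linspace(0, 1, n, endpoint=False),
                                indexing="ij"), axis=-1)      # shape (n, n, 2)
    d2 = ((grid[..., None, :] - centers) ** 2).sum(-1)        # shape (n, n, n_cells)
    return values[d2.argmin(-1)]                              # a(x) on the grid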
§.§ Results
Since the cell problem separates into one problem for each component of χ, we learn χ_1 and χ_2 separately;
in what follows, therefore, j ∈{1,2}. Each FNO model is trained using the empirical estimate of the mean squared H^1 norm of the error:
Loss(θ) = 1/N∑_n = 1^N (χ̂^(n)_j - χ^(n)_j^2_L^2 + ∇χ̂_j^(n) - ∇χ^(n)_j^2_L^2)
where n is the sample index, χ_j is the true solution, and χ̂_j is the FNO approximation of the solution, parameterized by θ. In the analysis, we examine several different measures of error, including the following relative H^1 and relative W^1,10 errors.
Relative H^1 Error (RHE) = 1/N∑_n=1^N ( (χ̂_j^(n) - χ_j^(n)^2_L^2 + ∇χ̂_j^(n) - ∇χ_j^(n)^2_L^2) / (χ_j^(n)^2_L^2 + ∇χ_j^(n)^2_L^2) )^1/2
Relative W^1,10 Error (RWE) = 1/N∑_n=1^N ( (χ̂_j^(n) - χ_j^(n)^10_L^10 + ∇χ̂_j^(n) - ∇χ_j^(n)^10_L^10) / (χ_j^(n)^10_L^10 + ∇χ_j^(n)^10_L^10) )^1/10
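For a single sample on a uniform periodic grid, the relative H^1 error can be evaluated as in the following sketch (our own, not the evaluation code of the paper), which approximates gradients by spectral differentiation; the FFT-based gradient and the unit-torus grid convention are assumptions of this example, and the sample average over n is omitted.

import numpy as np

def relative_h1_error(chi_hat, chi):
    # chi_hat, chi: real (n, n) arrays on a uniform periodic grid of the unit torus
    n = chi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")

    def grad(u):
        u_hat = np.fft.fft2(u)
        return (np.fft.ifft2(1j * kx * u_hat).real,
                np.fft.ifft2(1j * ky * u_hat).real)

    gx, gy = grad(chi)
    hx, hy = grad(chi_hat)
    num = np.mean((chi_hat - chi) ** 2 + (hx - gx) ** 2 + (hy - gy) ** 2)
    den = np.mean(chi ** 2 + gx ** 2 + gy ** 2)
    return np.sqrt(num / den)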
The error visualized in Figures <ref> and <ref>
is spatially varying and computed via
Scaled L^2 Error(x) = 1/g_L^2 |ĝ(x) - g(x)|
where g is χ^(n)_j or ∇χ^(n)_j for a particular sample n, and ĝ is the corresponding FNO prediction.
Finally, we also look at the error in the effective tensor, comparing A̅_FNO, the effective tensor computed from the FNO-predicted solution, with the true A̅; we scale this error by the difference between the arithmetic and harmonic mean of A. Any effective A̅ should have a Frobenius norm in this range; this may be thought of as a physical constraint. Thus, this error is given by
Relative A̅ Error (RAE) = A̅_FNO - A̅_F/(a_m - a_h)
where the arithmetic mean a_m and harmonic mean a_h are given by
a_m = ∫_𝕋^2 A(x) _F
a_h = (∫_𝕋^2 A^-1(x))^-1_F.
We note that using a_m-a_h rather than A_F as a scaling factor in equation (<ref>) leads to a larger error value, so achieving low error in this metric is harder.
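A sketch of this error metric is given below (illustrative only); here A is stored as an (n, n, 2, 2) array of pointwise values on the grid, and A_bar_hat and A_bar denote the predicted and true effective tensors, names introduced only for this example.

import numpy as np

def relative_A_error(A_bar_hat, A_bar, A):
    # A: coefficient values on the grid, shape (n, n, 2, 2)
    # A_bar_hat, A_bar: predicted and true 2 x 2 effective tensors
    a_m = np.linalg.norm(A.mean(axis=(0, 1)), "fro")                      # arithmetic mean
    harm = np.linalg.inv(np.linalg.inv(A.reshape(-1, 2, 2)).mean(axis=0))
    a_h = np.linalg.norm(harm, "fro")                                     # harmonic mean
    return np.linalg.norm(A_bar_hat - A_bar, "fro") / (a_m - a_h)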
We train six different models whose details may be viewed in Table <ref>. Each of these models is trained on 9500 data samples generated using an FEM solver on a
triangular mesh, with the solution interpolated to a 128 × 128 grid.
The models share the same FNO architecture, with 24 Fourier modes, a model width of 48, and 4 layers, and each is trained for 300 epochs.
Table <ref> also includes a quantification of the error. Visualizations of the median-error test samples for each example may be viewed in Figures <ref> and <ref>. These figures are similar in form, and may be compared to, those in <cit.>.
We also investigate the effects of the number of training data and the model size on the error for the smooth and Voronoi microstructures described as Examples 1 and 5 in Table <ref>. A plot of error versus training data may be found in Figure <ref>, and plots of error versus the number of Fourier modes for fixed total model size, as measured by (model width) × (number of Fourier modes), may be found in Figure <ref>.
Figure <ref> is similar to, and may be examined
along with, those in <cit.>.
Figure <ref> addresses the question of how to
optimally distribute computational budget through different parameterizations to achieve minimum error at given cost as measured
by number of parameters; it should be compared to similar experiments
in <cit.>.
§.§ Discussion
As can be seen from the data in Table <ref>, the microstructures exhibiting discontinuities lead to higher model error than the smooth microstructure. The visualizations of the median-error test samples in Figure <ref> give some intuition; error is an order of magnitude higher along discontinuous boundaries; this is most apparent in the gradient. The true solution gradient often takes its most extreme values along the discontinuities, and the RWE gives an indication of how well the model captures the most extreme values in the solution. Unsurprisingly, this error is much higher than the RHE, but we note that it is confined to a small area of the domain along discontinuous boundaries and corner interfaces.
Voronoi Examples 5 and 6 in Table <ref> are identical except that the numerical solution for the two data sets is obtained using solvers with different levels of accuracy. For Example 5, the accuracy is the same as for Examples 1-4, but for Example 6 the solver uses 551062 elements instead of 90804. Both solutions are then interpolated to a 128× 128 grid to train the model. A common belief among machine learning practitioners is that the error from the model approximation is independent of the error of the numerical solution used to obtain the data. Since the model only “sees” the data, the reasoning goes that the model will obtain similar train and test error relative to the data regardless of the numerical accuracy of the data relative to the true solution. With this experiment, we see that this does not hold for the Voronoi microstructure; using a more accurate solver results in lower model error in both the RHE and RWE measurements. These observations lead to a hypothesis that the errors produced via a numerical solver are harder to learn than the true solution behavior, but we leave the formalization and validation of this hypothesis to future work.
We also examine the effect of the number of training data samples and the FNO size on model accuracy for the smooth and Voronoi microstructures corresponding to Examples 1 and 4 in Table <ref>. In Figure <ref> we see the effect of increasing the number of training data. In theory, the error should decay with a dependence 1/√(N) for N the number of training data. The smooth microstructure achieves this theoretical decay with a slope of -0.58 on a log-log scale, but the Voronoi microstructure only achieves a slope of -0.25. We note that this is comparable to the behavior during training over 300 epochs; the test error for the smooth microstructure continues to decrease over the entire training period, but the test error for the Voronoi microstructure plateaus by around 100-150 epochs. The model size also presents a qualitatively different effect on error for the smooth and Voronoi microstructures. In Figure <ref>, we see the tradeoff between the number of Fourier modes and the model width for approximately constant model size, measured as the product of the width and number of modes. The Voronoi example benefits from additional Fourier modes, whereas the smooth example worsens. On the other hand, the smooth model benefits more from an increase in model width. We refer to <cit.> for in-depth numerical studies of errors, choice of hyperparameters,
and parameter distributions for FNO; here we highlight only the qualitative differences between the model behavior for different microstructures.
Finally, we compare the error in the effective A defined in (<ref>). This error is scaled by a difference between the Frobenius norms of the arithmetic and harmonic means of the true A because the Frobenius norm of the true A should fall within that range. For this reason, in the case where the arithmetic and harmonic means are very close, it is not valuable to learn the true A. The varying-geometry Voronoi microstructure examples on average have about 100 times greater difference between the means than the star and square microstructure examples. Thus, it is valuable to note that the median relative A error shown in Table <ref> is lower for the Voronoi examples than for the star and square inclusion examples because in the Voronoi setting, the arithmetic and harmonic means are poor approximations of A. This characteristic of the Voronoi microstructure further underscores the value of learning in this setting.
§ CONCLUSIONS
In this work, we establish approximation theory for learning the
solution operator arising from the elliptic homogenization cell problem (<ref>) viewed as a mapping from the coefficient to the solution; the theory allows for discontinuous
coefficients. We also perform numerical experiments that validate the theory, explore qualitative differences between various microstructures, and quantify error/cost trade-offs in
the approximation. We provide two different proof approaches that rely on different stability results for the underlying solutions. These stability results, when combined with existing universal approximation results for neural operators, result in rigorous approximation theory for learning in this problem setting. On the numerical side, we provide examples of various microstructures that satisfy the conditions of the approximation theory. We observe that model error is dominated by error along discontinuous and corner interfaces, and that discontinuous microstructures give rise to qualitatively different learning behavior. Finally, we remark that the learned effective properties are highly accurate, especially in the case of the Voronoi microstructure that we regard as the most complex. Since discontinuous microstructures arise naturally in solid mechanics, understanding learning behavior in this context is an important prerequisite for using machine learning for applications. In this area and others, numerous questions remain which address the rigor necessary for use of machine learning in scientific applications.
We have confined our studies to one of the
canonical model problems of homogenization theory, the divergence form elliptic setting, with periodic microstructure, to obtain deeper understanding of learning constitutive laws. One interesting
potential extension of this work is the setting in which the material coefficient A is not periodic
but random with respect to the microstructure. Another is where it is only locally periodic and has dependence on the macroscale variable as well; thus A^ϵ = A(x, x/ϵ). In this case, the form of the cell problem (<ref>) and homogenized coefficient (<ref>) remain the same, but A and χ both have parametric dependence on x. The approximation theory, and the empirical learning problem, would grow in complexity in comparison to what is developed here, but the resulting methodology could be useful
and foundational for understanding more complex constitutive models in which the force balance
equation couples to other variables. Indeed, the need for efficient learning of constitutive models is
particularly pressing in complex settings such as crystal plasticity. We anticipate
that the potential use of machine learning to determine parametric
dependence of constitutive models defined by homogenization will be for these more complex problems. The work described in this paper provides a conceptual approach,
foundational analysis, and a set of numerical experiments that serve to underpin more applied work in this field.
§ APPENDICES
§ PROOFS OF STABILITY ESTIMATES
In this section, we prove the stability estimates stated in Section <ref>.
The following lemma is a modification of the standard estimate for parametric dependence
of elliptic equations on their coefficient. We include it here for completeness.
*
For existence and uniqueness of the solution to the cell problem using Lax-Milgram, we refer to the texts <cit.>; we simply derive the bounds and stability estimate. First, note that (<ref>) decouples, in particular,
- ∇· (A ∇χ_ℓ) = ∇· A e_ℓ, y ∈
for l=1,…,d where e_ℓ is the ℓ-th standard basis vector of and each χ_ℓ∈Ḣ^1 (𝕋^1; ). Multiplying by χ_ℓ and integrating by parts shows
α∇χ_ℓ_L^2^2 ≤∫_⟨ A ∇χ_ℓ, ∇χ_ℓ⟩
= - ∫_⟨ A e_ℓ, ∇χ_ℓ⟩
≤∫_ |Ae_ℓ| |∇χ_ℓ|
≤ (∫_ |Ae_ℓ|^2 )^1/2 (∫_ |∇χ_ℓ |^2 )^1/2
≤A_L^∞∇χ_ℓ_L^2.
Therefore
∇χ_L^2^2 = ∑_l=1^d ∇χ_ℓ_L^2^2 ≤d A_L^∞^2/α^2≤d β^2/α^2,
which implies the first result.
To prove the second result, we denote the right hand side of <ref> by f^(i)_ℓ = ∇· A^(i) e_ℓ in what follows. For any v ∈Ḣ^1(;), we have that
-∫_∇· (∇_ℓ) v = ∫__ℓ v
-∫_∂ v ∇_ℓ·n + ∫_∇ v ·∇_ℓ = ∫__ℓ v .
Since v, , and the solution _ℓ are all periodic on , the first term is 0. Combining with the equation for _ℓ, we get
∫_∇ v ·( - ) ∇_ℓ =
=∫_(_ℓ - _ℓ)v + ∇ v ·((∇_ℓ - ∇_ℓ)) .
Setting v = _ℓ - _ℓ, we have
∫_(∇_ℓ - ∇_ℓ) ·(( - )∇_ℓ) = ∫_ (_ℓ - _ℓ)(_ℓ - _ℓ)
+ ∫_(∇_ℓ - ∇_ℓ) ·((∇_ℓ - ∇_ℓ)) ,
α∇_ℓ - ∇_ℓ_L^2^2 ≤ - _L^∞∇_ℓ_L^2∇_ℓ - ∇_ℓ_L^2
+ _ℓ - _ℓ_Ḣ^-1∇_ℓ - ∇_ℓ_L^2,
_ℓ - _ℓ_Ḣ^1≤1/α( - _L^∞∇_ℓ_L^2 + _ℓ - _ℓ_Ḣ^-1).
Evaluating,
_ℓ - _ℓ_Ḣ^-1 = ∇· e_ℓ - ∇· e_ℓ_Ḣ^-1,
= sup_ξ_Ḣ^1=1∫_ξ∇·( - )e_ℓ ,
≤sup_ξ_Ḣ^1=1( - )e_ℓ_L^2∇ξ_L^2,
≤ - _L^2≤ - _L^∞.
since our domain is . Combining this with (<ref>) and the bound of ∇χ_ℓ_L^2≤β/α obtained in the first part of this proposition, we have
_ℓ - _ℓ_Ḣ^1≤1/α(1 + β/α)- _L^∞.
Returning to d dimensions yields the result.
The following result shows that the mapping A ↦A̅ is continuous on separable subspaces of L^∞(;).
Let ⊂ L^∞(;) be a separable subspace and K ⊂∩ a closed set. Define the mapping F : K → by A ↦A̅ as given by (<ref>). Then there exists a continuous mapping ℱ∈ C(;) such that ℱ(A) = F(A) for any A ∈ K.
Let A^(1), A^(2)∈ K then, by Proposition <ref>,
| F ( A^(1) ) - F ( A^(2) ) |_F ≤∫_ |A^(1) - A^(2)|_F ( 1 + |∇χ^(1) |_F )
+ ∫_ |A^(2)|_F |∇χ^(1) - ∇χ^(2) |_F
≤A^(1) - A^(2)_L^∞( 1 + ∇χ^(1)_L^2 ) + A^(2)_L^∞∇χ^(1) - ∇χ^(2)_L^2
≤ ( 1 + √(d)/α ( A^(1)_L^∞ + A^(2)_L^∞ ( min ( A^(1)_L^∞, A^(2)_L^∞ )/α + 1 ) ) )
·A^(1) - A^(2)_L^∞
hence F ∈ C(K;). Applying the Tietze extension theorem <cit.> to F implies the existence of ℱ.
The following lemma shows that, unfortunately, separable subspaces of
L^∞(;) are not very useful. Indeed, in the desired area of application of continuum mechanics, we ought to be able to place a boundary of material discontinuity anywhere in the domain. The following result shows that that would not be possible for a subset of which lies only in a separable subspace of L^∞(; ).
For any t ∈ [0,1] define c_t : [0,1] → by
c_t(x) =
1, x ≤ t
0, x > t
, ∀ x ∈ [0,1].
Define E = {c_t : t ∈ [0,1]}⊂ L^∞([0,1]). There exists no separable subspace ⊂ L^∞([0,1]) such that E ⊆.
Suppose otherwise. Since (, ·_L^∞) is a separable metric space, (E,·_L^∞) must be separable since E ⊆; this is a contradiction since (E,·_L^∞) is
not separable. To see this, let {c_t_j}_j=1^∞ be an arbitrary countable subset of E. Then for any t ∉{t_j}_j=1^∞, we have
inf_{t_j}_j=1^∞c_t - c_t_j_L^∞ = 1.
Hence no countable subset can be dense.
Instead of working on a compact subset of a separable subspace of L^∞(; ), we may try to find a suitable probability measure whose support contains the discontinuous functions of interest. The following remark makes clear why such an approach would still be problematic for the purposes of approximation.
Let μ be a Gaussian measure on
L^2([0,1]). Define
T(x) =
1, x ≥ 0
0, x < 0
, ∀ x ∈ [0,1]
and consider the corresponding Nemytskii operator N_T : L^2 ([0,1]) → L^∞ ([0,1]).
Then, working with the definitions in Lemma <ref>, it is easy to see that
E ⊂N_T^♯μ. Therefore there exists no separable subspace of L^∞([0,1]) which contains N_T^♯μ.
We therefore abandon L^∞ and instead show continuity and Lipschitz continuity from some L^q with q < ∞ to Ḣ^1. The following lemma is a general result for convergence of sequences in metric spaces which is used in a more specific context in the next lemma.
Let (M,d) be a metric space and (a_n) ⊂ M a sequence. If every subsequence (a_n_k) ⊂ (a_n) contains a subsequence (a_n_k_l) ⊂ (a_n_k) such that (a_n_k_l) → a ∈ M then (a_n) → a.
Suppose otherwise. Then, there exists some ϵ >0 such that, for every N ∈ℤ^+, there exists some n = n(N) > N such that
d(a_n, a) ≥ϵ.
Then we can construct a subsequence (a_n_j) ⊂ (a_n) such that d(a_n_j,a) ≥ϵ∀ n_j. Therefore a_n_j does not have a subsequence converging to a, which is a contradiction.
The following lemma proves existence of a limit in L^2(D;) of a sequence of outputs of operators in L^∞(D;).
Let D ⊆ be an open set and (A_n) ⊂ L^∞(D;) a sequence satisfying the following.
* A_n ∈ for all n,
* There exists A ∈ L^∞ (D;) such that
(A_n) → A in L^2(D;).
Then, for any g ∈ L^2(D;), we have that (A_ng) → Ag in L^2(D;).
We have
A_ng_L^2≤βg_L^2
hence (A_n g) ⊂ L^2(D;) and, similarly, by finite-dimensional norm equivalence, there is a constant C_1 > 0
such that
Ag_L^2≤ C_1 A_L^∞g_L^2
hence Ag ∈ L^2(D;). Again, by finite-dimensional norm equivalence, we have that there exists a constant C_2 > 0
such that, for j ∈{1,…,d} and almost every y ∈ D, we have
(A_n g)_j(y)^2 ≤ |A_n^(j)(y)|^2 |g(y)|^2 ≤ C_2 β^2 |g(y)|^2
where A_n^(j)(y) denotes the j-th row of A_n^(j)(y). In particular,
|(A_n g)_j (y) | ≤√(C_2)β |g(y)|.
Let (A_n_k) ⊂ (A_n) be an arbitrary subsequence. Since (A_n) → A, we have that (A_n_k) → A in L^2(D;). Therefore, there exists a subsequence (A_n_k_l) ⊂ (A_n_k) such that A_n_k_l(y) → A(y) for almost every y ∈ D. Then A_n_k_l(y) g(y) → A(y) g(y) for almost every y ∈ D. Since |g| ∈ L^2(), we have, by the dominated convergence theorem, that (A_n_k_l g)_j → (Ag)_j in L^2(D) for every j ∈{1,…,d}. Therefore (A_n_k_l g) → Ag in L^2(D;). Since the subsequence (A_n_k) was arbitrary, Lemma <ref> implies the result.
Finally, we may prove Proposition <ref>.
*
Consider the PDE
- ∇· (A ∇ u) = ∇· A e, y ∈
where e is some standard basis vector of . Let (A_n) ⊂ K be a sequence such that (A_n) → A ∈ K in L^2(;). Denote by u_n ∈Ḣ^1 () the solution to (<ref>) corresponding to each A_n and by u ∈Ḣ^1 () the solution corresponding to the limiting A. A similar calculation as in the proof of Proposition <ref> shows
αu_n - u_Ḣ^1^2 ≤∫_⟨ (A - A_n)(∇ u + e), ∇ u_n - ∇ u ⟩
≤u_n - u_Ḣ^1(A_n - A)(∇ u + e)_L^2.
Since ∇ u + e ∈ L^2(;), by Lemma <ref>, ( A_n(∇ u + e) ) → A(∇ u + e) in L^2(;) hence (u_n) → u in Ḣ^1(). In particular, the mapping A ↦ u defined by (<ref>) is continuous. Since the problem (<ref>) decouples as shown by (<ref>), we have that each component mapping G_l : K →Ḣ^1() defined by A ↦χ_ℓ is continuous thus G is continuous.
Applying the Tietze
extension theorem <cit.> to G implies the existence of .
The following is a straightforward consequence of Proposition <ref> that establishes continuity of the map A ↦A defined in (<ref>) as well.
Endow with the L^2(;) induced topology and let K ⊂ be a closed set. Define the mapping F : K → by A ↦A̅ as given by (<ref>). Then there exists a bounded continuous mapping ℱ∈ C(L^2(;);) such that ℱ(A) = F(A) for any A ∈ K.
Since ∇ : Ḣ^1(;) → L^2(;) is a bounded operator, Lemma <ref> implies that the mapping A ↦ A + A ∇χ^T is continuous as compositions, sums, and products of continuous functions are continuous. Now let A ∈ then A ∈ L^1(;) since A ∈ L^∞ (;). Thus
| ∫_ A |_F ≤∫_ |A|_F ≤A_L^2
by Hölder's inequality and the fact that ∫_ = 1. Hence F ∈ C(K;) as a composition of continuous maps. Again applying the Tietze
extension theorem <cit.> to F implies the existence of ℱ.
To prove Proposition <ref>, we need to establish Lipschitz continuity. We first establish the following result, which is similar to the one proved in <cit.> in Theorem 2.1. We show it again here both for completeness and because we specify to the case of the cell problem (<ref>) with periodic boundary conditions rather than the system (<ref>) with Dirichlet boundary conditions.
Let A^(1), A^(2)∈ and let , be the corresponding solutions to (<ref>).
Then
- _Ḣ^1≤√(d)/α( -_L^2 + ∇_L^p - _L^q)
for p ≥ 2 and q = 2p/p-2.
As in the proof of Proposition <ref>, we denote f^(i) = ∇· A^(i) for i ∈{1,2} for simplicity of notation and to be easily comparable to the proof of Theorem 2.1 in <cit.>. Since both sides of the cell problem equation (<ref>) depend on A^(i), we introduce χ as the solution of
-∇·(∇χ) = ∇·, χ∈Ḣ^1(;)
as an intermediate function. We obtain bounds using χ and apply the triangle inequality to
( - χ) + (χ -)_Ḣ^1
to obtain a bound on - _Ḣ^1.
From the naïve perturbation bound in (<ref>) we have
χ_ℓ - _ℓ_Ḣ^1≤1/α - _Ḣ^-1,
so we are left to bound _ℓ - χ_ℓ_Ḣ^1. We note that
∇·(∇χ_ℓ) = ∇·(∇_ℓ)
∫_∇χ_ℓ·∇ v = ∫_∇_ℓ·∇ v ∀ v ∈Ḣ^1()
Letting v = _ℓ - χ_ℓ,
∫_∇χ_ℓ·(∇_ℓ - ∇χ_ℓ) = ∫_∇_ℓ·(∇_ℓ - ∇χ_ℓ)
∫_A^(2)(∇χ_ℓ -∇_ℓ) ·(∇χ_ℓ - ∇_ℓ)
= ∫_(-)∇_ℓ·(∇_ℓ - ∇χ_ℓ)
αχ_ℓ - _ℓ_Ḣ^1 ≤( - )(∇_ℓ)_L^2
Applying Hölder for L^2, we get
χ_ℓ - _ℓ_Ḣ^1≤1/α∇_ℓ_L^p - _L^q
for q = 2p/p-2 where p ∈ [2,∞].
Putting the two parts together, we have that
_ℓ - _ℓ_Ḣ^1 ≤1/α∇· e_ℓ -∇· e_ℓ_Ḣ^-1 + 1/α∇_ℓ_L^p - _L^q
≤1/α - _L^2 + 1/α∇_ℓ_L^p - _L^q
Combining bounds for all d dimensions yields the result.
Since L^q(Ω) ↪ L^2(Ω) for bounded Ω⊂^d and q ≥ 2, we could also write the bound of Lemma <ref> as
_ℓ - _ℓ_Ḣ^1≤1/α(C + ∇_ℓ_L^p) - _L^q
for some C dependent only on q and Ω.
The result of Lemma <ref> is unhelpful if ∇χ_L^p is unbounded, as it is for the case of p = 2 with sets of A containing discontinuous corner interfaces appearing in the microstructure examples of square inclusions and Voronoi crystals as described in Section <ref>.[To see this, consider local solutions u: ℝ^2 →ℝ^2 of ∇·(a∇ u) = 0
where a = a(θ) is given by a(θ) = a_- for 0 < θ < θ^* and a(θ) = a_+ for θ^* < θ < 2π, with the ansatz u(r,θ) = h(r)g(θ). The solution component h(r) takes the form h(r) = r^b, b>0, whose gradient is singular for 0<b<1. Solving the associated eigenvalue problem for b gives a singular component for θ^* ≠π.]
Before continuing, we establish a bound on the gradient of the solution to the Poisson equation on the torus. This follows the strategy of <cit.> for the Dirichlet problem. In order to avoid extra factors of 2π in all formulae we assume = [0,2π]^d with opposite faces identified for the following result of Lemma <ref>. As we work on the torus, it is useful to first set up notation for the function spaces of interest. Let
𝒟() = C^∞_c() = C^∞()
be the space of test functions where the last equality follows from compactness of the torus. Functions can be either or ℂ valued hence we do not explicitly specify the range. We equip 𝒟() with a locally convex topology generated by an appropriate family of semi-norms, see, for example, <cit.>. Any function g ∈𝒟() can be represented by its Fourier series
g(x) = ∑_k ∈ℤ^dg(k) e^ix · k
where g denotes the Fourier transform of g and convergence of the right-hand side sum is with respect to the topology of 𝒟(). It holds that g∈𝒮(ℤ^d), the Schwartz space of rapidly decreasing functions on the integer lattice, so we have
|g(k)| ≤ c_m (1 + |k|)^-m, m = 0, 1, …
for some constants c_m. We may then define the topological (continuous) dual space of 𝒟(), the space of distributions, denoted 𝒟'(), which can be described as follows: the
condition that f ∈𝒟'() is characterized by the property
|f(k)| ≤ b_m (1 + |k|)^m, m = 0, 1, …
for some constants b_m. We take the weak-^* topology on 𝒟'() and generally use the prime notation for any such dual space. For any -∞ < s < ∞, we define the fractional Laplacian as
(-Δ)^s f = ∑_k ∈ℤ^d∖{0} |k|^2sf(k) e^ik· x
where the right-hand side sum converges in the topology of 𝒟'(). It is easy to see that (-Δ)^s : 𝒟'() →𝒟'() is continuous. Furthermore, for any j ∈{1,…,d}, we define the family of operators R̃_j : 𝒟'() →𝒟'(), defining periodic Riesz transforms, by
R̃_j f = ∑_k ∈ℤ^d - i k_j/|k|f(k) e^ik · x
where we identify k_j/|k||_k=0 = lim_|k|→ 0k_j/|k| = 0. Again, we stress that convergence of the right-hand side sum is in the topology of 𝒟'(). Lastly, we denote by 𝒮() and 𝒮'() the Schwartz space and the space of tempered distributions on respectively; see, for example, <cit.> for the precise definitions.
The following lemma establishes boundedness of the periodic Riesz transform on L^p (). It is essential in proving boundedness of the gradient to the solution of the Poisson equation on the torus. The result is essentially proven in <cit.>. We include it here, in our specific torus setting, giving the full argument for completeness.
There exists a constant c = c(d,p) > 0 such that,
for any j ∈{1,…,d} and any f ∈ L^p() for some 2 ≤ p < ∞, we have
R̃_j f _L^p ()≤ c f_L^p ().
Let g ∈ L^2 () ∩ L^p (^d) for some 1 < p < ∞. For any j ∈{1,…,d}, define the family of operators R_j by
(R_j g)(x) = lim_δ^-1, ϵ→ 0^+∫_δ≥ |t| ≥ϵ g(x-t) K_j (t) dt,
where
K_j (t) = Γ ( (d+1)/2 ) t_j/π^(d+1)/2 |t|^d+1
and Γ denotes the Euler-Gamma function.
By <cit.>, K_j ∈𝒮'(^d) and its Fourier transform satisfies
K_j (t) = - i t_j/|t|
where i = √(-1) denotes the imaginary unit. Therefore, for any ϕ∈𝒮(), we have
(K_j * ϕ)^ (t) = - i t_j/|t|ϕ(t)
where * denotes convolution, see, for example, <cit.>. Since g ∈ L^2(^d), we therefore find that, by <cit.>,
(R_j g)^ (x) = - i x_j/|x|g(x)
for Lebesgue almost every x ∈^d. The result <cit.> further shows that there exists a constant c = c(d,p) > 0 such that
R_j g_L^p ()≤ c g_L^p ().
We note from (<ref>) and the definition (<ref>) that R̃_j may be viewed as R_j with the restriction of the Fourier multiplier -i x_j/|x| to the lattice ℤ^d. We can therefore use the transference theory of <cit.> to establish boundedness of R̃_j from the boundedness of R_j.
In particular, note that the mapping x ↦ -i x_j/|x| is continuous at all x ∈^d except x=0. However, by symmetry, we have that, for all ϵ > 0
∫_|x| ≤ϵ -i x_j/|x| dx = 0.
Therefore we can apply <cit.> to conclude that, since R_j is bounded from L^p () to L^p(), R̃_j is bounded from L^p() to L^p ()
with
R̃_j_L^p () → L^p ()≤R_j_L^p () → L^p ().
This implies the desired result.
We define the Bessel potential spaces by
L^s,p () = {u ∈𝒟'() | u_L^s,p () := (I -Δ)^s/2 u_L^p () < ∞}
for any - ∞ < s < ∞ and 1 < p < ∞. We also define the homogeneous version of these spaces, sometimes called the Riesz potential spaces, by
L̇^s,p () = {u ∈𝒟'() | u_L̇^s,p () := (-Δ)^s/2 u_L^p () < ∞ , ∫_ u = 0 }.
It is clear that L̇^s,p () ⊂ L^s,p () is a closed subspace. We then have the following result for the Poisson equation.
For each f ∈ L^s,p(), for -∞ < s < ∞ and 2 ≤ p<∞, the solution u of the equation
- Δ u = f, u 1-periodic, ∫_ u = 0
satisfies
∇ u_L̇^s+1,p ()≤ K f_L̇^s,p()
for some finite K > 0 depending only on p and d.
From the definitions (<ref>) and (<ref>), it is easy to see that the Riesz transform can be written as
R̃_j = - ∂_x_j (-Δ)^-1/2
in the sense of distributions. Consider now equation (<ref>) with f ∈ L^s,p () for 2 ≤ p < ∞. We have that
_x_ju_L̇^s+1,p () = _x_j(-Δ)^-1f_L̇^s+1,p ()
= _x_j(-Δ)^-1/2(-Δ)^s/2f_L^p ()
= R̃_j (-Δ)^s/2f_L^p ().
It is clear that
(-Δ)^s/2f_L^p () = f_L̇^s,p () < ∞
hence (-Δ)^s/2f ∈ L^p (). We can thus apply Lemma <ref> to find a constant c = c(d,p) > 0 such that
_x_ju_L̇^s+1,p ()≤ c (-Δ)^s/2f_L^p () = c f_L̇^s,p ().
The result follows by finite-dimensional norm equivalence.
Next we define the homogeneous Sobolev spaces on the torus as
Ẇ^k,p() = {u ∈ W^k,p() | u is 1 -periodic, ∫_ u = 0 }
for k=0,1,…, and 1 ≤ p ≤∞ with the standard norm on W^k,p, see, for example <cit.>.
By <cit.>, we have that, for any k = 0,1,… and 1 < p < ∞,
L^k,p () = W^k,p (), L̇^k,p () = Ẇ^k,p ().
Furthermore, by <cit.>,
W^-k,p' () = ( W^k,p () )' = ( L^k,p () )' = L^-k,p' (),
Ẇ^-k,p' () = ( Ẇ^k,p () )' = ( L̇^k,p () )' = L̇^-k,p' ()
where p' is the Hölder conjugate of p i.e. 1/p + 1/p' = 1.
In the following, we use the notation
[K_0,K_1]_θ, q
to denote the real interpolation between two Banach spaces continuously embedded in the same Hausdorff topological space, as described in <cit.>. We also need Lemma A1 from <cit.>, which we have copied below as Lemma <ref>
to ease readability. Although this lemma was written only for q=2, the result still holds for our q > 2 with a very similar proof.
Let E_1 ⊂ E_0 be two Banach spaces with E_1 continuously embedded in E_0. Let T: E_j → E_j be a bounded operator with closed range and assume that T is a projection, j ∈{0,1}. Denote by K_0 and K_1 the ranges of T|_E_0 and T|_E_1 respectively. Then the following two spaces coincide with equivalent norms:
[K_0, K_1]_θ, q = [E_0, E_1]_θ, q∩ K_0 ∀θ∈ (0,1).
We now state the result for the bound on ∇χ_L^p with a proof largely developed in <cit.>.
Let χ solve (<ref>) for A ∈. Then
∇χ_L^p≤1/βK^η(p)/1-K^η(p)(1-α/β)
for 2 < p < p^*(α/β) where
p^*(t) : = max{p | K^-η(p)≥ 1-t, 2 < p < Q}
for η(p) = 1/2 - 1/p/1/2 - 1/Qand K= K(d,Q) is the constant in Lemma <ref>, for any choice of Q>2.
The operator T = -Δ is invertible as a map from Ḣ^1 to Ḣ^-1, and the inverse T^-1 is bounded from Ḣ^-1 to Ḣ^1 with norm 1 since the Poisson equation with periodic boundary conditions has a unique solution in Ḣ^1 for f ∈ H^-1 with bound u_Ḣ^1≤f_Ḣ^-1. From Lemma <ref> the inverse is also bounded with norm K=K(d,Q) from Ẇ^-1,Q to Ẇ^1,Q for any Q > 2. By the real method of interpolation <cit.>, for 2<p<Q we have that
W^1,p = [H^1, W^1,Q]_η(p), p
using the notation of <cit.> where η(p) = 1/2 - 1/p/1/2 - 1/Q. From the duality theorem (Theorem 3.7.1. of <cit.>), we have that
[H^-1, W^-1,Q]_η(p),p = ([H^1,W^1,Q']_η(p),p')'
From real interpolation, the right hand side equals (W^1,p')' = W^-1,p in our notation. Therefore, we have the necessary dual statement that parallels (<ref>):
W^-1,p = [H^-1,W^-1,Q]_η(p),p.
These interpolation spaces are not yet restricted to functions with periodic boundary conditions. Using the projection onto the space of continuous, periodic functions on as T and noticing that W^1,Q↪ H^1, we apply Lemma <ref> with K_0 = Ḣ^1 and have
Ẇ^1,p = [Ḣ^1, Ẇ^1,Q]_η(p), p.
The duality theorem still holds in this setting, so we also have
Ẇ^-1,p = [Ḣ^-1,Ẇ^-1,Q]_η(p),p.
Using the exact interpolation theorem, Theorem 7.23 of <cit.>, T^-1 is also a bounded map from Ẇ^-1,p to Ẇ^1,p with norm K^η(p):
T^-1f_Ẇ^1,p≤ K^η(p)f_W^-1,p.
The remainder of the proof is identical to that of the proof of Proposition 1 in <cit.>, but we state it here in our notation for completeness. Define S: Ẇ^1,p→ W^-1,p as the operator Su = -∇·(1/βA∇ u). Let V be the perturbation operator V: = T - S. Since A ∈, we have S≤ 1 and V≤ 1 - α/β. Therefore, as a mapping from Ẇ^1,p to Ẇ^-1,p,
T^-1V≤T^-1V≤ K^η(p)(1 - α/β)
Since T is invertible, S = T(I - T^-1V) is invertible provided K^η(p)(1- α/β) < 1. Moreover, as a mapping from W^-1, p to W^1,p,
S^-1≤(I-T^-1V)^-1T^-1≤K^η(p)/1-K^η(p)(1-α/β).
Therefore,
∇χ_L^p = χ_Ẇ^1,p≤1/βK^η(p)/1-K^η(p)(1-α/β)
provided K^η(p)(1- α/β) < 1. The bound and specified range of p follow.
Finally, we may prove Proposition <ref>
*
Lemma <ref> guarantees a p_0> 2 such that ∇χ^(2)_L^p in Lemma <ref> is bounded above by a constant for 2 < p< p_0. Then Lemma <ref> gives Lipschitz continuity of the solution map from L^q() ↦Ḣ^1() for q satisfying q_0 < q < ∞ for some q_0 > 2.
From the results of Lemma <ref> and Lemma <ref>, we have that we can take q_0 = 2p_0/p_0 - 2 where
p_0 = max{p | K^-η(p)≥ 1-t, 2 < p < Q}.
Therefore, bounds on p_0 may be inherited from bounds on K that appears in Lemma <ref>.
We can leverage the result of Proposition (<ref>) to also show continuity in the map A ↦A from L^q to Ḣ^1 in equation (<ref>).
If A ∈ in (<ref>), there exists q_0 < ∞ such that for all q satisfying q_0 < q ≤∞, the solution map A ↦A of equations (<ref>) and (<ref>) is Lipschitz-continuous as a map from L^q(;) to .
Recall equation (<ref>):
A = ∫_(A(y) + A(y) ∇χ(y)^T)
For two different coefficient functions , ∈, , the associated solutions of (<ref>), and , the associated homogenized coefficients of (<ref>), we may write
- = ∫_ - + ∇()^T - ∇ ()^T
| - |_F ≤|∫_ - |_F
+ |∫_∇ - ∇ + ∇ -∇|_F
≤ - _L^2 + |∫_(∇ - ∇) + ∇(-)|_F
≤ - _L^2 + - _Ḣ^1_L^2 + ∇_L^2 - _L^2
≤(1 + √(d)β/α) - _L^2 + C β - _L^q
where C is the Lipschitz constant from Proposition <ref>. The embedding L^q ↪ L^2 gives Lipschitz continuity from L^q to of the map.
§ PROOFS OF APPROXIMATION THEOREMS
In this section we prove the approximation theorems stated in Section <ref>.
*
By Proposition <ref>, there exists a continuous map
∈ C(L^2(;);Ḣ^1(;)) such that (A) = G(A) for any A ∈ K. By <cit.>, there exists a FNO Ψ: L^2(;) →Ḣ^1(;) such that
sup_A ∈ K(A) - Ψ(A) _Ḣ^1 < ϵ.
Therefore
sup_A ∈ KG(A) - Ψ(A)_Ḣ^1 = sup_A ∈ K(A) - Ψ(A)_Ḣ^1 < ϵ
as desired.
*
The result follows as in Theorem <ref> by applying Lemma <ref> instead of Proposition <ref>.
*
By Proposition <ref>, there exists a Lipschitz-continuous map
G ∈ C(L^q(;); Ḣ^1(;)). By <cit.>, there exists a NO Ψ: L^q(;) →Ḣ^1(;) such that
sup_A ∈ KG(A) - Ψ(A) _Ḣ^1 < ϵ.
*
The result follows as in Theorem <ref> by applying Lemma <ref> instead of Proposition <ref>.
§ PROOFS FOR MICROSTRUCTURE EXAMPLES
For d = 2, let 𝒜⊂ be a set of piecewise-constant functions on whose level sets are nontrivial for at most M constants, and each level set consists of the union of at most M convex polygons. The set 𝒜 is compact in L^2(;).
For arbitrary ϵ > 0, we identify a finite ϵ-net, denoted 𝒩_ϵ, for 𝒜. Partition the unit square by a uniform grid of 2^2ℓ boxes for integer ℓ > 1, and denote by B_ℓ this set of boxes. Let 𝒩_ϵ be the set of piecewise-constant functions that are constant over each box and take values only in the set {z: z = α + η k, k ∈{0, 1, …, ⌈β-α/η⌉}}, for η < ϵ.
Since each function takes at most M values on at most M convex polygons, we have naive upper bounds of M^2 total polygons partitioned from one another by at most M^4 line segments, not including the domain boundaries. To see this, note that each n-sided convex polygon bordered only by other convex polygons must border at least n other polygons, and thus the number of sides of each polygon is upper bounded by the total number of polygons. For any function A ∈𝒜, we can find a function A' ∈𝒩_ϵ such that, on boxes with no intersections by line segments of A, A-A'_L^∞≤η/2, and on boxes with line intersections, A-A'_L^∞ < |α - β|. Since the latter error is not controlled, we must bound the number of boxes that may be intersected by line segments of A. Each line segment may pass through at most 2^ℓ + 1 boxes, so a set, denoted B_i, of at most M^42^ℓ + 1 boxes may have errors of |α - β|. Therefore,
A-A'^2_L^2 ≤∑_b ∈ B_ℓA-A'^2_L^∞(b)(1/2^2ℓ)
≤∑_b ∈ B_i |α - β|^2(1/2^2ℓ) + ∑_b ∈ B∖ B_iη/2(1/2^2ℓ)
≤M^4 2^ℓ + 1/2^2ℓ |α - β|^2 + η/2.
A choice of ℓ > log_2(M^4|α - β|^2/ϵ) + 2 and η < ϵ gives the result.
Let 𝒜⊂ be the set of γ-Lipschitz star-shaped inclusions, defined as functions that take one constant value inside a domain B, whose boundary is defined by a γ-Lipschitz polar function r = g(θ) contained in the unit square, and another value outside the domain B. Then, the set 𝒜 is compact.
For arbitrary ϵ > 0, we identify a finite ϵ-net, denoted 𝒩_ϵ, for 𝒜. Partition the unit square by a uniform grid of 2^2ℓ boxes for integer ℓ > 1, and denote by B_ℓ this set of boxes. Let 𝒩_ϵ be the set of piecewise-constant functions that are constant over each box and take values only in the set {z: z = α + η k, k ∈{0, 1, …, ⌈β-α/η⌉}}, for η < ϵ.
We need to bound the number of grid boxes of height and width h = 1/2^ℓ that can be intercepted by the parametric curve C=(g(θ), θ) in the unit square. First observe that if the curve is partitioned into segments of length h, each segment can intersect at most 4 grid boxes. This can be seen by noticing that, although a curve can pass through four grid boxes that meet at a corner with arbitrarily small arc length, to reach another one, the curve would have to cross one of the four boxes entirely, which would entail an arc length longer than h. Now note that the length of the curve is bounded above by
L = ∫_C ≤∫_0^2π√(r^2 + (dr/dθ)^2) ≤∫_0^2π√(2 + γ^2) ≤ 2π√(2 + γ^2).
Therefore, the line intercepts at most 2^ℓ + 1(4π√(2 + γ^2)) boxes.
Now, for any function A ∈𝒜, we can find a function A' ∈𝒩_ϵ such that, on boxes with no intersections by the curve of A, A-A'_L^∞≤η/2, and on the set of boxes with intersections, denoted B_i, A-A'_L^∞ < |β-α|.
Therefore, we have
A-A'_L^2^2 ≤∑_b ∈ B_ℓA - A'_L^∞^2(1/2^2ℓ)
≤∑_b ∈ B_i|α - β|^2(1/2^2^ℓ) + ∑_b ∈ B∖ B_iη/2(1/2^2ℓ)
≤2^ℓ + 1(4π√(2 + γ^2))/2^2ℓ|α - β|^2 + η/2.
Picking ℓ > log_2((4π√(2 + γ^2))|α - β|^2/ϵ) and η < ϵ gives the result.
|
http://arxiv.org/abs/2306.03489v1
|
20230606081434
|
Extended Series of Correlation Inequalities in Quantum Systems
|
[
"Chigak Itoi",
"Hiroto Ishimori",
"Kota Sato",
"Yoshinori Sakamoto"
] |
math-ph
|
[
"math-ph",
"cond-mat.dis-nn",
"math.MP"
] |
Extended Series of Correlation Inequalities in Quantum Systems
Chigak Itoi, Hiroto Ishimori, Kota Sato, Yoshinori Sakamoto
==============================================================
A systematic derivation provides an extended series of correlation inequalities in quantum systems.
Each order of the truncated Taylor expansion of the spectral representation
for the Duhamel correlation function gives lower and upper bounds on it.
The obtained bound on the Duhamel function and the square root interpolation method enable us to derive a variational solution of the specific free energy in the transverse field Sherrington-Kirkpatrick model.
§ INTRODUCTION
Spectral representations of physical observables are known to be useful to study quantum statistical systems.
Famous correlation inequalities are obtained in these representations,
such as the Bogoliubov inequality, the Harris
inequality and the Falk-Bruch inequality <cit.>.
These inequalities provide bounds on physical observables, which enable us to prove rigorous theorems on these observables in quantum systems.
The Mermin-Wagner theorem <cit.> is
proven using the Bogoliubov correlation inequality. This theorem claims that spontaneous breaking of a continuous symmetry
cannot occur at any finite temperature in one or two dimensions.
Recently, Leschke, Manai, Ruder and Warzel have proven the non-zero variance of the overlap operator in the transverse Sherrington-Kirkpatrick (SK) model <cit.>, using the Falk-Bruch inequality <cit.>. Many researchers who study spin glass systems appreciate their result.
Alternatively, their result can be proven more easily by the Harris inequality <cit.> or other extended inequalities instead of the Falk-Bruch inequality.
In the present paper, several correlation inequalities
are obtained systematically in terms of spectral representations of operators.
The present paper is organized as follows. In section 2, definitions of several complex valued functions and main results of new correlation inequalities are provided. In section 3, the main results are proven in terms of contour integrations
of the spectral representation for operators. In section 4, the obtained correlation
inequalities are applied to the transverse field SK model.
We extend the square root interpolation for a variational solution of the replica symmetric
specific free energy given by Guerra and Talagrand <cit.>
to the quantum mechanically perturbed model.
§ DEFINITIONS AND MAIN RESULTS
Consider a quantum system with a Hamiltonian H.
The Duhamel function for bounded linear operators A, B is defined by
(A,B) =∫_0^1 dt ⟨ e^β t H A e^-β tH B ⟩,
which is important to represent a susceptibility of quantum spin systems.
To express the main theorem,
consider functions f : ℂ→ℂ, g : ℂ→ℂ and h : ℂ→ℂ defined by
f(z) := z coth z = z cosh z/sinh z, g(z):=1/f(z)= tanh z/z, h(z):= z/log( (1+z)/(1-z) ).
Note that f(-z) = f(z), g(-z) = g(z) and h(-z) = h(z).
The k-th differential coefficient of the operator e^tH A e^-tH at t=0 is defined by
C_A^k:= ( d^k/dt^k e^tH A e^-tH)_t=0 = [H, ⋯,[H, [H,A] ]⋯ ],
for an arbitrary positive integer k; we also define C_A^0 := A.
Theorem 1 Let A be a bounded linear operator and n be a positive even integer, such that n/2 is odd.
The difference between the expectation of
the anti-commutator and the Duhamel function is bounded by the following sequence with expectations of double commutators:
∑_k=1^n/2+1(β/2)^2k-1f^(2k)(0)/2(2 k)!⟨ [C_A^k-1^†, C_A^k] ⟩≤1/2⟨{A^†,A }⟩- (A^†, A) ≤∑_k=1^n/2(β/2)^2k-1f^(2k)(0)/2(2 k)!⟨ [C_A^k-1^†, C_A^k] ⟩.
Theorem 2
Let A be a bounded linear operator and n be a nonnegative even integer, such that n/2 is even.
The Duhamel function is bounded by the following sequence with the expectation of
the anti-commutators:
∑_k=0^n/2+1(β/2)^2kg^(2k)(0)/2(2 k)!⟨{C_A^k^†, C_A^k }⟩≤
(A^†, A) ≤∑_k=0^n/2(β/2)^2kg^(2k)(0)/2(2 k)!⟨{C_A^k^†, C_A^k }⟩.
Theorem 3
Let A be a bounded linear operator and n be a nonnegative even integer, such that n/2 is even.
The following expectation of commutator is bounded by sequences with the expectation of
the anti-commutators:
∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( β/2)^2k+1⟨{C_A^k+1^†,C_A^k+1}⟩≤⟨[A^†, [H, A] ]⟩≤∑_k=0 ^n/2g^(2k) (0)/(2k)!( β/2)^2k+1⟨{C_A^k+1^†,C_A^k+1}⟩.
Theorem 4
Let A be a bounded linear operator and n be a nonnegative even integer, such that n/2 is even.
The Duhamel function is bounded by the following sequence with the expectation of
the commutator and anti-commutator:
(A^†,A) ≤⟨{A^†,A}⟩∑_k=0^n/2h^(2k)(0)/(2k)!(⟨ [A^†, A]⟩/⟨{A^†,A}⟩)^2k.
For n=2, Theorem 1 implies
β/12⟨ [A^†, [H, A]] ⟩ -β^3/720⟨ [[H,A]^†, [H,[H, A]]] ⟩≤1/2⟨{A^†,A }⟩- (A^†, A)≤β/12⟨ [A^†, [H, A]] ⟩.
The upper bound is known as the Bogoliubov-Harris inequality <cit.>, and the lower bound gives a new inequality.
For n=0, Theorem 2 implies
1/2⟨{ A^†, A}⟩-β^2/24⟨{[H,A]^†, [H, A] }⟩≤ (A^†, A)≤1/2⟨{ A^†, A}⟩.
Note that the upper bound is well known, while the lower bound on the left hand side is new.
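As a numerical sanity check (our own sketch, not part of the original derivation), the bounds in the last display can be tested for a randomly drawn Hermitian H and a generic operator A by evaluating the Duhamel function and the anti-commutator expectations in the eigenbasis of H, using the spectral representations derived in the next section; the matrix dimension, seed, and β below are arbitrary choices made only for this example.

import numpy as np

def gibbs_quantities(H, A, beta):
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * E)
    Z = w.sum()
    Av = V.conj().T @ A @ V                       # matrix elements <nu|A|mu>
    dE = E[None, :] - E[:, None]                  # E_mu - E_nu
    with np.errstate(divide="ignore", invalid="ignore"):
        kernel = np.where(np.abs(dE) > 1e-12,
                          (w[:, None] - w[None, :]) / (beta * dE),
                          w[:, None] * np.ones_like(dE))
    duhamel = (np.abs(Av) ** 2 * kernel).sum() / Z          # (A^dagger, A)

    def anti(B):                                  # <{B^dagger, B}>
        Bv = V.conj().T @ B @ V
        return (np.abs(Bv) ** 2 * (w[:, None] + w[None, :])).sum() / Z

    return duhamel, anti(A), anti(H @ A - A @ H)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (M + M.conj().T) / 2                          # random Hermitian Hamiltonian
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
beta = 0.7
d, aA, aHA = gibbs_quantities(H, A, beta)
print(aA / 2 - beta**2 / 24 * aHA, "<=", d, "<=", aA / 2)   # the n = 0 bounds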
Several inequalities given by Theorem 1 and 2 have been obtained by Brankov and Tonchev <cit.>.
For n=0, Theorem 3 implies
β/2⟨{[H,A]^†, [H, A] }⟩-β^3/24⟨{[H,[H,A]]^†, [H,[H, A]] }⟩≤⟨ [A^†,[H, A]] ⟩≤β/2⟨{[H,A]^†, [H, A] }⟩.
These are new inequalities.
For n=4, Theorem 4 gives a new inequality
2(A^†,A)/⟨{A^†,A}⟩≤ 1- 1/3(⟨ [A^†, A]⟩/⟨{A^†,A}⟩)^2
-4/45(⟨ [A^†, A]⟩/⟨{A^†,A}⟩)^4.
§ PROOFS
Spectral representation is well-known as a useful method to represent correlation functions in quantum systems <cit.>.
To define a spectral representation of correlation functions, define
energy eigenstates |μ⟩ belonging to the energy eigenvalue E_μ
H | μ⟩ = E_μ |μ⟩.
The partition function for inverse temperature β is
Z(β) := Tr e^-β H = ∑_μ e^-β E_μ.
Define spectral function Q_A,B(ω) of bounded linear operators A, B for ω∈ℝ by
Q_A,B(ω) := 1/Z(β)∑_μ,ν e^-β E_ν⟨ν | A| μ⟩⟨μ | B| ν⟩ (1+e^-βω) δ (E_μ-E_ν-ω).
The function Q_A,B(ω) has the following properties. Q_A,B(ω) is bilinear in A and B. The complex conjugate is given by Q_A,B(ω)^* = Q_B^†, A^†(ω). Q_A,B (-ω) =Q_B,A(ω). Q_A^†, A(ω) ≥ 0, and Q_A^†,A(ω)=0 implies
⟨ A ⟩ =0.
Spectral representations for
several correlation functions are given in the following.
Lemma 1 The spectral representation of the expectation of the anti-commutator between A, B is given by
⟨{A, B}⟩ = ⟨ (AB+BA) ⟩ = ∫_-∞^∞ dω Q_A,B(ω).
Proof. The right hand side is
RHS = ∫_-∞^∞ dω1/Z(β)∑_ν,μ e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩ (1+e^-βω)δ (E_μ-E_ν -ω)
= 1/Z(β)∑_ν,μ e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩(1 + e^ -β(E_μ-E_ν))
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩ e^-β E_ν +
⟨μ | A| ν⟩⟨ν| B|μ⟩ e^ -β E_ν)
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩ +
⟨ν | B|μ⟩⟨μ|A |ν⟩ )e^-β E_ν.
Since 1= ∑_μ | μ⟩⟨μ|, the left hand side is
LHS = 1/Z(β) Tr (AB+BA) e^-β H = 1/Z(β)∑_ν⟨ν | (AB+BA) e^-β H|ν⟩
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩ +
⟨ν | B|μ⟩⟨μ|A |ν⟩ )e^-β E_ν.
This is identical to the right hand side.
Lemma 2 The spectral representation of the expectation of the commutator between A, B is given by
⟨ [A, B] ⟩ = ∫_-∞^∞ dωtanhβω/2 Q_A,B(ω)
Proof. The right hand side is
RHS = ∫_-∞^∞ dωtanhβω/21/Z(β)∑_ν,μ e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩ (1+e^-βω)δ (E_μ-E_ν -ω)
= 1/Z(β)∑_ν,μtanhβ (E_μ-E_ν)/2 e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩(1 + e^ -β(E_μ-E_ν))
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩e^β(E_μ-E_ν)/2 - e^-β(E_μ-E_ν)/2/e^β(E_μ-E_ν)/2 + e^-β(E_μ-E_ν)/2
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩e^-β E_ν - e^-β E_μ/e^-β E_μ + e^-β E_ν
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩
( e^-β E_ν-e^-β E_μ)
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩ -
⟨ν | B|μ⟩⟨μ|A |ν⟩ )e^-β E_ν.
Assume resolution of unity 1= ∑_μ | μ⟩⟨μ|, then the left hand side is
LHS = 1/Z(β) Tr [A,B] e^-β H = 1/Z(β)∑_ν⟨ν | (AB-BA) e^-β H|ν⟩
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩ -
⟨ν | B|μ⟩⟨μ|A |ν⟩ )e^-β E_ν.
This is identical to the right hand side.
Lemma 3The spectral representation of the expectation of the double commutator is given by
⟨ [ A, [H ,B]] ⟩ = ∫_-∞^∞ dωωtanhβω/2 Q_A,B(ω).
Proof. The right hand side is
RHS = ∫_-∞^∞ dωωtanhβω/21/Z(β)∑_ν,μ e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩ (1+e^-βω)δ (E_μ-E_ν -ω)
= 1/Z(β)∑_ν,μ(E_μ-E_ν) tanhβ (E_μ-E_ν)/2
e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩(1 + e^ -β(E_μ-E_ν))
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ (E_μ-E_ν)
e^β(E_μ-E_ν)/2 - e^-β(E_μ-E_ν)/2/e^β(E_μ-E_ν)/2 + e^-β(E_μ-E_ν)/2
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ (E_μ-E_ν)
e^-β E_ν - e^-β E_μ/e^-β E_μ + e^-β E_ν
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ (E_μ-E_ν)
( e^-β E_ν-e^-β E_μ) .
Since 1= ∑_μ | μ⟩⟨μ|, the left hand side is
LHS = 1/Z(β) Tr [A,[H, B] ]e^-β H = 1/Z(β)∑_ν⟨ν | [A (HB-BH) - (HB-BH)A ]e^-β H|ν⟩
= 1/Z(β)∑_ν⟨ν | ( AHB- E_ν A B - B A E_ν +BHA) e^-β E_ν|ν⟩
= 1/Z(β)∑_ν,μ ( ⟨ν | A| μ⟩⟨μ| B|ν⟩( E_μ-E_ν) -(E_ν -E_μ)
⟨ν | B|μ⟩⟨μ|A |ν⟩ )e^-β E_ν
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ (E_μ-E_ν)
( e^-β E_ν-e^-β E_μ) .
This is identical to the right hand side.
Lemma 4 The Duhamel function for bounded linear operators A, B
(A,B) = ∫_-∞^∞dω/βωtanhβω/2 Q_A,B(ω)
Proof. The right hand side is
RHS = ∫_-∞^∞dω/βωtanhβω/21/Z(β)∑_ν,μ e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩ (1+e^-βω)δ (E_μ-E_ν -ω)
= 1/ Z(β)∑_ν,μ1/β(E_μ-E_ν)tanhβ (E_μ-E_ν)/2
e^-β E_ν⟨ν | A| μ⟩⟨μ| B|ν⟩(1 + e^ -β(E_μ-E_ν))
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩1/β(E_μ-E_ν)e^β(E_μ-E_ν)/2 - e^-β(E_μ-E_ν)/2/e^β(E_μ-E_ν)/2 + e^-β(E_μ-E_ν)/2
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩1/β(E_μ-E_ν)e^-β E_ν - e^-β E_μ/e^-β E_μ + e^-β E_ν
(e^-β E_ν + e^ -β E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ e^-β E_ν-e^-β E_μ/β(E_μ-E_ν) .
Since 1= ∑_μ | μ⟩⟨μ|, the left hand side is
LHS = ∫_0^1 dt ⟨ e^β t H A e^-β tH B ⟩
= ∫_0^1 dt
1/Z(β)∑_ν⟨ν |e^β t H A e^-β tH B e^-β E_ν|ν⟩
= ∫_0^1 dt 1/Z(β)∑_ν,μ⟨ν |e^β t E_ν A e^-β t E_μ | μ⟩⟨μ| B e^-β E_ν|ν⟩
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ e^-β E_ν∫_0^1 dt e^tβ (E_ν-E_μ)
= 1/Z(β)∑_ν,μ⟨ν | A| μ⟩⟨μ| B|ν⟩ e^-β E_νe^β (E_ν-E_μ) -1/β (E_ν-E_μ).
This is identical to the right hand side.
Lemma 5 For arbitrary bounded operators A, B and for an arbitrary positive integer k,
the following identities are valid
ω^k Q_A, B(ω) = Q_A, C_B^k(ω), ω^2k Q_A^†, A(ω) = Q_C_A^k^†, C_A^k(ω).
Proof.
For arbitrary bounded linear operators A, B, the following is valid
Q_A, [H,B](ω) = 1/Z∑_μ,ν e^-β E_ν⟨ν | A | μ⟩⟨μ | [H,B] | ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= 1/Z∑_μ,ν e^-β E_ν⟨ν | A | μ⟩⟨μ |(E_μ-E_ν) B| ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= 1/Z∑_μ,ν e^-β E_ν⟨ν | A | μ⟩ω⟨μ | B| ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= ω Q_A,B(ω)
Therefore, the first identity is valid for k=1. Also, the identity for k>1 is obtained by successive use of the above identity.
Since H^†=H, and [H,A]^†= [A^†, H],
Q_[H,A]^†, [H,A](ω) = 1/Z∑_μ,ν e^-β E_ν⟨ν |[A^†, H] | μ⟩⟨μ | [H,A] | ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= 1/Z∑_μ,ν e^-β E_ν⟨ν | (E_μ-E_ν) A^† | μ⟩⟨μ |(E_μ-E_ν) A| ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= ω^2/Z∑_μ,ν e^-β E_ν⟨ν | A^† | μ⟩⟨μ | A| ν⟩ (1+e^-βω)
δ(E_μ-E_ν -ω)
= ω^2 Q_A^†,A(ω).
The successive use of this identity
ω^2 Q_A^†, A(ω) = Q_[H,A]^†, [H,A](ω) and the definition
C_A^k := [H, [H, ⋯ [H, A] ⋯ ] ] given in Theorem 1
give
ω^2k Q_A^†, A(ω) = ω^2k-2 Q_[H,A]^†, [H,A](ω)
= ω^2k-4 Q_[H, [H,A]]^†, [H,[H,A]](ω) =
⋯ = Q_C_A^k^†, C_A^k(ω).
This completes the proof.
Let n be a nonnegative even integer, and define a function f_n: ℝ→ℝ, g_n: ℝ→ℝ and
h_n: (-1,1) →ℝ by
f_n(x) := f(x) -∑_m=0 ^n f^(m)(0)/m! x^m, g_n(x) := g(x) -∑_m=0 ^n g^(m)(0)/m! x^m
, h_n(x) := h(x) -∑_m=0 ^n h^(m)(0)/m! x^m.
Lemma 6 For any x ∈ℝ and for any nonnegative even integer n, f_n(x) ≤ 0 and g_n(x) ≥ 0
for an odd n/2, and f_n(x) ≥ 0 and g_n(x) ≤ 0 for an even n/2.
For any x ∈ (-1,1) and for any nonnegative even integer n, h_n(x) ≤ 0.
Proof.
First, we prove the sign definiteness of the function f_n(x). Since f_n(x) is an even function, it is sufficient to show the definiteness of
f_n(x) for x ≥ 0.
For x ≥ 0 and for n=0,
f_0(x)=f(x) -f(0)= (x-tanh x)/tanh x≥ 0,
since (x-tanh x)' = 1-1/ cosh^2 x ≥ 0.
For a positive even integer n,
n-th derivative of the function f is represented in the following contour integral around x
depicted in Figure <ref> (a)
f^(n)(x) =n! _C_xdz/2π if(z)/(z-x)^n+1
=n! ∑_k=-∞^∞ _C_iπ kdz/2π iz cosh z/(z-x)^n+1sinh z .
Note that the contour depicted in Figure <ref> (a)
can be deformed into that depicted in Figure <ref> (b).
Thus, the contour integral (<ref>)
is rewritten into that along other contours depicted in Figure <ref> (a) .
The Cauchy formula gives
f^(n)(x) = n! ∑_k=-∞^∞ _C_iπ kdz/2π iz cosh z/(z-x)^n+1 (z-ikπ)(sinh z)'
= n! ∑_k=-∞^∞ _C_iπ kdz/2π ii π k /(i π k-x)^n+1 (z-iπ k)
= -n!/(iπ)^n∑_k=1^∞[ k /(k+ix/π)^n+1+ c.c.]
= -n!/(iπ)^n [-i x/πζ(n+1, i x/π) + ζ(n, i x/π) + c.c. ],
where ζ(s,z) is the Hurwitz zeta function defined by
ζ(s,z) := ∑_k=1^∞ 1 /(k+z)^s,
for Re s > 1 and z ∉ℤ^-, where ℤ^- is the set of negative integers. Note that
f_n^(n)(x) =
f^(n) (x) -f^(n) (0)
= -n!/(iπ)^n∑_k=1^∞[ k /(k+ix/π)^n+1+ c.c. -2k/k^n+1]
= (-1)^n/2 2n!/π^n∑_k=1^∞ k^-n[1- r_k(x)^-n-1cos (n+1) θ_k(x) ],
where r_k(x) exp i θ_k(x) := 1+ix/(π k). Note that
r_k(x) = √(1+x^2/(π k)^2)≥ 1,
which implies
1- r_k(x)^-n-1cos (n+1) θ_k(x) ≥ 0.
Therefore, the expression (<ref>) implies that for any x>0,
f_n^(n) (x) ≥ 0 for even n/2, and f_n^(n) (x) ≤ 0
for odd n/2. Since differential coefficients of f_n at
the origin vanish
f_n(0)=0= f_n'(0)=f_n”(0) = ⋯ = f_n^(n-1)(0).
Therefore, f_n(x) ≤ 0 for an odd n/2, and f_n(x) ≥ 0 for an even n/2.
Next, we prove the sign definiteness of the function g_n(x).
Since g_n(x) is an even function, it is sufficient to show the definiteness of
g_n(x) for x ≥ 0.
Let x be a nonnegative number and n be a nonnegative even integer.
The n-th order derivative of the function g is represented in the following contour integral around x depicted in Figure <ref> (a)
g^(n)(x) =n! _C_xdz/2π ig(z)/(z-x)^n+1
=n! ∑_k=-∞^∞ _C_iπ (k+1/2)dz/2π isinh z/(z-x)^n+1 zcosh z .
Note that the contour depicted in Figure <ref> (a)
can be deformed into that depicted in Figure <ref> (b).
Thus, the contour integral (<ref>)
is rewritten into that along other contours depicted in Figure <ref> (b) .
As in the calculation for f_n^(n)(x), g_n^(n)(x) can be obtained as
g_n^(n)(x) =
g^(n) (x) -g^(n) (0)
=
-n!/(iπ)^n+2∑_k=1^∞[ 1 /(k-1/2+ix/π)^n+1 (k-1/2)+ c.c. -2/(k-1/2)^n+2]
= (-1)^n/2+1 2n!/π^n+2∑_k=1^∞ (k-1/2)^-n-2[1- s_k(x)^-n-1cos (n+1) ϕ_k(x) ],
where s_k(x) exp i ϕ_k(x) := 1+ix/[π (k-1/2)]. This implies that
g_n^(n)(x) ≥ 0 for an odd n/2, and g_n^(n)(x) ≤ 0 for an even n/2.
Since differential coefficients of g_n at
the origin vanish
g_n(0)=0= g_n'(0)=g_n”(0) = ⋯ = g_n^(n-1)(0).
Therefore, g_n(x) ≥ 0 for an odd n/2, and g_n(x) ≤ 0 for an even n/2.
Finally, we prove the sign definiteness of the function h_n(x). Since h_n(x) is an even function,
it is sufficient to show the definiteness of
h_n(x) for x ∈ [0,1).
The n-th order derivative of the function h(x) can be represented in terms of the following contour integral around x depicted in Figure <ref> (a)
h^(n)(x) =n! _C_xdz/2π ih(z)/(z-x)^n+1
=n! _C_xdz/2π i z/(z-x)^n+1log (1+z)/(1-z) .
Note that the logarithmic function in the integrand has a branch cut Re z ≤ -1, Re z ≥ 1, on the real axis Im z=0, as depicted in Figure <ref> (b).
Rewrite this integration in terms of w=1/z
h^(n)(x)
= -n!_C_1/xdw/2π i w^2 w^-1/(w^-1-x)^n+1log (1+w^-1)/(1-w^-1)
= -n! _C_1/xdw/2π i w^n-2/(1-xw)^n+1log (w+1)/(w-1) .
Note that the logarithmic function in the integrand has a branch cut -1 ≤ Re w ≤ 1, on the real axis Im w=0,as depicted in Figure <ref> (c).
This implies
h^(n)(x)
= -n! ∫_-1 ^1dw/2π i w^n-2/(1-xw)^n+1[1/log (1+w)/(1-w) - π i -1/log (1+w)/(1-w)+π i]
= -n! ∫_-1 ^1 dw w^n-2/(1-xw)^n+1 [ |log (1+w)/(1-w) |^2+ π^2]≤ 0,
for any x ∈ (-1,1). This fact and
h_n^(m)(0) =0 for any integer m ∈ [0,n] imply h_n(x) ≤ 0 for any x ∈ [0,1).
This completes the proof.
Proof of Theorem 1.
For a bounded linear operator A,
1/2⟨{A^†, A}⟩ -(A^†,A) = ∫_-∞ ^∞ dω(1/2-1/βωtanhβω/2) Q_A^†,A (ω)
= ∫_-∞ ^∞dω/βωtanhβω/2(βω/2βω/2 - 1 ) Q_A^†,A (ω).
For a positive odd n/2,
Lemma 6, Lemma 3,
Lemma 5 and Q_A^†,A (ω) ≥ 0 imply
1/2⟨{A^†, A}⟩ -(A^†,A)≤∫_-∞ ^∞dω/βωtanhβω/2∑_k=2 ^nf^(k) (0)/k!( βω/2)^kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2ωtanhβω/2∑_k=1 ^n/2f^(2k) (0)/(2k)!( β/2)^2k-1ω^2k-2Q_A^†,A (ω)
=∫_-∞ ^∞dω/2ωtanhβω/2∑_k=1 ^n/2f^(2k) (0)/(2k)!( β/2)^2k-1Q_C_A^k-1^†,C_A^k-1 (ω)
=∑_k=1 ^n/2f^(2k) (0)/2(2k)!( β/2)^2k-1⟨ [C_A^k-1^†,[H,C_A^k-1]] ⟩.
Since (n+ 2)/2 is even,
Lemma 6 and Q_A^†,A (ω) ≥ 0 imply
1/2⟨{A^†, A}⟩ -(A^†,A)≥∫_-∞ ^∞dω/βωtanhβω/2∑_k=2 ^n+2f^(k) (0)/k!( βω/2)^kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2ωtanhβω/2∑_k=1 ^n/2+1f^(2k) (0)/(2k)!( β/2)^2k-1ω^2k-2Q_A^†,A (ω)
=∫_-∞ ^∞dω/2ωtanhβω/2∑_k=1 ^n/2+1f^(2k) (0)/(2k)!( β/2)^2k-1Q_C_A^k-1^†,C_A^k-1 (ω)
=∑_k=1 ^n/2+1f^(2k) (0)/2(2k)!( β/2)^2k-1⟨ [C_A^k-1^†,[H,C_A^k-1]] ⟩
These and [H,C_A^k-1] = C_A^k
complete the proof of Theorem 1.
Proof of Theorem 2.
For a bounded linear operator A,
(A^†,A) = ∫_-∞ ^∞ dω1/βωtanhβω/2 Q_A^†,A (ω)
= ∫_-∞ ^∞dω/2 g( βω/2) Q_A^†,A (ω).
For a nonnegative even n/2, (n+2)/2 is an odd integer,
Lemma 6, Lemma 3,
Lemma 5 and Q_A^†,A (ω) ≥ 0 imply
(A^†,A)≥∫_-∞ ^∞dω/2∑_k=0 ^n+2g^(k) (0)/k!( βω/2)^kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( βω/2)^2kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( β/2)^2kQ_C_A^k^†,C_A^k (ω)
=∑_k=0 ^n/2+1g^(2k) (0)/2(2k)!( β/2)^2k⟨{C_A^k^†,C_A^k }⟩.
Since n/2 is even,
Lemma 6 and Q_A^†,A (ω) ≥ 0 imply
(A^†,A)≤∫_-∞ ^∞dω/2∑_k=0 ^ng^(k) (0)/k!( βω/2)^kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2∑_k=0 ^n/2g^(2k) (0)/(2k)!( βω/2)^2kQ_A^†,A (ω)
=∫_-∞ ^∞dω/2∑_k=0 ^n/2g^(2k) (0)/(2k)!( β/2)^2kQ_C_A^k^†,C_A^k (ω)
=∑_k=0 ^n/2g^(2k) (0)/2(2k)!( β/2)^2k⟨{C_A^k^†,C_A^k }⟩.
This completes the proof of Theorem 2.
Proof of Theorem 3.
For a bounded linear operator A,
⟨ [A^†,[H,A]]⟩= ∫_-∞ ^∞ dωωtanhβω/2 Q_A^†,A (ω)
= ∫_-∞ ^∞ dωβ/2 g( βω/2) ω^2 Q_A^†,A (ω).
For a nonnegative even n/2, (n+2)/2 is an odd integer,
Lemma 6, Lemma 3,
Lemma 5 and Q_A^†,A (ω) ≥ 0 imply
⟨ [A^†,[H,A]]⟩≥∫_-∞ ^∞ dωβ/2∑_k=0 ^n+2g^(k) (0)/k!( βω/2)^kω^2 Q_A^†,A (ω)
=∫_-∞ ^∞ dω∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( β/2)^2k+1ω^2k+2Q_A^†,A (ω)
=∫_-∞ ^∞ dω∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( β/2)^2k+1Q_C_A^k+1^†,C_A^k+1 (ω)
=∑_k=0 ^n/2+1g^(2k) (0)/(2k)!( β/2)^2k+1⟨{C_A^k+1^†,C_A^k+1}⟩.
Since n/2 is even,
Lemma 6 and Q_A^†,A (ω) ≥ 0 imply
⟨ [A^†,[H,A]]⟩≤∫_-∞ ^∞ dω∑_k=0 ^ng^(k) (0)/k!( β/2)^k+1ω^k+2Q_A^†,A (ω)
=∫_-∞ ^∞ dω∑_k=0 ^n/2g^(2k) (0)/(2k)!( β/2)^2k+1ω^2k+2Q_A^†,A (ω)
=∫_-∞ ^∞ dω∑_k=0 ^n/2g^(2k) (0)/(2k)!( β/2)^2k+1Q_C_A^k+1^†,C_A^k+1 (ω)
=∑_k=0 ^n/2g^(2k) (0)/(2k)!( β/2)^2k+1⟨{C_A^k+1^†,C_A^k+1}⟩.
This completes the proof of Theorem 3.
Proof of Theorem 4.
The spectral representation of the
Duhamel function for bounded linear operators A^†, A is
(A^†,A) = ∫_-∞^∞dω/βωtanhβω/2 Q_A^†, A(ω)= ∫_-∞^∞dω/2 g( βω/2) Q_A^†, A(ω),
where g: ℝ→ℝ defined by (<ref>).
Define an integration measure
d μ (ω) := dω/2 g( βω/2)Q_A^†, A(ω)/ (A^†, A).
Note that
∫_-∞ ^∞ d μ(ω) =1.
The Jensen inequality for the
convex function f: ℝ→ℝ defined by (<ref>) implies
f( ⟨ [A^†,A ]⟩/2(A^†,A))= f( ∫_-∞ ^∞ d μ(ω) βω/2) ≤∫_-∞ ^∞ d μ(ω) f(βω/2)=
⟨{ A^†, A}⟩/2 (A^†, A).
This inequality gives
2(A^†,A)/⟨ [A^†,A ]⟩tanh⟨ [A^†,A ]⟩/2(A^†,A)= g( ⟨ [A^†,A ]⟩/2(A^†,A)) ≥2 (A^†, A)/⟨{ A^†, A}⟩,
then
tanh⟨ [A^†,A ]⟩/2(A^†,A)≥⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩.
This inequality can be represented in terms of the function h : (-1,1) →ℝ defined by (<ref>)
(A^†, A)/⟨{ A^†, A}⟩≤⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩/2 tanh^-1⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩= h(⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩),
which is obtained by Roepstorff <cit.>. Lemma 6 gives an upper bound on the right hand side
h(⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩) ≤∑_k=0^n h^(k)(0)/k!(⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩)^k.
Therefore,
(A^†, A)/⟨{ A^†, A}⟩≤∑_k=0^n h^(k)(0)/k!(⟨ [A^†,A ]⟩/⟨{ A^†, A}⟩)^k,
is obtained. This completes the proof of
Theorem 4.
§ APPLICATIONS TO THE TRANSVERSE FIELD SHERRINGTON-KIRKPATRICK MODEL
Here, we study quantum spin systems with random interactions. Let N be a positive integer and a site index
i (≤ N) is also a positive integer.
A sequence of spin operators
(σ^w_i)_w=x,y,z, 1≤ i ≤ N on a Hilbert space H :=⊗_i =1^N H_i is
defined by a tensor product of the Pauli matrix σ^w acting on H_i ≃ℂ^2 and unities.
These operators are self-adjoint and satisfy the commutation relation
[σ_k^y,σ_j^z]=2i δ_k,jσ_j^x ,
[σ_k^z,σ_j^x]=2i δ_k,jσ_j^y , [σ_k^x,σ_j^y]=2i δ_k,jσ_j^z ,
and each spin operator satisfies
(σ_j^w)^2 = 1.
The Sherrington-Kirkpatrick (SK) model is
well-known as a disordered classical spin system <cit.>.
The transverse field SK model
is a simple quantum extension.
Here, we study a magnetization process for a local field in these models.
Consider the following Hamiltonian with coupling constants
b_1, h∈ℝ
H_N( σ, b_1, g, h):=- 1 /√(N)∑_1≤ i<j≤ N g_i,jσ_i^z σ_j^z
-∑_j=1^N hσ_j^z-∑_j=1^N b_1 σ_j^x,
where g=(g_i,j)_1≤ i<j ≤ N
is a sequence of independent standard Gaussian random variables
obeying a probability density function
p(g):= ∏_1≤ i<j≤ N1/√(2π) e^-g_i,j ^2/2.
The Hamiltonian is invariant under ℤ_2-symmetry U σ_i ^z U^† = -σ_i^z
for the discrete unitary transformation U:= ∏_1≤ i ≤ Nσ_i^x
for h= 0.
For a positive β, the partition function is defined by
Z_N(β, b_1, g, h) := Tr e^ - β H_N( σ,b_1, g, h ),
where the trace is taken over the Hilbert space H.
Here, we define a square root interpolation for the transverse field SK
model, as for the SK model given by Guerra and Talagrand <cit.>.
Let z=(z_j)_1≤ j ≤ N be a sequence of independent standard Gaussian random variables.
This method gives a variational solution of specific free energy.
Consider the following interpolated Hamiltonian with parameters s ∈ [0,1], b(s):=b_1s+b_0(1-s)
for b_0 ∈ℝ and q ∈ [0,1]
H(s, σ):=
- √(s/N)∑_1≤ i<j≤ N g_i,jσ_i^z σ_j^z-∑_j=1^N (√(q(1-s)) z_j +h)σ_j^z
-∑_j=1^N b(s) σ_j^x.
Define an interpolated function φ_N(s)
φ_N(s) :=1/N𝔼log Tr e^-β H(s, σ)
where 𝔼 denotes the expectation over all Gaussian random variables (g_i,j)_1 ≤ i<j ≤ N and (z_i)_1≤ i≤ N.
Note that φ_N(1) is given by
φ_N(1) = 1/N𝔼log Z_N(β, b_1,g,h),
which is proportional to the specific free energy of the transverse field SK model.
Let f be an arbitrary function
of a sequence of spin operators σ=(σ_i^w)_ w=x,y,z, 1 ≤ i ≤ N.
The expectation of f in the Gibbs state is given by
⟨ f( σ) ⟩_s= Tr f( σ) e^ - β H(s,σ)/ Tr e^ - β H(s,σ).
The derivative of φ_N(s) with respect to s
is given by
φ'_N(s)= β/2N^3/2√(s)∑_1≤ i< j≤ N𝔼 g_i,j⟨σ_i^z σ_j^z ⟩_s - β√(q)/2N√(1-s)∑_i=1^N 𝔼 z_i ⟨σ_i^z ⟩_s+
β b'(s)/N∑_i=1^N 𝔼⟨σ_i^x ⟩_s.
The identities for the Gaussian random variables g_i,j and z_i
g_i,j p(g, z) = -∂ p/∂ g_i,j, z_i p(g, z) =- ∂ p/∂ z_i
and the integration by parts imply
φ'_N(s) = β^2/2N^2∑_1≤ i< j≤ N𝔼 [(σ_i^z σ_j^z , σ_i^z σ_j^z )_s-⟨σ_i^z σ_j^z ⟩_s^2 ]- β^2q/2N∑_i=1^N 𝔼 [(σ_i^z , σ_i^z )_s-⟨σ_i^z ⟩_s^2 ]+
β b'(s)/N∑_i=1^N 𝔼⟨σ_i^x ⟩_s
= β^2(N-1)/4N[𝔼(σ_1^z σ_2^z , σ_1^z σ_2^z )_s
-1 ]- β^2q/2[ 𝔼 (σ_1^z , σ_1^z )_s-1]+
β b'(s) 𝔼⟨σ_1^x ⟩_s
+ β^2/4 (1-q)^2-β^2/4𝔼⟨ (R_1,2-q) ^2⟩_s
,
where
the overlap operator R_a,b is defined by
R_a,b := 1/N∑_i=1^N σ_i^z,aσ_i^z,b,
for independent replicated Pauli operators σ_i^z,a (a= 1, 2,⋯, n) obeying the same Gibbs state with
the replica Hamiltonian
H(s, σ^1, ⋯, σ^n):= ∑_a=1^n H(s, σ^a).
This Hamiltonian is invariant under
permutation of replica spins. This permutation symmetry is known to be the replica symmetry.
The order operator R_a,b measures the replica symmetry breaking as an order operator.
In the identity (<ref>), we use
an upper bound
𝔼(σ_1^z σ_2^z , σ_1^z σ_2^z )_s≤ 1
given by the right-hand side of the inequality (<ref>),
and
the lower bound on the Duhamel function (σ_1^z,σ_1^z)
-β^2 q/2𝔼[(σ_1^z , σ_1^z )_s-1]
≤β^4q/48𝔼⟨{[H,σ_1^z ]^†, [H,σ_1^z]}⟩_s
=β^4q/6b(s)^2,
given by the left-hand side of the inequality (<ref>).
Then, we have
φ'_N(s) ≤
β^4q/48𝔼⟨{[H,σ_1^z ]^†, [H,σ_1^z]}⟩_s+
β b'(s)𝔼⟨σ_1^x ⟩_s+β^2/4 (1-q)^2-β^2/4𝔼⟨ (R_1,2-q) ^2⟩_s
= β^4q/6 b(s)^2+β b'(s)𝔼⟨σ_1^x ⟩_s+β^2/4 (1-q)^2-β^2/4𝔼⟨ (R_1,2-q) ^2⟩_s
≤ β^4q/6 b(s)^2+β b'(s)tanhβ b(s)+β^2/4 (1-q)^2,
where an upper bound tanhβ b(s) ≥⟨σ_1^x ⟩_s has been used as shown by Leschke, Manai, Ruder and Warzel <cit.>.
The bound on the Duhamel function (σ_1^z , σ_1^z )_s
can be obtained also from
the Falk-Bruch inequality <cit.> and our result for g(z) defined by (<ref>), instead of the simple use of the
inequality (<ref>).
For Φ(rtanh r):=tanh r/r=g(r),
(σ_1^z , σ_1^z )_s
≥Φ(⟨ [σ_1^z,[β H,σ_1^z] ]⟩_s/4),
is obtained by Leschke, Manai, Ruder and Warzel
as a corollary of the Falk-Bruch inequality <cit.>.
The equality and the inequality
⟨ [σ_1^z,[β H,σ_1^z] ]⟩_s=4β b(s)⟨σ_1^x ⟩_s≤4β b(s)tanhβ b(s), the monotonicity and the convexity of the function
Φ,
the definition of Φ and Lemma 6 give the lower bound on (σ_1^z , σ_1^z )_s
(σ_1^z , σ_1^z )_s≥Φ(β b(s)⟨σ_1^x ⟩_s)
≥Φ(β b(s)tanhβ b(s))=g(β b(s))
≥1-β^2/3b(s)^2,
which gives the same upper bound
(<ref>).
The advantage of using
the new inequality (<ref>) in the present case is that
the lower bound on the Duhamel function is easily expressed in terms of the known simple function
of β and b(s) with far fewer calculations.
Although a relation between the Falk-Bruch inequality and the new inequality (<ref>) can be understood
in the present specific case,
it is difficult to clarify that in general case.
Integration
of the inequality (<ref>) over
s∈[0,1] gives
φ_N(1) ≤ φ_N(0) + ∫_0 ^1 ds [β^4q/6 b(s)^2+β b'(s)tanhβ b(s)+β^2/4 (1-q)^2]
= φ_N(0) + β^4q/18 (b_0^2+b_0b_1+b_1^2) + logcoshβ b_1/coshβ b_0+ β^2/4 (1-q)^2=:Φ(q,b_0).
The model at s=0 becomes an independent spin model, and therefore
φ_N(0) = 𝔼log Trexpβ[ (√(q) z+h) σ^z + b_0 σ^x]
= 𝔼log 2 coshβ√((√(q) z +h)^2+ b_0^2).
A variational solution with the best bound
is obtained by minimizing the right hand side in (<ref>). The minimizer (q, b_0) should satisfy
0 = ∂/∂ qΦ(q,b_0)=𝔼β z( √(q)z +h) tanhβ√((√(q) z+h)^2+ b_0^2)/2√(q)√((√(q) z+h)^2+ b_0^2)
+β^4/18 (b_0^2+b_0b_1+b_1^2) - β^2/2 (1-q)
= 𝔼β^2 (√(q)z +h)^2/2[(√(q) z+h)^2+ b_0^2] cosh^2β√((√(q) z+h)^2+ b_0^2) +𝔼β b_0^2 tanhβ√((√(q) z+h)^2+ b_0^2)/2[(√(q) z+h)^2+ b_0^2]^3/2
+ β^4/18 (b_0^2+b_0b_1+b_1^2) - β^2/2 (1-q),
0 = ∂/∂ b_0Φ(q,b_0) =𝔼β b_0 tanhβ√((√(q) z+h)^2+ b_0^2)/√((√(q) z+h)^2+ b_0^2) -βtanhβ b_0
+β^4 q/18 (2b_0+b_1),
where the following integration by parts has been used to obtain the equation (<ref>)
𝔼β z( √(q)z +h) tanhβ√((√(q) z+h)^2+ b_0^2)/2√(q)√((√(q) z+h)^2+ b_0^2)
=𝔼β/√(q)∂/∂ z( √(q)z +h) tanhβ√((√(q) z+h)^2+ b_0^2)/2√((√(q) z+h)^2+ b_0^2).
This minimizer (q,b_0) gives the best bound on φ_N(1) as a variational solution
φ_N(1) ≤𝔼log 2 coshβ√((√(q) z +h)^2+ b_0^2)+ β^4q/18 (b_0^2+b_0b_1+b_1^2) + logcoshβ b_1/coshβ b_0+ β^2/4 (1-q)^2.
Note that the equation
(<ref>)
has a solution b_0 =0 in the classical limit b_1→0. In this case the equation (<ref>)
becomes
q= 𝔼tanh^2 β( √(q) z +h).
Then, the solution (<ref>) is identical to the SK solution <cit.>
in the classical limit b_1→0. In the classical case b_1=0,
it is conjectured that the replica symmetry is preserved with
lim_N→∞𝔼⟨ (R_1,2-q)^2⟩_1 =0,
and the SK solution of the specific free energy is exact
for
𝔼β^2/cosh^4 β (√(q)z +h)≤ 1,
whose boundary is called the Almeida-Thouless line <cit.>.
Recently, Chen has proven rigorously that the SK solution is exact for independent centered Gaussian random external fields,
instead of the uniform field h
<cit.>. For the uniform field h≠ 0, it still remains a conjecture.
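For a numerical illustration (not part of the derivation above), the replica symmetric equation q = 𝔼tanh^2 β( √(q) z +h) can be solved by fixed-point iteration; a minimal Python sketch is given below, where the quadrature size and the values of β and h are arbitrary choices.

```python
import numpy as np

def replica_symmetric_q(beta, h, n_nodes=101, n_iter=500, tol=1e-12):
    """Solve q = E[ tanh^2( beta*(sqrt(q)*z + h) ) ], z ~ N(0,1), by fixed-point iteration."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)   # Gauss-Hermite nodes for N(0,1)
    w = w / w.sum()
    q = 0.5
    for _ in range(n_iter):
        q_new = float(np.sum(w * np.tanh(beta * (np.sqrt(q) * x + h)) ** 2))
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

print(replica_symmetric_q(beta=0.8, h=0.3))
```

The same quadrature can be reused to evaluate the left-hand side of the Almeida-Thouless condition at the resulting fixed point.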
Consider a simple case h=0, where the model has the ℤ_2 symmetry.
If the replica symmetric solution q=0 is assumed in this case, the equation (<ref>) becomes
0= tanhβ b_0 -β b_0 + β^3/9b_0(b_0^2+b_0b_1+b_1^2),
which fixes b_0, and the equation (<ref>) is valid for any b_0.
Therefore, the ℤ_2 and replica symmetric variational solution
of the specific free energy
is given by
-φ_N(1)/β≥ -1/βlog 2coshβ b_1- β/4,
under the assumption q=0 for h=0.
This lower bound can be compared to results obtained elsewhere in the literature.
Leschke, Rothlauf, Ruder and Spitzer evaluate the specific free energy in
a different rigorous
method based on the annealed free energy
<cit.>.
They first give
a simple estimate of its lower
bound
in the high temperature region β < 1 <cit.>.
This lower bound is exactly the same as the right hand side of the inequality
(<ref>)
in the infinite volume limit.
In addition, they obtain a corrected estimate in a high temperature expansion <cit.>.
Although this estimate might be better,
the correction is quite small.
The specific free energy f_ st(β,b_1,h) obtained by the replica trick with
the static approximation <cit.>
is
-φ_N(1)/β≃
f_ st(β,b_1,h=0)= -1/βlog 2coshβ b_1,
which violates a rigorous upper bound
-φ_N(1)/β≤ -1/βlog 2coshβ b_1- β/8(1/cosh^2 β b_1 +tanhβ b_1/β b_1) ,
given by Leschke, Rothlauf, Ruder and Spitzer<cit.>.
For a strong field h≫ 1, however,
the approximate specific free energy
f_ st(β,b_1,h)≃ -1/β𝔼log 2 coshβ√((√(q) z +h)^2+ b_1^2),
must be a good approximation, since the following deviation of the strong field limit vanishes
lim_h→∞[β f_ st(β, b_1, h)+φ_N(1)] =0.
On the other hand,
the upper bound (<ref>) and q→ 1
in this limit
give an upper bound on the following deviation
lim_h→∞[φ_N(1) -𝔼log 2 coshβ√((√(q) z +h)^2+ b_1^2)] ≤β^4/18 (b_0^2+b_0b_1+b_1^2) + logcoshβ b_1/coshβ b_0 =: D,
where b_0 is the solution of the equation
tanhβ b_0
-β^3 /18 (2b_0+b_1)=0,
obtained from (<ref>). Since the relative deviation vanishes
lim_h→∞ D/ φ_N(1)=0,
in the strong field limit, the upper bound on φ_N(1) given by the right hand side of (<ref>)
must be a good approximation for h≫1 as well as the approximate specific free energy
f_ st(β, b_1,h).
Acknowledgments
C.I. is supported by JSPS (21K03393).
B N. N. Bogolubov, Physica 26, Sl (1960).
FB H. Falk and L. W. Bruch, Phys. Rev. 180, 442 (1969).
R G. Roepstorff, Commun. Math. Phys. 53, 143 (1977).
BT J. G. Brankov and N. S. Tonchev, Cond. Matt. Phys. 14, 13003 (2011).
H A. B. Harris, J. Math. Phys. 8, 1044 (1967).
S B. S. Shastry, J. Phys. A: Math. Gen. 25, L249 (1992).
MW N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966).
W H. Leschke, C. Manai, R. Ruder, and S. Warzel, Phys. Rev. Lett. 127, 207204 (2021).
G1 F. Guerra, Fields Inst. Commun. 30, 161 (2001).
T M. Talagrand, Mean field models for spin glasses I, II (Springer, Berlin, 2011).
SK D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35, 1792 (1975).
AT J. R. L. de Almeida and D. J. Thouless, J. Phys. A: Math. Gen. 11, 983 (1978).
WK-C W. -K. Chen, Electron. Commun. Probab. 26, 1 (2021).
LRRS H. Leschke, S. Rothlauf, R. Ruder and W. Spitzer, J. Stat. Phys. 182, 55 (2021).
KK D-H. Kim and J-J. Kim, Phys. Rev. B 66, 054432 (2002).
Semilinear fractional elliptic PDEs with gradient nonlinearities on open balls: existence of solutions and probabilistic representation
Guillaume Penent
Nicolas Privault
Division of Mathematical Sciences
School of Physical and Mathematical Sciences
Nanyang Technological University
21 Nanyang Link, Singapore 637371
July 31, 2023
We provide sufficient conditions for the existence of viscosity solutions of fractional semilinear elliptic PDEs of index α∈ (1,2) with polynomial gradient nonlinearities on d-dimensional balls, d≥ 2. Our approach uses a tree-based probabilistic representation based on α-stable branching processes, and allows us to take into account gradient nonlinearities not covered by deterministic finite difference methods so far. Numerical illustrations demonstrate the accuracy of the method in dimension d=10, solving a challenge encountered with the use of deterministic finite difference methods in high-dimensional settings.
Keywords:
Elliptic PDEs,
semilinear PDEs,
branching processes,
fractional Laplacian,
gradient nonlinearities,
stable processes,
subordination,
Monte-Carlo method.
Mathematics Subject Classification (2020):
35J15,
35J25,
35J60,
35J61,
35R11,
35B65,
60J85,
60G51,
60G52,
65C05,
33C05.
§ INTRODUCTION
We consider the class of semilinear elliptic PDEs
on the open ball
B(0,R) of radius R>0 in ℝ^d,
of the form
Δ_α u(x) +
f(x,u(x) ,∇ u(x) )
= 0, x ∈ B(0,R),
u(x) = ϕ(x), x ∈ℝ^d \ B(0,R),
where ϕ : ℝ^d →ℝ is a Lipschitz
function bounded on ^d ∖ B(0,R)
and
Δ_α u(x) = - ( - Δ )^α /2 u(x)
= 4^α /2Γ( d/2 + α /2 )/π^d/2|Γ(- α /2 )|lim_r→ 0^+∫_ℝ^d ∖ B(0,r)u(x+z)-u(x)/|z|^d+αdz,
denotes the fractional Laplacian
with parameter α∈ (0,2),
see, e.g., <cit.>,
where Γ(p) : = ∫_0^∞ e^-λ xλ^p-1 dλ
is the gamma function
and |z| is the Euclidean norm of z∈^d.
Here, f(x,y,z) is a polynomial nonlinearity
in the solution u and its gradient ∇ u =
( ∂ u / ∂ x_1,
…,
∂ u / ∂ x_d ), given
for some m≥ 0
by
f(x,y ,z) = ∑_l= (l_0,… , l_m) ∈ L_m c_l(x) y^l_0∏_i=1^m (b_i(x) · z)^l_i,
x∈ B(0,R), y∈ℝ, z∈ℝ^d,
where
L_m is a finite subset of ℕ^m+1,
c_l(x), l= (l_0,… , l_m)∈ L_m,
b_i(x), i=1,… , m,
are bounded measurable functions of x ∈^d,
with x· z := x_1z_1+⋯ + x_dz_d, x,z∈^d.
Elliptic PDEs
can be solved using weak solutions, see Definition 2.1 in <cit.>,
or viscosity solutions, see <cit.>
and Remark 2.11 in <cit.>.
Weak solutions can be obtained from the Riesz representation
or Lax-Milgram theorems,
see <cit.>, <cit.>.
See also <cit.>
for the use of the Perron method,
which however does not allow for nonlinearities as in (<ref>),
<cit.>
for semi-group methods,
and <cit.>, <cit.>
who respectively use branching diffusion processes and
superprocesses.
For problems of the form Δ_α u(x) + f(x) =0 with
u=ϕ on ^d ∖ O, existence of viscosity solutions
has been proved in <cit.>
under smoothness assumptions on
f,ϕ,
see also <cit.> and <cit.> for the existence of
viscosity solutions, resp. weak solutions, with nonlocal operators.
Regarding problems of the form Δ_α u(x) + f(x,u) =0,
existence of non trivial solutions on an open bounded set O with
u=0 on ^d ∖ O has been considered in
<cit.> using the mountain pass theorem.
On the other hand, stochastic branching processes
have been introduced in <cit.>,
<cit.> for the representation of PDE solutions.
The method has been applied to the blow-up and existence of
solutions for parabolic PDEs
in <cit.>, <cit.>.
This branching argument has been recently extended in
<cit.>
to the treatment polynomial nonlinearities in
gradient terms in elliptic PDEs,
following the approach of <cit.>
in the parabolic case.
In this approach, gradient terms
are associated to tree branches to which
a Malliavin integration by parts is applied.
In <cit.>,
this tree-based approach has been extended to
cover pseudo-differential operators of the form -η(-Δ /2)
and fractional Laplacians,
using a branching process 𝒯_x starting at x∈^d
and carrying a symmetric α-stable process.
PDE solutions will be constructed as the expectation
u(x) = 𝔼[ℋ_ϕ (𝒯_x)]
of a random functional ℋ_ϕ (𝒯_x)
of the underlying branching process, see (<ref>) below.
This approach has also been applied to nonlinear elliptic PDEs
with fractional Laplacians in <cit.>.
In this paper, we provide existence results and probabilistic representations for
the solutions of a large class of
semilinear fractional elliptic PDEs with gradient nonlinearities
of the form (<ref>), under the following conditions.
Assumption (AA):
The coefficients c_l(x),
l ∈ L_m, are uniformly bounded functions, i.e. we have
‖ c_l‖_∞ := sup_x ∈ℝ^d |c_l(x)| < ∞, l = (l_0,… ,l_m) ∈ L_m.
Assumption (BB):
The coefficients b_i(x), i = 1,…,m, are such that
b_0,∞ := max_1 ≤ i ≤ msup_x ∈ B(0,R) |b_i(x)| < ∞,
b_1,∞ := max_1 ≤ i ≤ msup_x ∈ B(0,R)|b_i(x)|/R-|x| < ∞.
We also denote by B(0,R) the
closed ball of radius R>0 in ℝ^d,
and we consider the fractional Sobolev space
H^α ( ^d ):=
{ u ∈ L^2( ^d ) : |u(x)-u(y)|/|x-y|^d/2+α /2∈ L^2 (^d×^d )}.
Next is the main result of this paper, see Theorem <ref> below,
in which we prove the existence of
(continuous) viscosity solutions
for fractional elliptic problems of the form (<ref>).
Let d≥ 2 and α∈ (1,2),
assume that the boundary condition ϕ
belongs to H^α (^d)
and is bounded on ^d ∖ B(0,R).
Then, under Assumptions (AA) and (BB),
the semilinear elliptic PDE (<ref>) admits a viscosity solution in
C^1(B(0,R)) ∩ C^0 ( B(0,R)),
provided that R and max_l ∈ L_m‖ c_l‖_∞ are
sufficiently small.
Our approach allows us to take into account gradient nonlinearities,
which has not been done by deterministic finite difference
methods, see, e.g., Section 6.3 of <cit.> for the one-dimensional
Dirichlet problem.
We also note that the tree-based Monte Carlo method
applies to large dimensional problems
whereas the application of deterministic finite difference
methods in higher dimensions is challenging, see e.g. <cit.>.
The proof of Theorem <ref> relies on existence results
for nonlinear elliptic PDEs with fractional Laplacians
derived in
<cit.>.
Existence of solutions in Theorem <ref>
will be obtained through a probabilistic representation
of the form u(x) := 𝔼 [ ℋ_ϕ (𝒯_x,0) ],
x∈ B(0,R),
where ℋ_ϕ (𝒯_x,0)
is a functional, defined in (<ref>), below
of a random branching tree
𝒯_x,0, which provides an alternative to the use of
finite difference methods.
More precisely, for each i=0,1,… ,d
we construct a sufficiently integrable functional
ℋ_ϕ (𝒯_x,i)
of a random tree 𝒯_x,i such that we have
the representations
u (x) = 𝔼[ ℋ_ϕ (𝒯_x,0)],
b_i(x) ·∇ u(x) = 𝔼[ ℋ_ϕ (𝒯_x,i)],
x ∈^d,
see Proposition <ref>,
which also yields a probabilistic representations for
the solutions of a wide class of semilinear elliptic PDEs of the form
(<ref>).
The main difficulty is to show
the integrability required on ℋ_ϕ (𝒯_x,i)
in the framework of viscosity solutions.
In particular, in Proposition <ref> we show that
(ℋ_ϕ (𝒯_x,i))_x∈ B(0,R)
is bounded in L^1 (Ω ), uniformly in x∈ B(0,R),
provided that d≥ 2.
For this, we extend arguments of <cit.>
from the standard Laplacian Δ and Brownian motion
to the fractional Laplacian Δ_α : =-(-Δ)^α /2
and its associated stable process.
There are, however, significant differences with the Brownian case,
and in the stable setting
we rely on bounds on the fractional Green and Poisson kernel
and stable process hitting times from <cit.> and
<cit.>.
This paper is organized as follows.
In Section <ref> we present
the description of the branching mechanism.
In Section <ref> we state and prove our main result Theorem <ref>
which gives the probabilistic representation of the solution and
its partial derivatives.
Finally, in Section <ref>
we present a numerical implementation
to illustrate the method on specific examples,
using Monte Carlo simulations
for nonlinear fractional PDEs
in dimension 10.
Before proceeding further, we recall some
preliminary results on
fractional Laplacians on the ball B(0,R) in ^d.
§.§.§ Poisson and Green kernels
Given (X_t)_t∈_+ an ^d-valued α-stable process
we consider the process
X_t,x := x+X_t, t∈_+,
started at
x∈^d, and the first hitting time
τ_B (x) := inf{ t ≥ 0, X_t,x∉B(0,R) }.
The Green kernel G_R(x,y) is given by
𝔼[
∫_0^τ_B (x) f(X_t,x) dt
]
= ∫_B(0,R) G_R(x,y) f(y) dy,
x∈^d,
for f a nonnegative measurable function on ^d.
On the open ball B(0,R) with α∈ (0,2) ∖{d}
the Green function takes the form
G_R(x,y) = κ_α^d/|x-y|^d-α∫_0^r_0(x,y)t^α/2-1/(1+t)^d/2 dt,
see <cit.>, where
r_0(x,y) := (R^2-|x|^2)(R^2-|y|^2)/R^2|x-y|^2 κ_α^d := Γ(d/2)/2^απ^d/2Γ^2(α/2).
The Poisson kernel
P_R(x,y) of the harmonic measure ℙ^x (X_τ_B (x),x∈ dy)
is given by
P_R(x,y) = 𝒜(d,-α) ∫_B(0,R)G_R(x,v)/|y-v|^d+α dv
where 𝒜(d,-α) := 2^αΓ[(d+α)/2]/(π^d/2 |Γ(-α/2)|).
When R>0, |x|<R and |y|>R, the corresponding Poisson kernel is given by
P_R(x,y) = C(α ,d) ( R^2-|x|^2/|y|^2-R^2)^α/21/|x-y|^d
with C(α ,d) :=Γ(d/2) π^-d/2-1sin(πα/2),
and it satisfies the bound
|∇ P_R(x,y)| ≤ (d+α)P_R(x,y)/R-|x|,
x∈ B(0,R),
y ∈^d ∖B(0,R),
see Lemma 3.1 in <cit.>.
As a consequence, we have the bound
|∇_x G_R(x,y)| ≤ d G_R(x,y)/min (
|x-y|,
R-|x|),
x,y ∈ B(0,R), x≠ y,
see Corollary 3.3 in <cit.>.
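For concreteness, both kernels can be evaluated numerically by a direct transcription of the closed-form expressions above; the Python sketch below is an illustration only, and the radius, stability index and evaluation points are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def green_ball(x, y, R, alpha):
    """Green function G_R(x, y) on B(0, R), transcribed from the display above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x.size
    kappa = gamma(d / 2) / (2**alpha * np.pi**(d / 2) * gamma(alpha / 2)**2)   # kappa_alpha^d
    r0 = (R**2 - x @ x) * (R**2 - y @ y) / (R**2 * np.sum((x - y)**2))         # r_0(x, y)
    integral, _ = quad(lambda t: t**(alpha / 2 - 1) / (1 + t)**(d / 2), 0, r0)
    return kappa / np.linalg.norm(x - y)**(d - alpha) * integral

def poisson_ball(x, y, R, alpha):
    """Poisson kernel P_R(x, y) for |x| < R < |y|, transcribed from the display above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x.size
    C = gamma(d / 2) * np.pi**(-d / 2 - 1) * np.sin(np.pi * alpha / 2)         # C(alpha, d)
    return C * ((R**2 - x @ x) / (y @ y - R**2))**(alpha / 2) / np.linalg.norm(x - y)**d

print(green_ball(np.zeros(2), np.array([0.3, 0.1]), R=1.0, alpha=1.5))
print(poisson_ball(np.zeros(2), np.array([1.2, 0.4]), R=1.0, alpha=1.5))
```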
In the sequel
we will need to estimate negative moments of the form
𝔼[|X_t|^-p],
where (X_t)_t∈_+ is an α-stable process.
For this, we represent
(X_t)_t∈_+ as the subordinated Brownian motion
(X_t)_t∈_+ = (B_S_t)_t∈_+, where (S_t)_t∈_+
is an α/2-stable subordinator
with Laplace exponent η(λ) = (2λ )^α / 2,
i.e.
𝔼[e^-λ S_t] = e^-t ( 2 λ)^α / 2, λ , t ≥ 0,
see, e.g., Theorem 1.3.23 and pages 55-56 in <cit.>.
Using the fact that B_S_t/√(S_t) follows the
normal distribution 𝒩(0,1) given S_t,
for d≥ 1 and p ∈ (0,d) we have
𝔼[|X_t|^-p] = 𝔼[|B_S_t|^-p]
= 𝔼[ S_t^-p/2𝔼[ S_t^p/2/|B_S_t|^p | S_t] ]
= 𝔼[ S_t^-p/2∫_𝕊^d-1μ_d (d σ) ∫_0^∞ r^d-1-pe^-r^2/2/(2π)^d/2 dr ]
= 2 · 2^(d-p-2)/2/2^d/2Γ(d/2)Γ((d-p)/2)
𝔼[ S_t^-p/2]
= C_α,d,p/t^p/α,
t>0, α∈ (1,2),
where μ_d denotes the surface measure on the
d-dimensional sphere 𝕊^d-1,
C_α,d,p := 2^1-pΓ(p/α) Γ((d-p)/2) /αΓ(p/2)Γ(d/2),
and we used the relation
𝔼[ S_t^-p]
= 2^1-p t^-2p/αΓ(2p/α) / ( αΓ(p) ), p,t>0,
see, e.g., Relation (1.10) in <cit.>.
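As a quick sanity check (not taken from the references), the closed form of 𝔼[ S_t^-p] can be compared with a direct numerical evaluation of the Laplace-transform identity 𝔼[ S_t^-p] = Γ(p)^-1∫_0^∞λ^p-1𝔼[e^-λ S_t] dλ; the parameter values in the sketch below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, t, p = 1.5, 0.7, 0.8    # arbitrary test values, with p in (0, d)

# E[S_t^{-p}] = (1/Gamma(p)) * int_0^inf lam^{p-1} E[e^{-lam S_t}] dlam,
# with E[e^{-lam S_t}] = exp(-t * (2*lam)^{alpha/2})
numeric, _ = quad(lambda lam: lam**(p - 1) * np.exp(-t * (2 * lam)**(alpha / 2)), 0, np.inf)
numeric /= gamma(p)

# closed form stated above
closed = 2**(1 - p) * gamma(2 * p / alpha) / (alpha * gamma(p)) * t**(-2 * p / alpha)
print(numeric, closed)         # the two values should agree to quadrature accuracy
```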
§ MARKED BRANCHING PROCESS
Let ρ: ℝ^+ → (0,∞ ) be a
probability density function on _+,
and consider a probability mass function
(q_l_0,… ,l_m)_(l_0,… ,l_m)∈ L_m on L_m
with q_l_0,… ,l_m > 0,
(l_0,… ,l_m)∈ L_m.
In addition, we consider
* an i.i.d. family (τ^i,j)_i,j≥ 1 of random variables
with distribution ρ (t)dt on _+,
* an i.i.d. family (I^i,j)_i,j≥ 1 of discrete
random variables, with
ℙ( I^i,j=(l_0,… ,l_m) ) = q_l_0,… ,l_m >0,
(l_0,… , l_m)∈ L_m,
* an independent family (X^i,j)_i,j≥ 1
of symmetric α-stable processes.
The sequences (τ^i,j)_i,j≥ 1, (I^i,j)_i,j≥ 1 and
(X^i,j)_i,j≥ 1 are assumed to be mutually independent.
The probabilistic representation for the solution of (<ref>)
will use a branching process starting
from a particle x∈ B(0,R) with label 1=(1) and mark i ∈{0,1, …, m },
which evolves according to
the process X_s,x^1 = x + X_s^1,1,
s ∈ [0,T_1 ], with
T_1 = τ^1,1∧τ_B (x) : = min(
τ^1,1 ,τ_B (x) ), where
the hitting time τ_B(x) is written as
τ_B (x) := inf{ t ≥ 0, x + X_t^1,1∉B(0,R) },
and we omitted the information on the mark (1,1)
in the notation τ_B (x).
Note that by the bound (1.4) in <cit.> we have
𝔼[ τ_B (x) ] < ∞, and therefore
τ_B (x) is almost surely finite for all x∈ B(0,R).
If τ^1,1<τ_B(x), the process branches at time τ^1,1
into new independent copies of
(X_t)_t ∈_+, each of them started at
X_x,τ^1,1^1.
Based on the values of I^1,1 =(l_0,… , l_m)∈ L_m,
a family of |l|:=l_0+⋯ +l_m of new branches
carrying respectively the marks i=0,… ,d
are created with the probability q_l_0,… ,l_m,
where
* the first l_0 branches
carry the mark 0 and
are indexed by (1,1),(1,2),… ,(1,l_0),
* the next l_1 branches
carry the mark 1 and
are indexed by (1,l_0+1),… ,(1,l_0 + l_1), and so on.
Each new particle then follows independently
the same mechanism as the first one, and
every branch stops when it leaves the domain B(0,R).
Particles at generation n≥ 1 are assigned a label of the form
k = (1,k_2,… ,k_n) ∈ℕ^n,
and their parent is labeled k- := (1,k_2,… ,k_n-1).
The particle labeled k is born at time T_k-
and its lifetime τ^n,π_n(k) is the element of index
π_n(k) in the i.i.d. sequence
(τ^n,j)_j≥ 1,
defining an injection
π_n:ℕ^n →ℕ,
n≥ 1.
The random evolution of particle k
is given by
X_t,x^k := X^k-_T_k-,x+X_t-T_k-^n,π_n(k),
t∈ [T_k-,T_k],
where T_k := T_k- + τ^n,π_n(k)∧τ_B ( X^k-_T_k-,x) and
τ_B ( X^k-_T_k-,x):= inf{ t ≥ 0, X^k-_T_k-,x+X_t^n,π_n(k)∉B(0,R) }.
If τ^n,π_n(k) < τ_B ( X^k-_T_k-,x),
we draw a sample
I_k := I^n,π_n(k) = (l_0,… ,l_m)
of I^n,π_n(k),
and the particle k branches into
|I^n,π_n(k)|=l_0+⋯ +l_m offsprings at generation (n+1),
which are indexed by (1,… ,k_n,i), i=1,… ,|I^n,π_n(k)|.
The particles whose index ends with an integer between 1 and l_0
will carry the mark 0, and those with index ending with an integer between
l_0+⋯ +l_i-1 +1 and l_0+⋯ + l_i will carry a mark
i∈{1,… ,d}.
Finally, the mark of particle k will be denoted by
θ_k∈{0,1,… ,d }.
Note that the indexes are only used to distinguish the particles in the
branching process, and they are distinct from the marks.
The set of particles dying inside B(0,R) is denoted by 𝒦^∘,
whereas those dying outside form a set denoted by 𝒦^∂. The particles of n-th generation, n≥ 1, will be denoted by 𝒦_n^∘ (resp. 𝒦_n^∂) if they die inside the domain (resp. outside).
When started from
a position x∈^d and a mark
i ∈{0,1,… ,d} on its first branch,
the above construction yields
a marked branching process called a random marked tree, and
denoted by 𝒯_x,i.
The tree 𝒯_x,0
will be used for the stochastic representation of the solution
u(x) of the PDE (<ref>), while the trees 𝒯_x,i
will be used for the stochastic representation of b_i(x) ·∇ u(x).
The next table summarizes the notation introduced so far.
Object: Notation
Initial position: x
Tree rooted at x with initial mark i: 𝒯_x,i
Particle (or label) of generation n≥ 1: k=(1,k_2,… ,k_n)
First branching time: T_1
Lifespan of a particle: T_k - T_k-
Birth time of the particle k: T_k-
Death time of the particle k∈𝒦^∘: T_k = T_k- + τ^n,π_n(k)
Death time of the particle k∈𝒦^∂: T_k = T_k- + τ_B ( X^k-_T_k-,x)
Position at birth of the particle k: X^k_T_k-,x
Position at death of the particle k: X^k_T_k,x
Mark of the particle k: θ_k
Exit time starting from x ∈ B(0,R): τ_B (x) := inf{ t ≥ 0, x + X_t ∉B(0,R) }
To represent the structure
of the tree we use the following conventions, in which
different colors mean different ways of branching:
[Figure: a generic tree node, displaying the branching time (upper part) and the particle position (lower part); each edge is annotated with the child particle's label (above) and its mark (below), and different colors indicate different ways of branching.]
For an example in dimension d=1, a sample tree for the PDE
Δ_α u (x) + c_0 (x) + c_0,1(x) u (x) ∂ u/∂ x (x) = 0
consists
in two types of branching:
we can either not branch (which is represented in blue),
or branch into two branches,
one bearing the mark 0 and the other one bearing the mark 1,
represented in purple.
The black color is used for leaves, namely the particles that leave the domain B(0,R).
[Figure: a sample tree for this equation, rooted at x at time 0. The initial particle 1 branches at time T_1 into particle (1,1) with mark 0, which dies inside the domain, and particle (1,2) with mark 1, which branches again into particle (1,2,1) with mark 0, leaving the domain, and particle (1,2,2) with mark 1, dying inside the domain.]
In the above example we have
𝒦^∘= {1, (1,1) , (1,2) ,(1,2,2)}
and 𝒦^∂ = {(1,2,1)}.
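To make this construction more concrete, a minimal algorithmic sketch of the branching recursion is given below; it is an illustration only. The offspring law, the lifetime density and the exit-time routine are placeholders, and the spatial α-stable motion is not simulated, so the sketch only reproduces the genealogy of the tree (labels, death times, and the classification into 𝒦^∘ and 𝒦^∂).

```python
import random

OFFSPRING = {(1, 1): 0.4, (0, 0): 0.6}   # placeholder offspring law P(I = (l_0, l_1)), m = 1
RHO_RATE = 1.5                           # placeholder: exponential lifetime density rho

def exit_time_stub(x):
    """Placeholder for tau_B(x), the first exit time of x + X_t from B(0, R)."""
    return random.expovariate(1.0)

def grow(label, birth, x):
    """Return (label, death time, 'interior' or 'boundary') for every particle of the subtree."""
    tau = random.expovariate(RHO_RATE)               # lifetime tau^{n, pi_n(k)}
    t_exit = exit_time_stub(x)
    if t_exit <= tau:                                # the particle leaves B(0, R)
        return [(label, birth + t_exit, "boundary")]
    offspring = random.choices(list(OFFSPRING), weights=list(OFFSPRING.values()))[0]
    particles = [(label, birth + tau, "interior")]   # the particle dies inside and branches
    for i in range(sum(offspring)):                  # |I| = l_0 + ... + l_m children
        particles += grow(label + (i + 1,), birth + tau, x)   # position update omitted here
    return particles

tree = grow((1,), 0.0, x=0.0)
print(len(tree), tree[:3])
```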
§ PROBABILISTIC REPRESENTATION OF PDE SOLUTIONS
In this section we work under the following assumption,
where τ_B (x) denotes the hitting time defined in
(<ref>).
Assumption (CC):
Let d≥ 2 and p∈ (1,d).
The common probability density function ρ
of the random times τ^i,j's satisfies the conditions
*
sup_t ∈ (0,1]1/ρ(t) t^1/α < ∞,
*
𝔼[ 1 / F (τ_B (0))]
= 𝔼[ ( ∫_τ_B (0)^∞ρ (t) dt )^-1]
< ∞, where F(t) := ∫_t^∞ρ (s) ds denotes the tail distribution function of ρ.
Letting h denote a bounded measurable function on B(0,R),
we consider the functions
χ^h_1(x) := 𝔼[ ∫_0^τ_B (x) h(X_t,x)dt ]
= ∫_B(0,R) G_R(x,y) h(y) dy
and
χ_2(x) := 𝔼[ ϕ(X_τ_B (x),x) ]
= ∫_^d ∖ B(0,R)P_R(x,y) ϕ (y) dy,
x∈ B(0,R).
For this,
using (<ref>)-(<ref>) and the fact that d≥ 2,
we differentiate (<ref>)
and (<ref>) under the integral sign, to obtain
∇χ^h_1(x) = ∫_B(0,R)∇ G_R(x,y) h(y) dy
= 𝔼[
∫_0^τ_B (x)∇ G_R(x,X_t,x)/G_R(x,X_t,x) h(X_t,x) dt
]
= 𝔼[
∫_0^τ_B (x)𝒲_B(0,R)(t,x,X)
h(X_t,x) dt
],
and
∇χ_2(x) = ∫_^d ∖ B(0,R)∇ P_R(x,y) ϕ (y) dy
= 𝔼[
∇ P_R(x,X_τ_B (x),x)/P_R(x,X_τ_B (x),x)ϕ(X_τ_B (x),x)]
= 𝔼[
𝒲_∂ B(0,R) (x,X)
ϕ(X_τ_B (x),x)],
where 𝒲_∂ B(0,R) (x,X)
and 𝒲_B(0,R)(t,x,X) are the random weights
respectively defined as
𝒲_∂ B(0,R) (x,X) := ∇ P_R(x,X_τ_B (x),x)/P_R(x,X_τ_B (x),x) 𝒲_B(0,R)(t,x,X) := ∇ G_R(x,X_t,x)/G_R(x,X_t,x)
which shows that χ_1 , χ_2 ∈ C^1(B(0,R))∩ C^0 ( B(0,R)),
provided that α∈ (1,2), since h is bounded on B(0,R).
Next, we let
𝒲(t,x,X)
:=
𝒲_B(0,R) (t,x,X) 1_{X_t,x∈ B(0,R)}
+
𝒲_∂ B(0,R) (x,X) 1_{X_t,x∉B(0,R)}
and
𝒲_k := 1_{θ_k = 0} + 1_{θ_k≠ 0 } b_θ_k(X_T_k-) ·𝒲(T_k-T_k-,X_T_k-^k,X^k),
and consider the functional ℋ_ϕ
of the random tree 𝒯_x,i, defined as
ℋ_ϕ (𝒯_x,i) :=
∏_k∈𝒦^∘c_I_k(X^k_T_k,x)𝒲_k/q_I_kρ(T_k - T_k-)∏_k∈𝒦^∂ϕ(X^k_T_k,x) 𝒲_k/F(T_k - T_k-),
x∈ B(0,R),
in which the branching evolution starts from the
mark θ_1 =i, i=0,… ,m.
The goal of this section is to prove the following result,
which implies Theorem <ref> as
Assumption (CC) is satisfied for
a suitable choice of probability density function ρ (t),
see Proposition <ref> below.
Let d≥ 2 and α∈ (1,2).
Assume that the boundary condition ϕ belongs to H^α (^d)
and is bounded on ^d ∖ B(0,R).
Under Assumptions (AA)-(BB)-(CC),
if R>0
and max_l ∈ L_m‖ c_l‖_∞ are
sufficiently small,
the semilinear elliptic PDE (<ref>) admits a viscosity solution in
C^1(B(0,R)) ∩ C^0 ( B(0,R))
represented as
u(x) := 𝔼 [ ℋ_ϕ (𝒯_x,0) ],
x∈ B(0,R).
The proof of Theorem <ref> is postponed to the end of this section.
Using the functional ℋ_ϕ, in the next theorem
we obtain a probabilistic representations for
the solutions of semilinear elliptic PDEs of the form
(<ref>).
Let d ≥ 2, α∈ (1,2), and
assume that the family (ℋ_ϕ (𝒯_x,i))_x∈ B(0,R)
is bounded in L^1(Ω ) uniformly in x∈ B(0,R),
i=0,… ,m.
Then, the function
u(x) := 𝔼 [ ℋ_ϕ (𝒯_x,0) ],
x∈B(0,R),
is a viscosity solution in C^1(B(0,R)) ∩ C^0(B(0,R))
of the semilinear elliptic PDE (<ref>).
In addition,
for
i =1,… , m
the gradient
b_i(x)·∇ u(x) can be represented as the expected value
b_i(x)·∇ u(x) = 𝔼[ ℋ_ϕ (𝒯_x,i)],
x ∈ B(0,R), i=0,… ,m.
Using the first branching time T_1, we get:
u(x) = 𝔼[ℋ_ϕ (𝒯_x,0)]
= 𝔼[ ϕ(X_τ_B (x),x^1)/F( T_1) 1_{ T_1 = τ_B (x) } + c_I_1(X_T_1,x^1) /q_I_1ρ(T_k)∏_i=0^I_1-1ℋ_ϕ(𝒯_X_T_1,x^1) 1_{ T_1 < τ_B (x) }]
= 𝔼[ ϕ( X_τ_B (x),x^1) + ∫_0^τ_B (x)∑_l ∈ℒ_m c_l u^l_0(X_t,x^1) ∏_i=1^m v_i^l_i(X_t,x^1) dt],
where we let
v_i(x) := 𝔼 [ ℋ_ϕ (𝒯_x,i) ], x∈ B(0,R),
i=1,… , m.
By (<ref>)-(<ref>) the function (<ref>)
is differentiable
as u(x) and the v_i(x)'s are bounded on B(0,R),
and by (<ref>) and (<ref>) we have
b_i(x) ·∇ u(x) = b_i(x) ·𝔼[ ℋ_ϕ ( 𝒯_x,0 ) 𝒲(T_1,x,X) ]
= 𝔼[ ℋ_ϕ ( 𝒯_x,0 )
b_i(x) ·𝒲(T_1,x,X) ]
= 𝔼 [ ℋ_ϕ (𝒯_x,i) ]
=
v_i(x), x∈ B(0,R),
i=0,1,… , m.
Therefore, by (<ref>) and (<ref>) we obtain
u(x) = 𝔼[ ϕ(X_τ_B (x),x^1)
+ ∫_0^τ_B (x) f( X_t,x^1 ,u(X_t,x^1),∇ u(X_t,x^1)) dt ],
x∈ B(0,R).
It then follows from a classical argument, see, e.g.,
Section 3 of <cit.>, that u is a viscosity solution of (<ref>).
Indeed, for any δ > 0, by the Markov property we also have
u(x) = 𝔼[ u (X_δ∧τ_B (x),x^1)
+ ∫_0^δ∧τ_B (x) f( X_t,x^1 ,u(X_t,x^1),∇ u(X_t,x^1)) dt ],
x∈ B(0,R).
Next, let ξ∈𝒞^2(B(0,R)) be such that x is a maximum point of u-ξ and u(x) = ξ(x). By the Itô-Dynkin formula,
we get
𝔼[ξ(X^1_δ∧τ_B (x),x)]
= ξ(x)
+ 𝔼[ ∫_0^δ∧τ_B (x)Δ_αξ(X^1_t,x) dt ].
Thus, since u(x) = ξ(x) and u≤ξ, we have
𝔼[ ∫_0^δ∧τ_B (x)(
Δ_αξ(X^1_t,x)
+ f(X^1_t,x , u(X^1_t,x),∇ u(X_t,x^1))
) dt ] ≥ 0.
Since X_t,x converges in distribution to the constant x∈^d
as t tends to zero,
it admits an almost surely convergent subsequence,
hence by continuity and boundedness of f( · ,u( · ))
together with the mean-value and dominated convergence theorems,
we have
Δ_αξ(x) + f(x,ξ(x),∇ξ (x) ) ≥ 0,
hence u is a viscosity subsolution
(and similarly a viscosity supersolution)
of (<ref>).
Lemma <ref> and Proposition <ref>
below will be used to conclude the proof of Theorem <ref>.
In the proof of the next lemma we use the filtration (ℱ_n)_n≥ 1
defined by
ℱ_n := σ(T_k,I_k,X^k,k∈⋃_i=1^n ℕ^i), n ≥ 1.
Assume that v:B(0,R)→_+ is a bounded measurable function satisfying
the inequality
v(x) ≥
K_2
+
𝔼[
∫_0^τ_B (x)( c 1_[0,1](t) ρ (t)
+ K_1
1_(1,∞ )(t)
)
∑_l = (l_0, … , l_m ) ∈ℒ_m v^|l|(X_t,x) dt ],
x∈ B(0,R), for some
K_1, K_2 , c > 0,
where |l| = l_0+⋯ +l_m.
Then, we have
v(x) ≥𝔼[
∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k- > 1
K_1 /q_I_kρ(T_k-T_k-)∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k-≤ 1
c /q_I_k∏_k∈⋃_i=1^n 𝒦_i^∂K_2/F(T_k-T_k-)∏_k∈𝒦_n+1 v(X^T_k-,x) ],
x∈ B(0,R),
n≥ 1, where 𝒦_i^∘ (resp. 𝒦_i^∂),
i=1,… , n+1, denotes the set of i-th generation particles
which die inside (resp. outside) the domain B(0,R).
Since T_1 is independent of
(X_s,x)_s∈_+ and has the probability density ρ, letting
f̃ (y):= K_1 ∑_l∈ℒ_m y^|l|,
we have
v(x) ≥𝔼[
𝔼[
K_2 + ∫_0^τ_B (x)( c 1_[0,1](t) ρ (t)
+
K_1 1_(1,∞ )(t)
)
f̃(v(X_t,x))
dt
|
(X_s,x)_s∈_+]]
=
𝔼[
𝔼[
K_2/F(τ_B (x)) 1_{ T_1= τ_B (x) } |
(X_s,x)_s∈_+]
]
+ 𝔼[
𝔼[
∫_0^τ_B (x)( c 1_[0,1](t)
+ K_1 1_(1,∞ )(t)
1/ρ (t) )
f̃(v(X_t,x))
ρ (t) dt
|
(X_s,x)_s∈_+]]
=
𝔼[ K_2/F(τ_B (x)) 1_{ T_1= τ_B (x)}
+
c f̃(v(X_s,x))
1_{ T_1≤min ( 1 , τ_B (x) ) }
+
K_1/ρ(T_1) f̃(v(X_s,x))
1_{ 1 < T_1 < τ_B (x) }]
= 𝔼[ K_2/F( T_1) 1_{ T_1= τ_B (x) }
+ c/q_I_1
v^I_1(X_T_1,x)
1_{ T_1≤min ( 1 , τ_B (x) ) }
+ 1/ρ(T_1) K_1 /q_I_1
v^I_1(X_T_1,x)
1_{ 1 < T_1 < τ_B (x) }],
showing that
v(x) ≥𝔼[
∏_k∈𝒦_0^∘
T_k-T_k- > 1
K_1 /q_I_kρ(T_k-T_k-)∏_k∈𝒦_0^∘
T_k-T_k-≤ 1
c/q_I_k∏_k∈𝒦_0^∂K_2/F(T_k-T_k-)∏_k∈𝒦_1 v(X_T_k-,x) ],
x∈ B(0,R).
Repeating the above argument for the particles in k∈𝒦_2, we find
v(X_T_k-,x)
≥𝔼[ K_2 /F( T_k) 1_{ X_T_k,x∉ B(0,R)}
+ 1/q_I_k 1_{ X_T_k,x∈ B(0,R) }(
c 1_{ T_1≤min ( 1 , τ_B (x) ) }
+ K_1 /ρ(T_k) 1_{ 1 < T_1 < τ_B (x) })
| ℱ_1 ].
Plugging this expression in (<ref>) above and using the tower
property of the conditional expectation, we obtain
v(x) ≥𝔼[
∏_k∈⋃_i=1^1 𝒦_i^∘
T_k-T_k- > 1
K_1 /q_I_kρ(T_k-T_k-)∏_k∈⋃_i=1^1 𝒦_i^∘
T_k-T_k-≤ 1
c /q_I_k∏_k∈⋃_i=1^1 𝒦_i^∂K_2/F(T_k-T_k-)∏_k∈𝒦_2 v ( X_T_k-,x ) ],
and repeating this process inductively leads to
v(x) ≥𝔼[
∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k- > 1
K_1 /q_I_kρ(T_k-T_k-)∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k-≤ 1
c /q_I_k∏_k∈⋃_i=1^n 𝒦_i^∂K_2/F(T_k-T_k-)∏_k∈𝒦_n+1 v(X_T_k-,x) ].
In what follows, we let x ∨ y = max(x,y), x,y∈ℝ.
Let R>0, d≥ 2, and α∈ (1,2).
Under Assumptions (AA)-(BB)-(CC), let
K_1 :=
3 max ( 1 ∨ (d b_1,∞ ) , db_0,∞ C_α , d , 1 )
K_2 := ‖ϕ‖_∞max ( 1, (d+α) b_1,∞ )
√(𝔼[ 1 / F (τ_B (0)) ]).
Assume also that there exists a bounded strictly positive weak solution v ∈ H^α/2(^d) ∩ L^∞(^d) to the following partial differential inequalities:
Δ_α v(x) + K_1 ∑_l∈ℒ_m v^|l|(x) ≤ 0,
x∈ B(0,R),
v(x) ≥ K_2, x∈^d ∖ B(0,R).
Then, for sufficiently small max_l ∈ L_m‖ c_l ‖_∞
we have the bound
[|ℋ_ϕ (𝒯_x,i)|] ≤ v(x) ≤‖ v ‖_∞ < ∞, x∈ B(0,R),
and thus (ℋ_ϕ (𝒯_x,i))_x∈ B(0,R)
is bounded in L^1 (Ω ), uniformly in x∈ B(0,R),
i=0,… ,m.
For x ∈ B(0,R), let
w_i (x)
:= 𝔼[ |ℋ_ϕ (𝒯_x,i)| ]
= 𝔼_i [ ∏_k∈𝒦^∘|c_I_k(X^k_T_k,x)||𝒲_k|/q_I_kρ (T_k-T_k-)∏_k∈𝒦^∂| ϕ(X^k_T_k,x) 𝒲_k|/F(T_k-T_k-)],
i = 0,… ,m.
For k∈𝒦^∘ with mark
θ_k=0 we have 𝒲_k=1, so that
|c_I_k(X^k_T_k,x)||𝒲_k|
≤‖ c_I_k‖_∞ .
On the other hand, when k∈𝒦^∘ has a mark
θ_k≠ 0, using (<ref>) and the Cauchy-Schwarz inequality we have
|𝒲_k|
≤d |b_θ_k(X^k_T_k-,x)|/min(
R - | X_T_k-| ,
|X^k_T_k,x-X^k_T_k-,x|
)
≤ d max(
|b_θ_k(X^k_T_k-,x)|/
R - | X_T_k-|
,
|b_θ_k(X^k_T_k-,x)|/|X^k_T_k,x-X^k_T_k-,x|)
≤ db_1,∞ + db_0,∞/|X^k_T_k,x-X^k_T_k-,x|.
Regarding the right product in (<ref>),
when k∈𝒦^∂,
the definition of 𝒲_∂ B(0,R) (x,X)
in (<ref>), together with the
bound (<ref>) and the Cauchy-Schwarz inequality, imply
|ϕ(X^k_T_k,x) 𝒲_k|≤ K_3,
where K_3:=‖ϕ‖_∞max ( 1, (d+α) b_1,∞ ).
By conditional independence
given 𝒢 := σ(τ^i,j,I^i,j : i,j ≥ 1)
of the terms in the product over
k∈𝒦^∘∪𝒦^∂, which now only
involve random terms of the form X_T_k-X_T_k-
given T_k-T_k-, by (<ref>) we have
w_i(x)
≤𝔼[ ∏_k∈𝒦^∘‖ c_I_k‖_∞/q_I_k[ 1/ρ (T_k-T_k-)( 1 + d b_1,∞ + d b_0,∞/ |X_T_k-X_T_k-|) | 𝒢] ∏_k∈𝒦^∂[ K_3/F(T_k-T_k-) | 𝒢] ]
= 𝔼[ ∏_k∈𝒦^∘(
‖ c_I_k‖_∞/q_I_kρ (T_k-T_k-)( 1 + d b_1,∞ +
d b_0,∞ C_α,d,1/(T_k-T_k-)^1/α) )
∏_k∈𝒦^∂K_3/F(T_k-T_k-)]
≤𝔼[ ∏_k∈𝒦^∘(
3 ‖ c_I_k‖_∞/q_I_kρ (T_k-T_k-)max( 1 ∨ (d b_1,∞ ) ,
d b_0,∞ C_α,d,1/(T_k-T_k-)^1/α) )
∏_k∈𝒦^∂K_3/F(T_k-T_k-)].
Next, we let
K_4 :=
3 max(
sup_t∈ [0, 1]1∨ (db_1,∞)/ρ(t)
,
b_0,∞sup_t∈ [0, 1]d C_α,d,1/ρ(t) t^1/α)
,
and split the left terms between small and
large values of T_k-T_k-, as follows:
w_i (x)
≤𝔼[ ∏_k∈𝒦^∘ 3 ‖ c_I_k‖_∞/q_I_kρ (T_k-T_k-)(
max ( 1 ∨ (d b_1,∞ ) , db_0,∞ C_α , d , 1 )
1_{T_k-T_k- > 1 }.
.
+ max(
1∨ (db_1,∞)
,
d b_0,∞ C_α,d,1/(T_k-T_k-)^1/α)
1_{T_k-T_k-≤ 1 })
∏_k∈𝒦^∂K_3/F(T_k-T_k-)]
≤𝔼[ ∏_k∈𝒦^∘‖ c_I_k‖_∞/q_I_k(
K_1
/ρ (T_k-T_k-) 1_{T_k-T_k- > 1 }
+ K_4
1_{T_k-T_k-≤ 1 })
∏_k∈𝒦^∂K_3/F(T_k-T_k-)]
=
𝔼[
∏_k∈𝒦^∘ T_k-T_k- > 1
K_1 ‖ c_I_k‖_∞/q_I_kρ (T_k-T_k-)∏_k∈𝒦^∘
T_k-T_k-≤ 1 K_4 ‖ c_I_k‖_∞/q_I_k∏_k∈𝒦^∂K_3/F(T_k-T_k-)].
Using the inequality
√(𝔼[ 1 / F(T_k-T_k-) | 𝒢])≤𝔼[ 1 / F(T_k-T_k-) | 𝒢]
which follows from the fact that F(t) ≤ 1 for all t≥ 0,
we have
𝔼[
1/F (T_k-T_k-) | 𝒢]
≤√(𝔼[
1/F (T_k-T_k-) | 𝒢])√(𝔼[
1/F(T_k-T_k-) | 𝒢])
≤√(𝔼[
1/F (T_k) | 𝒢])𝔼[
1/F(T_k-T_k-) | 𝒢]
≤√(𝔼[
1/F (τ_B (0)) ]
)𝔼[
1/F(T_k-T_k-) | 𝒢],
which yields
w_i (x)
≤𝔼[
∏_k∈𝒦^∘ T_k-T_k- > 1
K_1 max_l ∈ L_m‖ c_l ‖_∞/ρ (T_k-T_k-)∏_k∈𝒦^∘
T_k-T_k-≤ 1 max_l ∈ L_m‖ c_l ‖_∞/q_I_k
K_4
∏_k∈𝒦^∂K_2 /F(T_k-T_k-) ].
Next, we mollify v∈ H^α/2(^d)
into
v_ε(x) :=
ε^-d∫_ℝ^dψ( (x-y)/ε)v(y)dy, x ∈ℝ^d,
ε >0,
where ψ is a mollifier on ℝ^d
such that ∫_ℝ^dψ (y) dy =1.
By (<ref>) and Jensen's inequality, we have
Δ_α v_ε (x)
+ K_1 ∑_l∈ℒ_m v_ε^|l|(x)
= 1/ε∫_-∞^∞Δ_αψ( x-y/ε)v(y)dy
+ K_1 ∑_l∈ℒ_m(
1/ε∫_-∞^∞ψ( x-y/ε)v(y)dy
)^|l|
≤ 1/ε∫_-∞^∞Δ_αψ( x-y/ε)Δ_α v(y)dy
+ K_1 ∑_l∈ℒ_m1/ε∫_-∞^∞ψ( x-y/ε)v^|l|(y)dy
≤ 0,
x∈ B(0,R).
Applying the Itô-Dynkin formula
to v_ε (X_s,x) with
v_ε∈ H^α(^d),
by (<ref>) we obtain
v_ε(x)
= 𝔼[ K_2 -
∫_0^τ_B (x)Δ_α v_ε(X_t,x) dt ]
≥ 𝔼[ K_2 + ∫_0^τ_B (x) K_1 ∑_l∈ℒ_m v_ε^|l|(X_t,x) dt ],
x∈ B(0,R).
Thus, passing to the limit as ε tends to zero,
by dominated convergence and the facts that
𝔼[ τ_B (x) ] < ∞ and v(x) is upper and lower bounded in
(0,∞), for some sufficiently small c>0 we have
v(x) ≥ K_2 + 𝔼[ ∫_0^τ_B (x) K_1 ∑_l∈ℒ_m v^|l|(X_t,x) dt ]
≥ K_2 + 𝔼[ ∫_0^τ_B (x)( c 1_[0,1](t) ρ (t)
+ K_1
1_(1,∞ )(t)
)
∑_l∈ℒ_m v^|l|(X_t,x) dt ],
x∈ B(0,R).
Therefore, by Lemma <ref> we have
v(x) ≥𝔼[
∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k- > 1
K_1 /q_I_kρ(T_k-T_k-)∏_k∈⋃_i=1^n 𝒦_i^∘
T_k-T_k-≤ 1 c /q_I_k∏_k∈⋃_i=1^n 𝒦_i^∂K_2/F(T_k-T_k-)∏_k∈𝒦_n+1 v(X_T_k-) ],
hence, letting n tend to infinity and using Fatou's lemma, we obtain
v(x) ≥ 𝔼[
∏_k∈𝒦^∘
T_k-T_k- > 1 K_1 /q_I_kρ(T_k-T_k-)∏_k∈𝒦^∘
T_k-T_k-≤ 1 c/q_I_k∏_k∈𝒦^∂K_2/F(T_k-T_k-)]
≥ 𝔼[
∏_k∈𝒦^∘
T_k-T_k- > 1 K_1 max_l ∈ L_m‖ c_l ‖_∞/q_I_kρ(T_k-T_k-)∏_k∈𝒦^∘
T_k-T_k-≤ 1 K_4 max_l ∈ L_m‖ c_l ‖_∞/q_I_k∏_k∈𝒦^∂K_2/F(T_k-T_k-)]
≥ w_i(x), x∈ B(0,R),
provided that max_l ∈ L_m‖ c_l ‖_∞
is small enough, which gives (<ref>).
Proof of Theorem <ref>.
By Theorem 1.2 in <cit.>,
the partial differential inequality (<ref>) admits a
nonnegative (continuous) viscosity solution v(x) on ^d
when R>0 is sufficiently small.
In addition, by Proposition 3.5 in <cit.>,
v ∈ H^α/2(^d) ∩ L^∞(^d) and is a
weak solution of (<ref>).
We conclude by applying Propositions <ref> and <ref>.
The next proposition provides an example of probability density function
ρ satisfying Assumption (CC).
In this case, if (<ref>)
admits a nonnegative weak solution v ∈ H^α/2(^d) ∩ L^∞(^d), then
(ℋ_ϕ (𝒯_x,i))_x∈ B(0,R)
is bounded in L^1 (Ω ), uniformly in x∈ B(0,R)
by Proposition <ref>.
In particular, (ℋ_ϕ (𝒯_x,i))_x∈ B(0,R)
is uniformly bounded in L^2(Ω ) if d≥ 3.
Let α∈ (1,2),
δ∈ (0,1 - 1/α ],
a >1,
and d≥ 2.
The probability density function ρ (t) defined as
ρ(t) := κ_1 t^δ-1e^-t 1_[0, 1 ](t)
+ κ_2/t^a 1_( 1 ,∞ ) (t), t > 0,
with κ_1,κ_2>0,
satisfies Assumption (CC).
We check that (<ref>) is satisfied, since for t∈ (0,1 ] we have
1/ρ (t)t^1/α = t^-1/α + 1 - δe^t/κ_1≤e /κ_1.
Regarding (<ref>)
we note that, since F is a nonincreasing function,
𝔼[ 1 / F (τ_B (0)) 1_{τ_B (0) ≤ 1 }] ≤
1 / F (1).
On the other hand, since 1 / F (t) ≈ t^a-1, t>1,
and a-1>0,
by Theorem 3.2 in <cit.> applied on the
bounded domain B(0,R), we have
𝔼[ 1_{τ_B (0) > 1 }1/F (τ_B (0))] ≤1/κ_2𝔼[
(τ_B (0))^a-1]
< ∞.
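For completeness, one possible way of drawing samples from this density is sketched below; splitting the mass evenly between the two pieces is an arbitrary admissible choice of κ_1 and κ_2 made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rho(delta, a, size):
    """Samples from rho, with half the mass on each piece (illustrative choice of kappa_1, kappa_2)."""
    out = np.empty(size)
    for i in range(size):
        if rng.random() < 0.5:                     # piece proportional to t^{delta-1} e^{-t} on (0, 1]
            while True:
                t = rng.random() ** (1.0 / delta)  # density proportional to t^{delta-1} on (0, 1]
                if rng.random() < np.exp(-t):      # rejection step for the factor e^{-t}
                    out[i] = t
                    break
        else:                                      # piece proportional to t^{-a} on (1, infinity)
            out[i] = rng.random() ** (-1.0 / (a - 1))
    return out

print(sample_rho(delta=1 - 1 / 1.5, a=2.0, size=5))   # alpha = 1.5, delta = 1 - 1/alpha
```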
§ NUMERICAL EXAMPLES
In this section,
we consider numerical applications of the probabilistic representation
(<ref>).
For the generation of random samples of
the α/2-stable subordinator
S_t, we use the formula
S_t := 2t^2/αsin( α(U+π / 2) /2 )/cos^2/α (U)(
cos (U- α (U+π / 2) /2 )/E)^-1+2/α
based on the Chambers-Mallows-Stuck (CMS) method,
where U is uniform on (-π / 2,π /2),
and E is exponential with unit parameter,
see Relation (3.2) in <cit.>.
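A possible implementation of this sampling step is sketched below (illustrative values only); the last lines use the subordination X_t = B_{S_t} to reproduce the negative moment 𝔼[|X_t|^{-1}] = C_{α,d,1} t^{-1/α} computed in the preliminaries, as a sanity check.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)

def sample_subordinator(t, alpha, size):
    """Samples of S_t with E[exp(-lam*S_t)] = exp(-t*(2*lam)^(alpha/2)), via the CMS formula above."""
    a = alpha / 2
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    E = rng.exponential(1.0, size)
    return (2 * t**(2 / alpha)
            * np.sin(a * (U + np.pi / 2)) / np.cos(U)**(1 / a)
            * (np.cos(U - a * (U + np.pi / 2)) / E)**(-1 + 1 / a))

def sample_stable_increment(t, alpha, d, size):
    """X_t = B_{S_t}: d-dimensional symmetric alpha-stable samples by subordination."""
    S = sample_subordinator(t, alpha, size)
    return np.sqrt(S)[:, None] * rng.standard_normal((size, d))

alpha, d = 1.5, 10
X = sample_stable_increment(t=1.0, alpha=alpha, d=d, size=200_000)
mc = np.mean(1.0 / np.linalg.norm(X, axis=1))
exact = gamma(1 / alpha) * gamma((d - 1) / 2) / (alpha * gamma(1 / 2) * gamma(d / 2))  # C_{alpha,d,1}
print(mc, exact)
```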
For simplicity of implementation, the probability density ρ (t)
of τ^i,j, i,j≥ 1,
is taken to be exponential with parameters ranging from 1.5 to 1.7.
Given k≥ 0, we consider the function
Φ_k,α (x) := (1-|x|^2)^k+α / 2_+,
x∈^d,
which is Lipschitz if k>1-α/2, and solves the Poisson problem
Δ_αΦ_k,α = -Ψ_k,α
on ^d, with
Ψ_k,α (x)
:= {[ Γ( ( d+α ) / 2)
Γ(k+1+ α / 2 )/
2^-αΓ(k+1)Γ( d / 2 ) _2F_1(
d+α/2,-k;d/2;|x|^2
), |x|≤ 1; 2^αΓ( ( d+α ) /2 )
Γ(k+1+ α / 2 )/Γ(k+1+ ( d+α ) / 2 )
Γ(- α / 2 )
|x|^d+α_2F_1(
d+α/2,2+α/2;k+1+d+α/2;
1/|x|^2), | x|>1 ].
x∈^d, where _2F_1 ( a,b;c;y) is
Gauss's hypergeometric function,
see (5.2) in <cit.>,
Lemma 4.1 in <cit.>,
and Relation (36) in <cit.>.
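For readers implementing this benchmark, the pair (Φ_{k,α}, Ψ_{k,α}) can be evaluated with standard special-function routines; the sketch below transcribes the display above for |x| ≤ 1 only, and the test point as well as the values of k, α and d are arbitrary.

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def phi(x, k, alpha):
    """Exact solution Phi_{k,alpha}(x) = (1 - |x|^2)_+^{k + alpha/2}."""
    r2 = float(np.sum(np.asarray(x) ** 2))
    return max(1.0 - r2, 0.0) ** (k + alpha / 2)

def psi_inside(x, k, alpha, d):
    """Psi_{k,alpha}(x) for |x| <= 1, first branch of the display above."""
    r2 = float(np.sum(np.asarray(x) ** 2))
    coeff = (2**alpha * gamma((d + alpha) / 2) * gamma(k + 1 + alpha / 2)
             / (gamma(k + 1) * gamma(d / 2)))
    return coeff * hyp2f1((d + alpha) / 2, -k, d / 2, r2)

x = np.full(10, 0.2)   # a point of B(0, 1) in dimension d = 10
print(phi(x, k=1, alpha=1.8), psi_inside(x, k=1, alpha=1.8, d=10))
```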
§.§.§ Linear gradient term
We take m=1, L_1 = { (0,1) }, and
c_0,1(x) := Ψ_k,α(x) + (2k+α)|x|^2(1-|x|^2)^k+α/2,
b_1(x) := (1-|x|^2) x,
and consider the PDE
Δ_α u (x) + Ψ_k,α(x) + (2k+α)|x|^2(1-|x|^2)^k+α/2 + (1-|x|^2) x ·∇ u(x) = 0,
x ∈ B(0,1),
with u(x) =0 for x ∈^d ∖ B(0,R),
and explicit solution
u(x) = Φ_k,α (x)
= (1-| x|^2)^k+α/2_+,
x ∈^d.
The random tree associated to (<ref>) starts at point x∈ B(0,1)
and branches into 0 branch or 1 branch
as in the following random samples:
[Figure: two sample trees for this equation. In the first, the initial particle 1 branches at time T_1 into a single particle (1,1), which in turn branches into a single particle (1,1,1) that leaves the domain. In the second, the tree stops after particles 1 and (1,1), which die inside the domain.]
Figures <ref>-1a) and <ref>-1b)
respectively use 10^7 and 2× 10^7 Monte Carlo samples.
§.§.§ Nonlinear gradient term
In this example we take L_1 = { (0,2) },
c_0,2(x) := Ψ_k,α(x) + (2k+α)^2|x|^4(1-|x|^2)^2k+α,
b_1(x) := (1-|x|^2) x,
and consider the PDE with nonlinear gradient term
Δ_α u (x) + Ψ_k,α(x) + (2k+α)^2|x|^4(1-|x|^2)^2k+α + ((1-|x|^2) x·∇ u(x) )^2 = 0,
x ∈ B(0,1),
with u(x) =0 for x ∈^d ∖ B(0,R),
and explicit solution
u(x) = Φ_k,α (x)
= (1-| x|^2)^k+α/2_+,
x ∈^d.
The random tree associated to (<ref>) starts at a point x∈ B(0,1)
and branches into 0 branch, 1 branch,
or 2 branches as in the following random tree sample:
0.95!
[scale=0.9,grow=right, sloped]
[ellipse split,draw,cyan,text=black,thick]0 lower x
child
node[ellipse split,draw,violet,text=black,thick] T_1 lower X^1_T_1,x
child
node[ellipse split,draw,violet,text=black,thick,right=1.3cm,below=-3cm] T_(1,2) lower X^(1,2)_T_(1,2),x
child
node[ellipse split,draw,blue,text=black,thick,right=5.2cm,below=-2.9cm]T_(1,2,2) lower X^(1,2,2)_T_(1,2,2),x
edge from parent
node[above](1,2,2)
child
node[ellipse split,draw,thick, right=1.04cm]T_(1,2,1):=T_(1,2)+ τ_B (X^(1,2,1)_T_(1,2,1),x) lower X^(1,2,1)_T_(1,2,1),x
edge from parent
node[above](1,2,1)
edge from parent
node[above] (1,2)
child
node[ellipse split,draw,blue,text=black,thick, right=0.cm] T_(1,1) lower X^(1,1)_T_(1,1),x
edge from parent
node[above] (1,1)
edge from parent
node[above] 1
;
The simulations of Figure <ref>
use five million Monte Carlo samples.
[Agarwal and Claisse(2020)]claisse
A. Agarwal and J. Claisse.
Branching diffusion representation of semi-linear elliptic PDEs and
estimation using Monte Carlo method.
Stochastic Processes and their Applications, 1300
(8):0 5006–5036, 2020.
[Applebaum(2009)]applebk2
D. Applebaum.
Lévy processes and stochastic calculus, volume 116 of
Cambridge Studies in Advanced Mathematics.
Cambridge University Press, Cambridge, second edition, 2009.
[Barles et al.(2008)Barles, Chasseigne, and Imbert]barles2
G. Barles, E. Chasseigne, and C. Imbert.
On the Dirichlet problem for second-order elliptic
integro-differential equations.
Indiana Univ. Math. J., 570 (1):0 213–246,
2008.
[Bass and Cranston(1983)]basscranston
R.F. Bass and M. Cranston.
Exit times for symmetric stable processes in R^n.
Ann. Probab., 110 (3):0 578–588, 1983.
[Biler et al.(2015)Biler, Imbert, and Karch]biler2015nonlocal
P. Biler, C. Imbert, and G. Karch.
The nonlocal porous medium equation: Barenblatt profiles and other
weak solutions.
Archive for Rational Mechanics and Analysis, 2150
(2):0 497–529, 2015.
[Bogdan et al.(2002)Bogdan, Kulczycki, and Nowak]bogdangradient
K. Bogdan, T. Kulczycki, and A. Nowak.
Gradient estimates for harmonic and q-harmonic functions of
symmetric stable processes.
Illinois J. Math., 460 (2):0 541–556, 2002.
[Bogdan et al.(2015)Bogdan, Grzywny, and Ryznar]bogdanbarrier
K. Bogdan, T. Grzywny, and M. Ryznar.
Barriers, exit time and survival probability for unimodal Lévy
processes.
Probab. Theory Related Fields, 1620 (1-2):0
155–198, 2015.
[Bony et al.(1968)Bony, Courrège, and Priouret]bony
J.-M. Bony, P. Courrège, and P. Priouret.
Semi-groupes de Feller sur une variété à bord compacte et
problèmes aux limites intégro-différentiels du second ordre donnant
lieu au principe du maximum.
Ann. Inst. Fourier (Grenoble), 180 (fasc. 2):0
369–521 (1969), 1968.
[Bucur(2016)]bucur
C. Bucur.
Some observations on the Green function for the ball in the
fractional Laplace framework.
Commun. Pure Appl. Anal., 150 (2):0 657–699,
2016.
[Felsinger et al.(2015)Felsinger, Kassmann, and Voigt]felsinger
M. Felsinger, M. Kassmann, and P. Voigt.
The Dirichlet problem for nonlocal operators.
Math. Z., 2790 (3-4):0 779–809, 2015.
[Getoor(1961)]getoor
R.K. Getoor.
First passage times for symmetric stable processes in space.
Trans. Amer. Math. Soc., 101:0 75–90, 1961.
[Henry-Labordère et al.(2019)Henry-Labordère, Oudjane, Tan, Touzi,
and Warin]labordere
P. Henry-Labordère, N. Oudjane, X. Tan, N. Touzi, and X. Warin.
Branching diffusion representation of semilinear PDEs and Monte
Carlo approximation.
Ann. Inst. H. Poincaré Probab. Statist., 550
(1):0 184–210, 2019.
[Huang and Oberman(2014)]huang-oberman
Y. Huang and A. Oberman.
Numerical methods for the fractional Laplacian: a finite
difference-quadrature approach.
SIAM J. Numer. Anal., 520 (6):0 3056–3084,
2014.
[Huang and Oberman(2016)]oberman
Y. Huang and A. Oberman.
Finite difference methods for fractional Laplacians.
Preprint arXiv:1611.00164, 2016.
[Ikeda et al.(1968-1969)Ikeda, Nagasawa, and Watanabe]inw
N. Ikeda, M. Nagasawa, and S. Watanabe.
Branching Markov processes I, II, III.
J. Math. Kyoto Univ., 8-9:0 233–278, 365–410,
95–160, 1968-1969.
[Kwaśnicki(2017)]tendef
M. Kwaśnicki.
Ten equivalent definitions of the fractional Laplace operator.
Fractional Calculus and Applied Analysis, 200
(1):0 7–51, 2017.
[Le Gall(1995)]LGBroSna
J.-F. Le Gall.
The Brownian snake and solutions of Δ u=u^2 in a domain.
Probab. Theory Related Fields, 1020 (3):0
393–432, 1995.
[López-Mimbela(1996)]lm
J.A. López-Mimbela.
A probabilistic approach to existence of global solutions of a system
of nonlinear differential equations.
In Fourth Symposium on Probability Theory and Stochastic
Processes (Spanish) (Guanajuato, 1996), volume 12 of Aportaciones Mat.
Notas Investigación, pages 147–155. Soc. Mat. Mexicana, México, 1996.
[Mou(2017)]mou
C. Mou.
Perron's method for nonlocal fully nonlinear equations.
Analysis and PDE, 100 (5):0 1227–1254, 2017.
[Nagasawa and Sirao(1969)]N-S
M. Nagasawa and T. Sirao.
Probabilistic treatment of the blowing up of solutions for a
nonlinear integral equation.
Trans. Amer. Math. Soc., 139:0 301–310, 1969.
[Penent and Privault(2022)]penent
G. Penent and N. Privault.
Existence and probabilistic representation of the solutions of
semilinear parabolic PDEs with fractional Laplacians.
Stochastics and Partial Differential Equations: Analysis and
Computations, 10:0 446–474, 2022.
[Penent and Privault(2023)]penent2
G. Penent and N. Privault.
Existence of solutions for nonlinear elliptic PDEs with fractional
Laplacians on open balls.
Preprint arXiv:2110.09941v2, 21 pages, Communications on Pure and
Applied Analysis, to appear, 2023.
[Ros-Oton(2016)]ros-oton2016
X. Ros-Oton.
Nonlocal elliptic equations in bounded domains: a survey.
Publ. Mat., 600 (1):0 3–26, 2016.
[Ros-Oton and Serra(2014)]ros-oton
X. Ros-Oton and J. Serra.
The Dirichlet problem for the fractional Laplacian: Regularity up
to the boundary.
J. Math. Pures Appl., 1010 (3):0 275–302,
2014.
[Servadei and Valdinoci(2012)]servadei
R. Servadei and E. Valdinoci.
Mountain pass solutions for non-local elliptic operators.
J. Math. Anal. Appl., 3890 (2):0 887–898,
2012.
[Servadei and Valdinoci(2014)]servadei2014
R. Servadei and E. Valdinoci.
Weak and viscosity solutions of the fractional Laplace equation.
Publ. Mat., 580 (1):0 133–154, 2014.
[Skorokhod(1964)]skorohodbranching
A.V. Skorokhod.
Branching diffusion processes.
Teor. Verojatnost. i. Primenen., 9:0 492–497, 1964.
[Weron(1996)]weron1996chambers
R. Weron.
On the Chambers-Mallows-Stuck method for simulating skewed
stable random variables.
Statistics & probability letters, 280 (2):0
165–171, 1996.
A symbolic approach to discrete structural optimization using quantum annealing
Kevin Wils
Boyang Chen
§ INTRODUCTION
Quantum computers are rather unique devices that, by leveraging quantum mechanical principles, theoretically allow certain types of problems to be solved much more efficiently than is possible with classical computers <cit.>. While classical computers use binary bits, 1s and 0s, to perform their computations, quantum computers make use of quantum bits. Quantum bits, or qubits, can not only represent the classical 0 and 1 states, but can also exist in a quantum superposition of these states. This quantum superposition, when leveraged effectively, is one of the reasons why quantum computers promise better performance in certain applications.
There are two main types of quantum computers currently in development, being the General Purpose Quantum Computer (GPQC) and the Quantum Annealer (QA). With the GPQC, most of the potential improvements stem from the fact that these systems can run complex quantum algorithms, allowing for more efficient problem-solving methods to be devised. An overview of quantum algorithms is given by Montanaro <cit.>. On the other hand, a QA can only use the quantum annealing algorithm to solve very specific types of optimization problems, known as quadratic unconstrained binary optimization (QUBO) or Ising model problems <cit.>. The QA is said to leverage quantum tunneling to aid in finding optimal solutions to optimization problems <cit.>.
Both quantum computing technologies are still quite novel, and quantum computing hardware is still in its infancy compared to the advanced state of classical computing technologies. Research into practical applications of quantum computing stem only from the last several years, however, some studies have already shown promising results <cit.>. A review of applications on structural mechanics is given by Tosti Balducci et al. <cit.>. Furthermore, in industry, some companies are already pushing for the development of early practical applications of quantum computing <cit.>. For example, Airbus has posted the Airbus Quantum Computing Challenge. One of the challenges is to optimize a wingbox structure, the main load-bearing component in aircraft wings, using a quantum computer <cit.>. Another example comes from Volkswagen, who have researched how a QA can be used to optimize a traffic flow problem <cit.>. Volkswagen has already applied this research for the real-time optimization of public transport routes in Lisbon <cit.>.
In the aerospace industry there is a continuous demand to develop the most lightweight structures, as this can lead to increased fuel efficiency, or increased payload capacity, both of which can lead to higher profits. This research will investigate the concept of using a quantum computer to assist in the optimization of mechanical structures. More specifically, the optimization of 2D truss structures will be targeted, as they are among the simplest structures to start with and are easily scalable to arbitrarily complex forms. Of the two types of quantum computers, the QA is chosen due to the higher level of technological maturity compared to GPQC, offering significantly more qubits of processing power. The main commercial supplier of QA technology is the company D-Wave Systems Inc. In this research, the D-Wave 2000Q quantum annealer, which has roughly 2000 qubits available, is used. During the execution of this research, a monthly allowance of 60 seconds of quantum processing unit (QPU) access time was available. This limit was taken into account for the scope of the testing that was performed. Because the QA can only solve specific types of optimization problems, the objective in this work will be to investigate how the truss optimization problem can be made compatible with the QA.
The rest of the paper will start with a brief description of the background on QA in Section <ref>.
§ BACKGROUND ON QA: QUBO AND ISING FORMULATIONS
For any problem that one wishes to solve using a QA, the first step is to ensure the problem is written with either a QUBO or an Ising formulation. In both cases, the QA attempts to find a solution for which the Hamiltonian energy is minimal.
To define a QUBO problem, an N× N matrix Q is needed. The matrix Q is typically written as an upper triangular matrix. The QA attempts to find the optimal binary bitstring x of length N that minimizes the Hamiltonian energy H, as shown in <ref> <cit.>. When a solution to the QUBO problem is found, the binary variables in the solution vector x will then have values of either 0 or 1.
min(H) = x^T Qx
s.t. x_i ∈{0 , 1}∀ i ∈{ 1,2, …, N }
In the Ising formulation, the system energy is given by a Hamiltonian function, as shown in <ref> <cit.>.
H ( s) = ∑_i=1^N h_i s_i + ∑_i<j J_ij s_i s_j
In this equation, s = [ s_1 , s_2 , …, s_N ] are Ising spins, which can take values of either -1 or 1. The parameters h_i and J_ij are qubit biases (self-interaction) and coupling strengths (qubit-qubit interaction) respectively. The summation as defined in <ref> then yields the Ising Hamiltonian H <cit.>. The QA will attempt to find a solution for the Ising spins in s for which the Hamiltonian energy H is minimal.
For this research, the QUBO problem framework is used, because the binary nature of the problem variables provides a convenient foundation to define a truss optimization problem. This will be described in the upcoming section.
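As a concrete illustration of the QUBO framework, the following minimal Python sketch builds a small upper-triangular Q matrix (the coefficient values are arbitrary and unrelated to any problem in this paper) and finds the bitstring minimizing x^T Q x by exhaustive enumeration:

import itertools
import numpy as np

# Illustrative 3x3 upper-triangular QUBO matrix (arbitrary example values)
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_H = None, float("inf")
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    H = x @ Q @ x          # Hamiltonian energy x^T Q x
    if H < best_H:
        best_x, best_H = x, H

print("optimal bitstring:", best_x, "energy:", best_H)

For problems of realistic size such exhaustive enumeration is of course intractable, which is precisely where the QA is intended to take over.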
§ PROBLEM DESCRIPTION
To explore the feasibility of casting a structural optimization problem into a QUBO format, simple two-dimensional truss optimization problems will be investigated. A discrete truss sizing optimization problem can be defined, whereby the cross-sectional area of truss members can be chosen from a predefined discrete set (e.g., a finite number of cross-sectional configurations as provided by the truss supplier). The objective of the optimization is to find the cross-sectional area of each truss member such that the stress in every truss member is as close as possible to the material's limit stress, thereby achieving the optimal use of material and hence minimum weight of the structure.
The truss systems that are defined for this research incrementally increase in their complexity. In total, three truss systems are defined: a basic two-truss system, a three-truss system, and a slightly more complex four-truss system. The three sample truss systems are shown in <ref>.
The exact definitions of these truss systems, the boundary conditions, and the applied loads are given in <ref>. For each system, the same fictitious material is used, with a Young's modulus of 200 GPa, and a material limit stress of 100 MPa. The material limit stress is assumed to be identical for both compressive and tensile loads. Furthermore, for every truss element, three different allowable choices for the truss cross-sectional area are defined, as shown in <ref>.
Having defined the three sample problems, the next step is to start investigating the method by which these discrete truss sizing optimization problems can be cast into a QUBO format. In the upcoming section this process will be described, along with the difficulties and pitfalls that were encountered.
§ METHOD
§.§ General Concept
For a truss system consisting of N truss elements, a set of C discrete choices is defined for the cross-sectional area of every truss element. As such, for every truss element n the set of possible choices can be written as shown in <ref>.
A_n,set={A_n,1 , A_n,2 , … , A_n,C}
To define the cross-sectional area of truss n, a set of binary qubit variables is needed that each corresponds to one of the possible choices of cross-sectional area, giving <ref>:
q_n,set = {q_n,1 , q_n,2 , … , q_n,C}
With: q_n,c∈{ 0,1 } ∀ c ∈{ 1,2,…, C }
The total cross-sectional area of truss n can then be defined by summation of the discrete choices and their corresponding binary qubit variable as shown in <ref>.
A_n = ∑_c=1^Cq_n,c A_n,c
If, for truss n, exactly one of the qubits in q_n,set is equal to 1 and the others are equal to 0, then this binary variable corresponds directly to a particular choice of cross-sectional area. For a truss system of N truss elements and C discrete choices per truss element, N × C variables are needed in total. Together, they form the solution vector of the problem. For example, for the two-truss problem, the solution vector would be defined as:
q = [q_1,1, q_1,2, q_1,3, q_2,1, q_2,2, q_2,3]
To select the mid-sized choice for each truss element in the two-truss problem, the solution vector would evaluate to:
q = [0, 1, 0, 0, 1, 0]
It may be noted at this point that it is technically possible to select multiple cross-sectional areas for a single truss element. This would occur when a truss element has more than one associated binary variable set to a value of 1. Per <ref>, this would mean that the area of the truss element becomes the summation of multiple available choices. However, in this research the goal is to make a single distinct choice from the available set of choices. Solutions where exactly one cross-sectional area is selected per truss element will be considered valid solutions. Any other potential solution, selecting either none, or multiple cross-sectional areas per truss element will be considered invalid.
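The encoding of <ref> and the validity check described above can be expressed compactly in Python; the area values and helper names below are illustrative assumptions, not the values from <ref>:

# Hypothetical cross-sectional area choices (mm^2) for each of N = 2 truss elements
A_choices = [[100.0, 200.0, 300.0],   # choices for truss 1
             [100.0, 200.0, 300.0]]   # choices for truss 2

def areas_from_solution(q, A_choices):
    """Map a flat binary solution vector q to one area per truss element."""
    C = len(A_choices[0])
    areas = []
    for n, choices in enumerate(A_choices):
        bits = q[n * C:(n + 1) * C]
        areas.append(sum(b * A for b, A in zip(bits, choices)))
    return areas

def is_valid(q, A_choices):
    """A solution is valid only if exactly one choice is selected per element."""
    C = len(A_choices[0])
    return all(sum(q[n * C:(n + 1) * C]) == 1 for n in range(len(A_choices)))

q = [0, 1, 0, 0, 1, 0]                      # mid-sized choice for both elements
print(areas_from_solution(q, A_choices))    # -> [200.0, 200.0]
print(is_valid(q, A_choices))               # -> True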
With the truss cross-sectional area defined in terms of qubit variables as shown above, a symbolic solution process for the truss optimization problems has been conceived, such that the objective function would eventually be expressed as a QUBO function of these qubit variables. We would then use QA to find the solutions of these qubit variables which would minimize the objective function. The following steps summarize this symbolic solution process:
* Using the expression for the truss cross-sectional area, the element stiffness matrices of the truss members can also be written in terms of the qubit variables.
* Using the FEM assembly procedure, the symbolic global stiffness matrix of the entire truss structure can be assembled from each of the element stiffness matrices.
* Proceeding as normal with the FEM analysis, boundary conditions must be taken into account, and a vector of applied loads must be known. By inverting the symbolic global stiffness matrix, and multiplying this inverse matrix with the load vector, a symbolic vector of nodal displacements can be obtained.
* Using the symbolic vector of nodal displacements, and the known initial length of every truss element, symbolic expressions for the strains of truss elements can be set up.
* By multiplying the symbolic expressions of the truss strains with the Young's modulus, symbolic expressions for the truss stresses are obtained.
* The symbolic expressions for the truss stresses will be used to construct an objective function for which the minimum solution encodes the optimal choice of cross-sectional area for every truss element in the structure.
* The symbolic objective function will be transformed into a QUBO format, then sent to D-Wave to find the minimum solution.
§.§ Symbolic Finite Element Method
§.§.§ Finding Expressions for Nodal Displacement
In the displacement-based linear finite element method for truss structures, the nodal Degrees of Freedom (DoFs) are the displacements. The stiffness matrix of each truss element can be defined by its nodal coordinates, cross-sectional area, and the Young's modulus of the material. These element stiffness matrices are then assembled according to the element's connectivity matrix to form a global system equation, typically as shown in <ref>.
K u = f
The matrix K is known, and represents the global stiffness matrix of the finite-element structure. The vector f is also known, as this vector defines the forces which are applied to the structure. The goal for the linear finite-element problem is to find the solution vector u, which contains the displacement of every node in the structure. By knowing the displacements of all nodes in the structure, other metrics such as the element strain and stress can also be calculated.
As the element stiffness matrix depends on the cross-sectional area, which is now represented symbolically in <ref>, the stiffness matrix of truss element n will then be a function of the associated qubit variables in <ref>. The assembled global stiffness matrix will then be a function of all the qubit variables of this structure, i.e., K will be a function of the vector q as shown in <ref> for the two-truss example. A script has been written that can set up the truss finite-element system equation symbolically:
K(q) u = f
The above symbolic system equation has been implemented and solved in both Python and Matlab. Based on our experience, the Matlab backslash operator seems to solve for the solution u faster than Python Sympy. Hence, it is used here to obtain the symbolic nodal displacement vector:
u(q) = K(q) \f
Once the symbolic expressions for the nodal displacements were obtained, it became clear why they had been troublesome to calculate: the expressions are extremely long, even for such simple finite-element problems. This constitutes a bottleneck of the proposed approach, which we will discuss later. Nevertheless, once these expressions have been obtained, the symbolic finite-element process can be continued.
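As an indication of how such a symbolic solve can be set up (the actual implementation in this work uses Matlab and the full truss geometries of the sample problems), the following minimal SymPy sketch assembles and solves a toy system of two collinear bar elements whose areas are expressed in qubit variables; the geometry, load, and area values are illustrative assumptions only:

import sympy as sp

E, L, P = 200e9, 1.0, 1e4          # Young's modulus [Pa], element length [m], tip load [N]
A_choices = [1e-4, 2e-4, 3e-4]     # illustrative cross-sectional area choices [m^2]

# One binary symbol per (element, choice); the element areas follow the encoding above
q = sp.symbols('q1_1 q1_2 q1_3 q2_1 q2_2 q2_3')
A1 = sum(qi * Ai for qi, Ai in zip(q[:3], A_choices))
A2 = sum(qi * Ai for qi, Ai in zip(q[3:], A_choices))

# Two collinear bar elements in series, fixed at the left end; the free DoFs are the
# axial displacements of the two remaining nodes.
K = sp.Matrix([[E*A1/L + E*A2/L, -E*A2/L],
               [-E*A2/L,          E*A2/L]])
f = sp.Matrix([0, P])

u = K.LUsolve(f)                                 # symbolic nodal displacements u(q)
stress2 = sp.simplify(E * (u[1] - u[0]) / L)     # stress in element 2 (engineering strain)
print(stress2)                                   # a ratio of polynomials in the qubits

Even in this toy case the stress comes out as a ratio of polynomials in the qubit variables, foreshadowing the fractional form of the objective function discussed below.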
§.§.§ Finding Expressions for Strain
Engineering strain is a common choice of strain measure for truss elements:
ϵ = L_𝑑𝑖𝑠𝑝 - L_0/L_0
where the original length of the truss element is denoted as L_0, and the deformed length of the truss element as L_𝑑𝑖𝑠𝑝. However, when this strain was implemented, it was found that this leads to a symbolic expression for the truss strain that contains many absolute functions. The appearance of these absolute functions was problematic as they prevented the symbolic expression from being written purely as a sum of quadratic polynomial terms, which is eventually necessary for the QUBO problem formulation.
To circumvent the appearance of the absolute functions in the symbolic expressions for the truss strain, the Green-Lagrange strain formulation was implemented, as given in <ref>. Under the infinitesimal deformation assumption, when L_0≈ L_𝑑𝑖𝑠𝑝, <ref> and <ref> are equivalent.
ϵ = 1/2(L_𝑑𝑖𝑠𝑝^2/L_0^2 - 1)
With the Green-Lagrange strain implementation it was seen that the absolute functions no longer appeared in the symbolic truss strain expressions. Thus, the symbolic expressions for the strain of every truss element in the sample truss sizing optimization problems could be found. In turn, these can then be used to find expressions for the truss element stresses.
§.§.§ Expressions for Stress
Expressions for truss stresses follow from <ref>, in which the material Young's modulus is denoted by E.
σ = E ϵ
To give an example of the symbolic expressions for the truss element stresses, <ref> gives the stress in truss element 1 for the two-truss problem. Note that the expression for the stress in the truss element takes a fractional form.
σ_1 = N_1/D_1
With:
N_1 = 7.4046706× 10^29× q_1,1
+9.3715363× 10^29× q_1,2
+1.1569798× 10^30× q_1,3
+9.0707215× 10^30× q_2,1
+1.0412818× 10^31× q_2,2
+1.1847473× 10^31× q_2,3
+1.6660509× 10^30× q_1,1× q_1,2
+1.8511677× 10^30× q_1,1× q_1,3
+2.0825636× 10^30× q_1,2× q_1,3
+1.4664165× 10^34× q_1,1× q_2,1
+1.6833582× 10^34× q_1,1× q_2,2
+1.6497186× 10^34× q_1,2× q_2,1
+1.9152597× 10^34× q_1,1× q_2,3
+1.893778× 10^34× q_1,2× q_2,2
+1.8330206× 10^34× q_1,3× q_2,1
+2.1546671× 10^34× q_1,2× q_2,3
+2.1041978× 10^34× q_1,3× q_2,2
+2.3940746× 10^34× q_1,3× q_2,3
+1.943726× 10^31× q_2,1× q_2,2
+2.0733078× 10^31× q_2,1× q_2,3
+2.2214012× 10^31× q_2,2× q_2,3
+3.1415357× 10^34× q_1,1× q_2,1× q_2,2
+3.3509714× 10^34× q_1,1× q_2,1× q_2,3
+3.5342277× 10^34× q_1,2× q_2,1× q_2,2
+3.5903265× 10^34× q_1,1× q_2,2× q_2,3
+3.7698428× 10^34× q_1,2× q_2,1× q_2,3
+3.9269196× 10^34× q_1,3× q_2,1× q_2,2
+4.0391173× 10^34× q_1,2× q_2,2× q_2,3
+4.1887143× 10^34× q_1,3× q_2,1× q_2,3
+4.4879081× 10^34× q_1,3× q_2,2× q_2,3
D_1 = 1.1847473× 10^32× q_1,1× q_2,1
+1.3600415× 10^32× q_1,1× q_2,2
+1.4994458× 10^32× q_1,2× q_2,1
+1.547425× 10^32× q_1,1× q_2,3
+1.7213026× 10^32× q_1,2× q_2,2
+1.8511677× 10^32× q_1,3× q_2,1
+1.9584598× 10^32× q_1,2× q_2,3
+2.1250649× 10^32× q_1,3× q_2,2
+2.4178516× 10^32× q_1,3× q_2,3
+2.6656814× 10^32× q_1,1× q_1,2× q_2,1
+3.0600935× 10^32× q_1,1× q_1,2× q_2,2
+2.9618683× 10^32× q_1,1× q_1,3× q_2,1
+3.4817064× 10^32× q_1,1× q_1,2× q_2,3
+3.4001039× 10^32× q_1,1× q_1,3× q_2,2
+3.3321018× 10^32× q_1,2× q_1,3× q_2,1
+3.8685626× 10^32× q_1,1× q_1,3× q_2,3
+3.8251169× 10^32× q_1,2× q_1,3× q_2,2
+4.352133× 10^32× q_1,2× q_1,3× q_2,3
+2.5387442× 10^32× q_1,1× q_2,1× q_2,2
+2.7079938× 10^32× q_1,1× q_2,1× q_2,3
+3.2130982× 10^32× q_1,2× q_2,1× q_2,2
+2.901422× 10^32× q_1,1× q_2,2× q_2,3
+3.4273047× 10^32× q_1,2× q_2,1× q_2,3
+3.9667878× 10^32× q_1,3× q_2,1× q_2,2
+3.6721122× 10^32× q_1,2× q_2,2× q_2,3
+4.2312404× 10^32× q_1,3× q_2,1× q_2,3
+4.5334718× 10^32× q_1,3× q_2,2× q_2,3
+5.7121745× 10^32× q_1,1× q_1,2× q_2,1× q_2,2
+6.0929861× 10^32× q_1,1× q_1,2× q_2,1× q_2,3
+6.3468606× 10^32× q_1,1× q_1,3× q_2,1× q_2,2
+6.5281994× 10^32× q_1,1× q_1,2× q_2,2× q_2,3
+6.7699846× 10^32× q_1,1× q_1,3× q_2,1× q_2,3
+7.1402181× 10^32× q_1,2× q_1,3× q_2,1× q_2,2
+7.2535549× 10^32× q_1,1× q_1,3× q_2,2× q_2,3
+7.6162327× 10^32× q_1,2× q_1,3× q_2,1× q_2,3
+8.1602493× 10^32× q_1,2× q_1,3× q_2,2× q_2,3
For reference, the symbolic expressions for the truss stresses of each of the sample problems are available online <cit.>.
Now that a method has been implemented that can write the stresses in terms of qubit variables, the next step is to set up an objective function that, when minimized, should yield the most optimal choice of cross-sectional area for every truss element in the structure. However, this process uncovered another challenge, which is discussed in the upcoming section.
§.§ Development of an Objective Function
§.§.§ Fractional Objective Function
Using the expression for the stress in a truss element, an objective function can be set up describing the difference between the truss element stress and the maximum limit stress allowed by the material properties. If such a difference is minimized, then the truss element will be as close as possible to the material failure stress, meaning the weight of the truss is implicitly minimized.
When setting up the objective function, it is important to consider that the truss element stress evaluates to a negative number for truss elements under compression. However, because it is not known beforehand which truss elements will experience compressive or tensile stresses, this poses an issue. It is not possible to apply an absolute function to the truss element stress, as such a mathematical function is incompatible with QUBO problem formulations. As an alternative, the expression for the truss element stress can be squared to ensure that it always evaluates to a positive number. To keep consistent units, this also means that the maximum allowable stress is squared. A minimization problem is obtained by squaring the difference between the squared material limit stress and the squared truss element stress. As such, for a single truss element n, a minimization objective function can be defined as shown in <ref>.
F_n = (σ_𝑙𝑖𝑚𝑖𝑡^2 - σ_n^2 )^2
With <ref>, the minimum solution should encode the choice of truss cross-sectional area that minimizes the absolute difference between the material limit stress and the truss stress. To set up an objective function that describes the entire system of truss elements, the summation is taken of the objective functions of every individual truss element, which leads to <ref>.
F = ∑_n=1^NF_n = ∑_n=1^N(σ_𝑙𝑖𝑚𝑖𝑡^2 - σ_n^2 )^2
With <ref> a general method is obtained that can be used to set up an objective function for each of the sample problems. It is important to note that, because the truss element stresses σ_n are fractional in nature, as was shown in <ref>, the objective function from <ref> will also have a fractional form. Hence, the resulting objective function is referred to as the fractional objective function. To test whether this method of setting up an objective function works as expected, the objective functions are set up for each of three sample problems, and are evaluated by a brute-force analysis, i.e., the objective function is evaluated at every possible solution. The results for the three sample problems are shown in <ref>, and were produced using the Matlab code which is available online <cit.>. The following global minimum solutions were found:
* Two-truss problem: minimum at solution number 7, [ 0,0,1,1,0,0 ]
* Three-truss problem: minimum at solution number 21, [ 0,0,1,1,0,0,0,0,1 ]
* Four-truss problem: minimum at solution number 7, [ 1,0,0,1,0,0,0,0,1,1,0,0 ]
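Such a brute-force evaluation amounts to enumerating the valid one-hot selections, as in the following minimal Python sketch, where `objective` stands in for the symbolic fractional objective function (assumed here to be available as a callable) and the toy objective is only a placeholder:

import itertools

N, C = 2, 3                      # number of truss elements and choices per element
one_hot = [tuple(int(i == c) for i in range(C)) for c in range(C)]

def brute_force(objective):
    """Evaluate the objective at every valid solution and return the minimizer."""
    best_q, best_F = None, float("inf")
    for combo in itertools.product(one_hot, repeat=N):
        q = [bit for block in combo for bit in block]   # flatten to the solution vector
        F = objective(q)
        if F < best_F:
            best_q, best_F = q, F
    return best_q, best_F

# Example with a stand-in objective (not the real truss objective):
toy = lambda q: sum((i + 1) * b for i, b in enumerate(q))
print(brute_force(toy))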
The above brute-force results will be used as references to benchmark the performance of QA. One may be tempted to directly send the objective function in <ref> to QA and see how it performs. However, one challenging aspect remains: the function has a fractional form. Fractional objective functions are incompatible with QUBO problem formulations, and cannot be directly solved by the QA. As such, in the next section, a method is investigated by which a potential non-fractional objective function can be set up that approximates the original fractional one.
§.§.§ Non-Fractional Objective Function
In a QUBO-based objective function, only a summation of linear and quadratic terms is allowed. Therefore, in this section a potential method is investigated to formulate a non-fractional objective function for the discrete truss sizing optimization problem.
The reserve factor of a truss element gives a measure of how close the truss element is to material failure. The reserve factor for a truss element n is calculated via <ref>.
RF_n = σ_𝑙𝑖𝑚𝑖𝑡/|σ_n|
To avoid the absolute value function, which is incompatible with the QUBO framework, the material limit stress σ_limit and the truss element stress σ_n can both be squared. By doing so, a squared reserve factor can be calculated, according to <ref>.
RF^2_n = σ_𝑙𝑖𝑚𝑖𝑡^2/σ^2_n
Using <ref>, the theoretical optimal value of the squared reserve factor would be 1 if the cross-sectional area could be varied arbitrarily. Since the squared reserve factor is written as a fraction, the numerator and the denominator should be equal to each other in the optimal case, and the numerator minus the denominator should then yield zero. For a sub-optimal design, however, this difference results in an error term. An optimization problem can then be set up in which the goal is to minimize this error, by simply squaring the expression. This ensures that the optimum solution is the one for which the squared error is closest to zero, giving an objective function for the minimization. The steps of this process are shown in <ref>.
If: RF^2_n = σ_𝑙𝑖𝑚𝑖𝑡^2/σ^2_n and σ^2_n = σ^2_N,n/σ^2_D,n
Rewriting: RF^2_n = σ_𝑙𝑖𝑚𝑖𝑡^2 σ^2_D,n/σ^2_N,n = N_n/D_n
When: RF^2_n = N_n/D_n = 1
Then: N_n=D_n
Optimally: N_n-D_n = 0
Sub-optimally: N_n-D_n = ϵ
Optimizing: min(ϵ^2)⇒min((N_n-D_n)^2)
Considering the above reasoning, the non-fractional objective function for a truss element n is written in <ref>.
F_n = (N_n - D_n)^2 = ( σ_𝑙𝑖𝑚𝑖𝑡^2 σ^2_D,n - σ^2_N,n)^2
The objective function for the complete truss system is then found by taking the sum for all truss elements, as given in <ref>.
F = ∑_n=1^NF_n = ∑_n=1^N( σ_𝑙𝑖𝑚𝑖𝑡^2 σ^2_D,n - σ^2_N,n)^2
This reformulated non-fractional objective function is again tested with the sample problems, using the brute-force analysis method <cit.>, to find out if the minimum solutions remain the same as those of the original fractional objective function. The results of these analyses are shown in <ref>, with the global minimum solutions indicated with a small red circle. The following global optimum solutions were obtained:
* Two-truss problem: minimum at solution number 7, [ 0,0,1,1,0,0 ]
* Three-truss problem: minimum at solution number 1, [1,0,0,1,0,0,1,0,0]
* Four-truss problem: minimum at solution number 1, [1,0,0,1,0,0,1,0,0,1,0,0]
Using the non-fractional objective function, the expected solution for the two-truss problem is returned. For the three- and four-truss problems, however, the results are not those expected, so this method is presumed to be flawed. In the method, the difference between N_n and D_n is considered, rather than the ratio N_n/D_n which defines the squared reserve factor. A solution that minimizes the difference between N_n and D_n is not necessarily the solution for which the ratio between N_n and D_n is closest to a value of 1. This is likely the source of the unexpected behavior produced by this objective function method. In the next section, an alternative method is discussed with which a fractional objective function can still be made compatible with the QA.
§.§ Iterative Non-Fractional Approximations to the Fractional Objective Function
A method was developed in <cit.> with which the optimum solution to fractional objective functions can be found by iteratively solving an adaptive non-fractional function. The iterative scheme described by the authors is presented in <ref>.
With : F = N (𝐪) /D (𝐪)
0: Initialize variables
𝑖𝑡𝑒𝑟 = 0
λ = 0
𝑜𝑏𝑗 = ∞
δ = 10^-6
1: 𝐰𝐡𝐢𝐥𝐞| λ - obj | > δ
2: 𝑖𝑡𝑒𝑟 = 𝑖𝑡𝑒𝑟 + 1
3: F_nf(𝐪) = N (𝐪) - λ D(𝐪)
4: find 𝐪̂ s.t. min(F_nf(𝐪̂))
5: 𝑜𝑏𝑗 = λ
6: λ = F(𝐪̂)
In this scheme the fractional objective function F(𝐪), written in terms of the binary problem variables 𝐪, is rewritten into a non-fractional form. This happens in step 3 of the scheme, producing the non-fractional objective F_nf(𝐪). If this non-fractional function can be written in a QUBO form, then the partially optimal solution 𝐪̂ can be found using the QA, in step 4 of the iterative scheme. In step 6, the objective function value of the original fractional function is evaluated using the partially optimal solution that was found to the non-fractional function. When the difference between the objective function values found in consecutive iterations is less than δ the while-loop is broken and the analysis has converged. Ultimately, the final solution that is found for 𝐪̂ will be the minimum solution to the original fractional objective function.
This iterative method for finding the optimal solution to fractional objective functions was implemented in the Python scripts for the discrete truss sizing optimization problems <cit.>. However, the method was slightly extended to include a user-defined maximum allowed number of iterations. This was added as an additional condition to the while-loop in step 1 of <ref>. In other words, step 1 is redefined as:
𝐰𝐡𝐢𝐥𝐞| λ - obj | > δ AND 𝑖𝑡𝑒𝑟≤𝑖𝑡𝑒𝑟_max
Setting a maximum number of iterations will help to prevent wasting excessive quantum computational time in the case that the analysis has difficulty converging on an optimal solution.
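A compact Python sketch of this extended iterative procedure is given below; `solve_qubo_min` is a placeholder for whatever solver (the QA, simulated annealing, or brute force) returns an approximate minimizer of the non-fractional subproblem N(q) - λD(q):

def solve_fractional(N_func, D_func, solve_qubo_min, delta=1e-6, iter_max=15):
    """Iteratively minimize F(q) = N(q)/D(q) via adaptive non-fractional subproblems.

    N_func, D_func : callables evaluating the numerator and denominator for a solution q
    solve_qubo_min : callable taking the current lambda and returning an (approximate)
                     minimizer of N(q) - lambda*D(q); in practice this is where the
                     quantum annealer (or a classical stand-in) is invoked
    """
    lam, obj, it, q_hat = 0.0, float("inf"), 0, None
    while abs(lam - obj) > delta and it < iter_max:   # extended with an iteration cap
        it += 1
        q_hat = solve_qubo_min(lam)                   # step 4: minimize N(q) - lam*D(q)
        obj = lam                                     # step 5
        lam = N_func(q_hat) / D_func(q_hat)           # step 6: evaluate original F(q_hat)
    return q_hat, lam, it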
§.§ Objective Function Processing to Yield a QUBO Problem
With the iterative scheme defined in <cit.>, the fractional objective function for the discrete truss sizing optimization problem can be solved by iteratively finding the solution to an adaptive non-fractional function. Once this non-fractional form is determined, a number of processing and simplification steps can be taken to finally yield a QUBO problem. By reducing the complexity of the objective function, the QA might be able to more easily identify the global optimum solution. The processing steps will allow the user to fine-tune the performance of the QA more easily, and aid in finding valid solutions to the discrete truss sizing optimization problem. In the following sections the simplifications and processing steps are discussed.
§.§.§ High-Order Truncation
The first simplification to the rewritten non-fractional objective function is to truncate excessively high-order terms. Specifically, for a system with N truss elements, any term in the objective function above N^th order can be truncated. These terms do not contribute any useful information about valid solutions to the optimization problem: for a valid solution, exactly N qubits are expected to end in a 1 state, while all other qubits should end in a 0 state, so terms above N^th order never contribute to valid solutions. They can therefore safely be truncated to simplify the overall objective function, without influencing valid problem solutions.
The objective functions generally contain every possible unique multiplication between the qubit variables. Therefore, truncating the excessively high-order terms from the objective function can have a very large impact on the overall complexity of the objective function. This was investigated with a simple Excel sheet, which is made available online <cit.>. For example, in the two-truss problem, a total of six qubit variables are used. In total, this gives 63 different unique multiplications of qubit variables from first to sixth order. For this problem, all terms above second order only contribute information to invalid problem solutions, as for valid solutions it is expected that exactly two qubit variables will be given a value of 1. As such, terms above second order can all safely be removed from the objective function. After truncating the terms above second order, only 21 total first- and second-order terms remain in the objective function. This means that the overall number of terms in the objective function is reduced by 66%. In a similar vein, for the three-truss problem, this truncation reduces the number of terms in the objective function by roughly 75%. For the four-truss problem, the reduction is approximately 80%. It is expected that such a significant reduction in the complexity of the objective function will benefit the QA, allowing it to more easily find valid global minimum solutions to the truss sizing optimization problems.
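If the objective function is represented as a dictionary mapping tuples of qubit variables to coefficients (a common convention for higher-order binary polynomials), this truncation is a one-line filter; the variable names and coefficients below are illustrative only:

def truncate_high_order(poly, max_order):
    """Drop terms whose order exceeds max_order.

    poly : dict mapping a tuple of qubit names (one entry per distinct variable
           in the term) to the term's coefficient, e.g. {('q1_1', 'q2_3'): 4.2}
    """
    return {term: coef for term, coef in poly.items() if len(term) <= max_order}

# For the two-truss problem (N = 2), all terms above second order are removed:
poly = {('q1_1',): 1.0, ('q1_1', 'q2_1'): 2.0, ('q1_1', 'q1_2', 'q2_1'): 3.0}
print(truncate_high_order(poly, max_order=2))
# -> {('q1_1',): 1.0, ('q1_1', 'q2_1'): 2.0}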
§.§.§ Linear Scaling
When submitting problems to the QA, there are a number of parameters that can be fine-tuned to alter or improve the performance of the QA. The magnitude of these parameters relative to each other is typically what drives the performance of the QA. To set a consistent baseline for solving the truss sizing optimization problems, it is therefore convenient to scale the magnitude of the problem to a user-defined value. In this way, the magnitude of the truss optimization problem can be scaled with respect to other constraints. Thus, a linear scaling of each of the terms in the objective function can be performed, to ensure that the terms have a consistent and controllable magnitude.
First, the term with the maximum absolute magnitude in the objective function, c_𝑚𝑎𝑥, must be found. Then, every term in the objective function F can be divided by this magnitude, to perform the linear scaling. A user-defined parameter, c_𝑢𝑠𝑒𝑟, is introduced to ensure that the maximum magnitude of the terms in the objective function can be exactly specified. Thus, the linear scaling of the objective function F is performed as shown in <ref>.
F_scaled = c_𝑢𝑠𝑒𝑟× F/c_𝑚𝑎𝑥
Inside the linearly scaled objective function F_scaled, the term with the maximum magnitude will have a magnitude of c_user. By altering the value of c_𝑢𝑠𝑒𝑟, the user is able to control the relative importance of the objective function with respect to other problem-specific parameters.
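Using the same dictionary representation of the objective function, the linear scaling of <ref> can be sketched as follows (coefficients illustrative):

def linear_scale(poly, c_user=1.0):
    """Scale every coefficient so that the largest absolute coefficient equals c_user."""
    c_max = max(abs(coef) for coef in poly.values())
    return {term: c_user * coef / c_max for term, coef in poly.items()}

print(linear_scale({('q1_1',): 5.0, ('q1_1', 'q2_1'): -20.0}))
# -> {('q1_1',): 0.25, ('q1_1', 'q2_1'): -1.0}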
§.§.§ Non-Linear Scaling
Throughout the brute-force testing of the objective functions for the different sample problems in <ref>, it was seen that with certain objective functions the global minimum solution and other local minimum solutions can have nearly identical function values. This was especially the case for the three-truss problem, using the fractional objective function. When the differences between solutions are small, it becomes difficult for the QA to find the global optimum solution. The main cause for very small differences between local minima and the global minimum is due to specific terms in the objective function with very small, high-precision coefficients.
Due to analog control errors, small and high-precision coefficients are difficult for the QA to correctly take into account <cit.>. Furthermore, when a problem is submitted to the D-Wave QA, it is by default automatically scaled (by auto_scale) to ensure the problem coefficients fall inside the controllable range of the QA. For linear QUBO coefficients this range is between -2 and 2. For the quadratic QUBO coefficients the maximum available range is between -2 and 1 <cit.>; however, by default the QA uses a range between -1 and 1. These ranges can be directly queried from the D-Wave solver with the following Python commands:
DWaveSampler().properties['h_range']
DWaveSampler().properties['j_range']
It is good to be mindful of this automatic scaling before submitting problems to the QA, to prevent cases where a problem is unexpectedly scaled down to a magnitude that cannot feasibly be controlled by the QA.
To assist the QA with small high-precision coefficients, it would be beneficial if these coefficients could be amplified. In turn, this may also amplify the differences between the global and other local minimum solutions. This would potentially help the QA in finding the global optimum solution. Furthermore, this may also help the QA to more effectively utilize information from the objective function, as previously insignificant terms may be amplified to a magnitude that the QA can actually take into account. To help address these issues, a non-linear scaling method was developed, with which very small coefficients in the objective function are scaled to become larger and more influential, while terms that are already significant are not scaled as much.
A Python function was created and implemented that performs this non-linear scaling. The method relies on simple user-defined parameters that will allow for manual tweaking once the problem has been fully implemented for use with the QA. Given the user-defined scaling parameter c_𝑁𝐿, a positive coefficient c_𝑖𝑛+ from the objective function is input into the non-linear scaling function. The non-linearly scaled coefficient c_𝑜𝑢𝑡+ is then found by <ref>.
c_𝑜𝑢𝑡+ = c_𝑖𝑛+/c_𝑖𝑛+ + c_𝑁𝐿
For negative input coefficients, c_𝑖𝑛-, the non-linear scaling is performed by <ref>, yielding the negative non-linearly scaled coefficient c_𝑜𝑢𝑡-.
c_𝑜𝑢𝑡- = -c_𝑖𝑛-/c_𝑖𝑛- - c_𝑁𝐿
To determine if the input coefficient must be scaled using either <ref> or <ref>, a simple if-statement is used. By setting c_𝑁𝐿 to be a certain small number, such as 0.025, the amount of scaling that is applied to small coefficients is much more aggressive than for relatively larger coefficients.
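A possible implementation of <ref> and <ref> is sketched below; the version actually used in this work is part of the published scripts <cit.>, and the example coefficients are arbitrary:

def nonlinear_scale(poly, c_NL=0.1):
    """Amplify small coefficients more strongly than large ones (sign-preserving)."""
    def scale(c):
        if c >= 0:
            return c / (c + c_NL)        # c_out+ = c_in+ / (c_in+ + c_NL)
        return -c / (c - c_NL)           # c_out- = -c_in- / (c_in- - c_NL)
    return {term: scale(coef) for term, coef in poly.items()}

print(nonlinear_scale({('q1_1',): 0.01, ('q1_1', 'q2_1'): 1.0}))
# small coefficient 0.01 -> ~0.091, large coefficient 1.0 -> ~0.909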
As a demonstration of the non-linear scaling, several plots are given in <ref>, showing the influence of changing the parameter c_𝑁𝐿. It can be seen that for small coefficients the scaling is much more significant than for relatively larger coefficients. However, for very aggressive values of c_𝑁𝐿, such as 0.001, this can also cause all relatively large coefficients to become essentially equal. The use of this non-linear scaling function will therefore be a balancing act of increasing the importance of small coefficients, while not losing distinction for the larger terms. The Python code for the non-linear scaling function, and for producing the plot from <ref> is available online <cit.>.
The effect of the non-linear scaling is also investigated for the specific truss sizing sample problems. The non-linear scaling is performed during the iterative solving procedure. Specifically, it is performed between steps 3 and 4 of the procedure outlined in <ref>, and is applied to the rewritten non-fractional objective function. The effect of the non-linear scaling is investigated for the first iteration of the sample problems. This first iteration is convenient to investigate, as <ref> shows that λ = 0, meaning that the objective function in step 3 in <ref> only involves the numerator of the original fractional objective function.
The rewritten non-fractional function is first linearly scaled, choosing the user-defined maximum coefficient to be c_𝑢𝑠𝑒𝑟 = 1. This brings the function values in the energy landscape to a more reasonable magnitude and allows for the non-linear scaling to work as intended. Then, the non-linear scaling can be applied, using a scaling parameter of c_𝑁𝐿=0.1. The plots in <ref> show both the original (linearly scaled) and non-linearly scaled energy landscapes of the first iteration in the solving procedure for the sample truss problems. It can be seen that the small fluctuations in the energy landscape are amplified, which should make the problem easier to solve for the QA. As such, the non-linear scaling will be applied when submitting problems to the QA.
It is important to note that this non-linear scaling function was developed solely for the purpose of increasing the differences between the global and local minima in the energy landscapes, as a part of this study. It is not intended to be used as a general-purpose tool for problems with unknown energy landscapes, since overly aggressive scaling factors for c_NL may cause the global minimum solution to change. However, this is not expected to be an issue for the truss sizing optimization problem: in step 6 of the iterative solving procedure in <ref>, the interim solution q̂ is evaluated with the original fractional objective function, meaning the iterative procedure should eventually converge on the global optimum of the original fractional objective function.
§.§.§ Truncation of Insignificant Terms
Once the linear and non-linear scaling has been performed, it is possible to further simplify the objective function. After all scaling has been applied, certain terms in the objective function might still have a magnitude very close to zero. Such terms are difficult for the QA to take into account, as there is a finite precision with which qubit biases can be controlled within the physical hardware of the QA <cit.>. Since the magnitude of terms in the objective function might be smaller than the precision that the QA can control, such terms cannot reliably be taken into account. This is because the analog control error for the qubit biases can be larger than the actual term in the objective function. Thus, terms that are too small in magnitude can simply be removed from the objective function, as trying to include them is akin to simply introducing noise into the function. A conservative truncation magnitude, beyond the precision that the QA can control, is to remove terms with magnitudes smaller than 10^-8. This reduces the complexity of the objective function, which may have a positive influence on the ability of the QA to find the global optimum solution to the truss sizing optimization problem.
§.§.§ Unary Constraint
The majority of all possible solutions to the truss optimization problems are invalid. Solutions are invalid when an incorrect number of cross-sectional areas are selected. In other words, when either zero or more than one cross-section is selected for a truss member, the solution is invalid. Solutions can only be valid when exactly one cross-sectional area is chosen for every truss member. To promote the selection of valid solutions, the unary constraint is implemented.
For every truss element n in a truss system, with a total of C possible choices of cross-sectional area, <ref> must be true for valid solutions to the truss sizing optimization problem.
∑_c=1^Cq_n,c = 1
Since only one of the qubits on which this constraint acts must take a value of 1, while the others must all equal 0, this constraint is sometimes referred to as the unary constraint. Further elaboration on the unary constraint is given in <cit.>. In the context of the truss optimization problem, it enforces that only one cross-section is chosen per truss element. However, in the form shown in <ref>, the constraint cannot be applied in the QUBO problem framework. This is because the constraint is currently written as an equality constraint, which by definition is incompatible with quadratic unconstrained binary optimization problems. For the constraint to become compatible with QUBO problems it must be rewritten as a minimization problem. A common method is to rewrite the equality constraint as a penalty function, using a `squared-error' approach <cit.>. This approach is also used, for example, in the works of <cit.>. Thus, the constraint can be rewritten as a minimization problem as shown in <ref>.
(∑_c=1^Cq_n,c) - 1 = 0
((∑_c=1^Cq_n,c) - 1)^2 = 0
Now, adding in a penalty scaling factor λ, the Hamiltonian energy penalty function for the unary constraint becomes:
H_U = λ((∑_c=1^Cq_n,c) - 1)^2
For the truss sizing optimization problems, only three possible choices of cross-sectional area are available per truss element, meaning that C=3. Therefore, <ref> can be expanded and simplified, as shown in <ref>.
H_U = λ(q_n,1 + q_n,2 + q_n,3 - 1 )(q_n,1 + q_n,2 + q_n,3 - 1 )
H_U = λ(q_n,1^2 + q_n,1q_n,2 + q_n,1q_n,3 - q_n,1.
. + q_n,1q_n,2 + q_n,2^2 + q_n,2q_n,3 - q_n,2.
. + q_n,1q_n,3 + q_n,2q_n,3 + q_n,3^2 - q_n,3.
. - q_n,1 - q_n,2 - q_n,3 +1 )
Knowing that q_n,c∈{0,1}, which means that q_n,c^2 = q_n,c, the expression can be further simplified:
H_U = λ(2 q_n,1q_n,2 + 2 q_n,2q_n,3 + 2 q_n,1q_n,3 - q_n,1 - q_n,2- q_n,3 + 1)
Lastly, the final constant term can be dropped, since it is independent of the qubit variables, and does not affect the minimization problem. Doing so makes the unary constraint penalty function compatible with the QUBO problem framework. Since, the penalty function can now be written as a pure summation of linear and quadratic terms, as shown in <ref>.
H_U = λ(2 q_n,1q_n,2 + 2 q_n,1q_n,3 + 2 q_n,2q_n,3 - q_n,1 - q_n,2- q_n,3)
By adding the unary constraint, for each truss element, to the overall truss sizing objective function, the QA is more likely to find valid solutions. The strength of the unary constraint can be fine-tuned by altering the value of the user-defined parameter λ. By trial and error it has been found that a good starting value of λ is twice the magnitude of the maximum term in the objective function. However, if invalid solutions are consistently found, the strength of the constraint can be increased until the constraint is no longer violated.
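In the dictionary representation used earlier, the expanded penalty of <ref> for a single truss element can be generated as in the following sketch (variable names illustrative):

def unary_penalty(qubit_names, lam):
    """QUBO terms enforcing that exactly one of the given qubits equals 1.

    Implements lam * (2 * sum_{c<c'} q_c q_c' - sum_c q_c), i.e. the expanded
    squared-error penalty of the unary constraint with the constant term dropped.
    """
    poly = {}
    for i, qi in enumerate(qubit_names):
        poly[(qi,)] = -lam
        for qj in qubit_names[i + 1:]:
            poly[(qi, qj)] = 2.0 * lam
    return poly

print(unary_penalty(['q1_1', 'q1_2', 'q1_3'], lam=10.0))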
§.§.§ Quadratization
Up until this point, the objective function has been manipulated, scaled, and truncated in order for it to become more compatible with the QUBO problem formulation. However, one key issue has yet to be solved: the function might still contain many terms that are greater than quadratic order. Therefore, it can still not be used with the QA, since per definition the QA can only solve quadratic problems. Performing a quadratization of a high-order objective function ensures that it is rewritten as a quadratic-order function, with equivalent solutions.
There are many different methods of quadratization discussed in literature, an extensive overview of which is given by <cit.>. Some of these methods utilize auxiliary variables to rewrite the high-order objective function into an equivalent quadratic-order expression, while other methods are able to do so without the need for auxiliary variables. Each method has its respective benefits and drawbacks. For example, it is convenient when no auxiliary variables are needed, yet in that case it may require much effort to rewrite the objective function in an equivalent quadratic form. Alternatively, if a method uses auxiliary variables, it might be easier to find an equivalent quadratic form, but the additional variables increase the complexity of the objective function, making it more difficult to find the optimum <cit.>.
In practice, the most straightforward way to perform the quadratization is to rely on the implementation provided by D-Wave <cit.>. In this implementation, all high-order terms are rewritten in terms of auxiliary variables, such that the final problem is at most of quadratic order. An additional user-defined parameter selects the strength with which the quadratization is enforced <cit.>. If the quadratization is not enforced correctly, this can result in a poor approximation of the original high-order objective function. The quadratization strength is problem-dependent and must be tuned such that the quadratization is always obeyed, in order to retain an accurate representation of the original high-order objective function.
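As an indication of how this looks in practice, the dimod package exposes a utility along the lines of the sketch below; the exact function name and signature may vary between dimod releases, and the polynomial coefficients here are arbitrary:

import dimod

# Cubic example term plus some lower-order terms (illustrative values only)
poly = {('q1_1',): -1.0,
        ('q1_1', 'q2_1'): 0.5,
        ('q1_1', 'q1_2', 'q2_1'): 2.0}

# Rewrite to an equivalent quadratic model; 'strength' penalizes violations of the
# auxiliary-variable substitutions and plays the role of the quadratization strength.
bqm = dimod.make_quadratic(poly, strength=10.0, vartype=dimod.BINARY)
print(bqm)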
With the quadratization, the discrete truss sizing optimization problem is finally written as a QUBO problem. This vital step finally concludes all of the preparatory work that was necessary for the problem to be compatible with the quantum computing hardware. The next section will discuss the solution process for the discrete truss sizing optimization problems.
§.§ Solving the Discrete Truss Sizing QUBO
Having completed the previous processing steps, the truss sizing optimization problems can now finally be written in a QUBO formulation and can be solved using the QA. In this section, some notable parameters that influence the behavior of the QA are discussed, as well as the testing approach for understanding the performance of the QA for the three sample problems.
§.§.§ Overview of Analyses
To solve the reference truss problems, two different analysis methods are applied. First, brute-force evaluation of the original fractional objective function is used to obtain a baseline solution. Then, quantum annealing is used, following the procedures described in the previous sections. The specifications of the local classical computing hardware that was used in this research are given in <ref>.
The brute-force analyses will simply serve to obtain a reference solution to the truss sizing optimization problems. There are other more efficient (and more complicated) classical analysis methods available for truss optimization problems, as reviewed by <cit.>. However, for the purposes of this study, simple brute-force analysis is sufficient to produce the reference solutions. Each of the brute-force analyses will be performed three times, so that an average solve time can be calculated.
The main focus for the QA analyses will be to find the computational time and the probability of obtaining the global optimum solution. It will also be measured how long the symbolic finite element approach takes to set up the original fractional objective function for the truss sizing optimization problems. The goal will be to find out whether it is feasible to apply quantum computing to these practical truss optimization problems. Furthermore, these analyses will show whether the methods outlined in the previous sections provide a feasible means of translating the truss sizing problem to a QUBO format. To calculate the average computational time and show a basic probability for finding the global optimum solution, the QA analyses are each performed ten times. The number of analyses could unfortunately not be increased due to limitations placed on the amount of quantum computational time allotted to basic user accounts on the D-Wave Leap platform.
§.§.§ Parameter Tuning
In the previous sections, a number of user-defined parameters were introduced that influence the objective functions of the three truss sizing problems. Values need to be chosen for these parameters before the problems can be submitted to the QA. Overall, the following parameters all need to be given values:
* Iterative solving procedure
* Maximum number of iterations allowed
* Iteration convergence threshold
* Objective function processing
* Highest order terms allowed
* Linear scaling magnitude
* Non-linear scaling strength
* Precision truncation magnitude
* Unary constraint strength
* Quadratization strength
* Quantum annealing
* Number of reads
* Chain strength
To aid in finding sensible values for most of the above parameters, without wasting the limited amount of quantum computational time available, an alternative classical solver is used. D-Wave provides a Simulated Annealing (SA) solving algorithm, which, similar to their quantum annealer, can be used to solve QUBO problems. The main difference is that the SA solver only relies on the local classical computing hardware, and does not expend any of the quantum computational time allowance. The SA solver was therefore used to test the functionality of the Python code, and to find initial values for the relevant problem parameters.
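A typical test run with the SA solver can be sketched as follows (assuming the dwave-neal package is installed; the QUBO dictionary here is an arbitrary placeholder, not one of the truss problems):

from neal import SimulatedAnnealingSampler

# Illustrative QUBO dictionary: keys are (variable, variable) pairs, values are coefficients
Q = {('q1_1', 'q1_1'): -1.0, ('q1_2', 'q1_2'): -1.0, ('q1_1', 'q1_2'): 2.0}

sampler = SimulatedAnnealingSampler()            # classical stand-in for the QA
sampleset = sampler.sample_qubo(Q, num_reads=64)
print(sampleset.first.sample, sampleset.first.energy)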
Values for the parameters related to the iterative solving procedure were first chosen. Testing via initial trial SA analyses showed that around 5 iterations are needed to find a converged solution. This value will be tripled for the quantum annealing analyses to give some room for potential errors to be corrected during the procedure. Thus, the maximum number of iterations within one solving attempt will be set to 15. If the procedure does not converge after the maximum number of iterations is reached, the analysis is stopped, to prevent excessive expending of computational time. The convergence threshold δ for the iterative procedure (as seen in <ref>) is set at a value of 10^-6, which is the same value used in the reference iterative scheme.
Starting values for the different objective function processing parameters were also found from initial trial SA analyses. First of all, the highest order of terms allowed in the objective function for each of the sample problems will be set equal to the number of truss elements. Second, it was chosen to set the linear scaling maximum magnitude c_𝑢𝑠𝑒𝑟 to a value of 1 for all analyses. Third, the non-linear scaling parameter c_𝑁𝐿 has been set to a value of 0.1 for all analyses. During brute-force testing it was seen that this value improves the distinction between minima in the energy landscapes for the sample problems, while preserving the global minima. Fourth, the unary constraint strength was set to a value of 10 for the two- and three-truss problems. For the four-truss problem it was set to a value of 20. Testing via SA showed that with these settings the constraint was obeyed consistently, yielding valid solutions to the truss sizing problems. Fifth, terms that have a magnitude smaller than 10^-8 are truncated, to slightly reduce the number of terms in the objective functions. This is a conservative truncation, as it is well beyond the precision that the QA hardware can control. Finally, the quadratization strength is set to a value of 10 for the two- and three-truss problems, and a value of 20 for the four-truss problem, such that it equally matches the strength of the unary constraint.
For the QA, some additional parameters are needed to solve the sample problems. The number of reads describes the number of times a specific problem is solved by the QA, before the final best-performing solution is returned to the user. Increasing the number of reads increases the likelihood of obtaining optimal solutions, at the expense of additional computational time. To minimize the expenditure of quantum computational time, a viable number of reads was first estimated using the SA solver.
Using the SA solver, some tests were performed in which the number of reads was set to values of 16, 64, and 256, for each of the three sample problems. For the two-truss problem, the global optimum results were obtained every single time, regardless of the number of reads. For the three-truss problem, using 64 and 256 reads reliably gave optimal or near-optimal results. For the four-truss problem, only when setting the number of reads to 256 was the global optimum result reliably obtained.
The settings for the number of reads for the QA analyses were chosen based on the testing that was done using SA. Namely, for the two-truss problem, the number of reads will be set to 16 and 64. For the three-truss problem the number of reads will be set to 64 and 256. Finally, for the four-truss problem, the number of reads will only be set to 256.
The chain strength parameter for the QA relates to a physical issue called chain break, which can occur while the QA is solving problems. To explain the fine-tuning of the chain strength parameter, some additional context will first be given to explain the chain break phenomenon.
A QUBO problem is defined through an upper-triangular N × N matrix Q. The terms on the diagonal of the matrix Q relate only to a single problem variable, while the off-diagonal terms describe an interaction between two different problem variables. On the D-Wave 2000Q, which was used during this research, each qubit can only directly interact with at most six other qubits <cit.>. This limited connectivity between qubits can present an issue for larger QUBO problems.
When a QUBO is submitted to the QA, it must be embedded onto the physical architecture of the QPU. The embedding maps the logical structure of a QUBO problem onto the physical structure of the QA, meaning that the logical problem variables are mapped to physical qubits on the QPU. However, when the QUBO problem requires high connectivity between logical problem variables, it can become impossible to embed every logical variable directly onto a single qubit. By chaining together strings of multiple qubits, and enforcing them to act as single logical problem variables, the connectivity between the embedded logical variables can be increased beyond the limitations of the physical hardware <cit.>.
In practice, problems can become very challenging for the QA to solve if the number of interactions between variables is high. A QUBO problem with high connectivity requirements will lead to embeddings with very long qubit chains. In turn, the longer the qubit chains are, the more difficult it is to enforce the chained qubits to act in unison. When a qubit chain is working as intended, the chained qubits will all end up in the same final ground state, all being either 0 or 1. However, when a qubit chain is broken the qubits in the chain end up with a mix of 0 and 1 states. This makes the final state of the qubit chain, and the intended final state of the corresponding logical problem variable unclear. By increasing the chain strength, the risk of chain breaks occurring is reduced. However, overly high chain strength should be avoided as the relative strength of the objective function itself is reduced, potentially making it more difficult to obtain the true global optimum solution.
Initially, a chain strength of 10 was selected, but it was seen that chain breaks would still occur for the three- and four-truss problems. For those problems a chain strength of 30 performed better, preventing chain breaks from occurring. There are very many additional parameters that can be tuned to alter the performance of the quantum annealer, but these were left in their default configurations <cit.>.
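For reference, a QPU submission with these parameters can be sketched as below, assuming access to the D-Wave Leap service is configured; the QUBO dictionary is again a placeholder, and the exact timing keys reported in the sample set may differ between solvers:

from dwave.system import DWaveSampler, EmbeddingComposite

Q = {('q1_1', 'q1_1'): -1.0, ('q1_2', 'q1_2'): -1.0, ('q1_1', 'q1_2'): 2.0}

sampler = EmbeddingComposite(DWaveSampler())     # handles minor-embedding and qubit chains
sampleset = sampler.sample_qubo(Q,
                                num_reads=256,   # number of anneal/readout cycles
                                chain_strength=30)
print(sampleset.first.sample, sampleset.first.energy)
print(sampleset.info['timing']['qpu_access_time'], 'us of QPU access time')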
In <ref> a summary is given of all parameters used for each of the analyses. Parameters that are irrelevant or not applicable for an analysis are indicated by NA.
§ RESULTS
§.§ Results: Two-Truss Problem
The two-truss problem is the simplest of the three sample problems. To set up the original fractional objective function for the problem, the symbolic finite element approach is used. For this problem, it took approximately 13.5 seconds to set up the fractional objective function. Once this objective function is found, it is written to a text file. The objective function is then imported and interpreted before performing every analysis, and no longer needs to be set up from scratch. This saves some time when performing the three brute-force analyses, and for the QA, where ten analyses are performed for each different setting of the number of reads.
For the brute-force analysis, evaluating the fractional objective function for valid solutions, an average of 0.234 seconds was needed to find the global optimum solution. The baseline global optimum solution is [0, 0, 1, 1, 0, 0]. This indicates the largest possible cross-sectional area should be chosen for the first truss element, while for the second truss element the smallest cross-sectional area should be chosen.
Using the QA, the analyses were performed ten times, to gain some insight into the probability of obtaining the globally optimal solution. In <ref>, the solutions which were found by the QA are presented in a histogram. Superimposed on the histogram is a line-plot of the original fractional objective function, which had previously been calculated by brute force.
It can be seen from <ref> that, regardless of the choice for the number of reads, the QA finds solution number 7, which corresponds to the same global optimum solution as is obtained via the brute-force analyses.
When it comes to the computational time for the QA analyses, the following results were obtained:
* With number of reads = 16
* Average total time = 19.52 s. Standard deviation = 5.10 s.
* Average QPU time = 59176 μs. Standard deviation = 15061 μs.
* With number of reads = 64
* Average total time = 19.06 s. Standard deviation = 0.21 s.
* Average QPU time = 118207 μs. Standard deviation = 10.7 μs.
From these results it is seen that setting the number of reads to 64 leads to more consistent analysis times, compared to using only 16 reads. On the other hand, by setting the number of reads to 64, the amount of QPU access time is increased significantly.
§.§ Results: Three-Truss Problem
The three-truss problem is a more complicated problem to set up compared to the two-truss problem, due to the increased number of variables. Setting up the fractional objective function for this problem, and writing that expression to a text file took approximately 81.4 seconds. Once the expression was written to a file, it could simply be imported and reused for every analysis. Overall, three brute-force analyses were performed for the three-truss problem, as well as ten QA analyses for each setting of the number of reads.
Considering the brute-force analysis of the fractional objective function, it took an average of 0.9 seconds to find the global optimum solution [0, 0, 1, 1, 0, 0, 0, 0, 1]. This solution indicates that the largest cross-sectional area should be chosen for the first and third truss elements, and that the smallest cross-section should be chosen for the second truss element.
With the QA, the analyses were performed with the number of reads set to values of 64 and 256. The solution probability histogram in <ref> shows the results that were obtained. The figure also shows the original fractional objective function, as was calculated via brute-force. Analyses that ended with non-valid results are indicated by the 'NV' column in the histogram.
From <ref>, it can be seen that there is a large difference in performance, depending on the setting for the number of reads. Setting the number of reads to 64 leads to non-valid solutions for most analyses. At this setting, the QA does not reliably produce usable results. However, when the number of reads is increased to 256, optimal or near-optimal valid solutions are obtained most of the time. From these results it is also clear that the problem is more difficult for the QA to solve compared to the two-truss problem.
As part of the QA analyses, the following computational times were measured:
* With number of reads = 64
* Average total time = 98.7 s. Standard deviation = 37.5 s.
* Average QPU time = 337477 μs. Standard deviation = 117808 μs.
* With number of reads = 256
* Average total time = 48.8 s. Standard deviation = 20.6 s.
* Average QPU time = 558414 μs. Standard deviation = 244331 μs.
It is seen that setting the number of reads to 256 leads to more consistent behavior, as evidenced by the smaller standard deviation for the total analysis time. Furthermore, the higher number of reads allows for the analysis to converge more quickly, as fewer iterations are generally needed to find a solution. However, the consequence of choosing a higher number of reads is that the amount of QPU access time is also increased.
§.§ Results: Four-Truss Problem
The last problem, with the highest number of variables involved, is the four-truss problem. Using the symbolic finite element method, it took approximately 3431 seconds to set up the fractional objective function. This is a very significant increase compared to the roughly 81 seconds that were needed to set up the objective function of the three-truss problem. In this case, it is a great benefit that the fractional objective function is written to a text file. Testing of the analysis procedures is much faster when the objective function text file can simply be imported and interpreted, compared to prefacing every analysis by a nearly hour-long setup process. Overall, the brute-force analysis was performed three times, and the QA analysis was performed ten times with the number of reads set to 256.
Via brute-force analysis, the global optimum solution [1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0], was found in an average of 4.4 seconds. This solution indicates that the first, second, and fourth truss elements should optimally use the smallest choice for the cross-sectional area. The third truss element should use the largest available cross-sectional area.
The histogram of the results that were obtained by the QA are shown in <ref>. The figure also shows a line plot of the original fractional objective function, which was determined by the brute-force analyses.
From <ref>, it is seen that seemingly random, but valid, solutions are obtained by the QA. Only one solution was found that comes within 5% of the global optimum solution. Namely, solution number 34 was found once and has an an objective function value of 1.103. The global optimum solution is located at solution number 7, and has an objective function value of 1.065. It is worth noting that seven out of the ten analyses ended with an iteration run-out, rather than ending with a converged final solution.
The computational time for the QA analyses was obtained:
* With number of reads = 256
* Average total time = 437 s. Standard deviation = 39.0 s.
* Average QPU time = 1280156 μs. Standard deviation = 251109 μs.
The average time to solve is approximately nine times longer than that for the three-truss problem, using the same number of reads. Furthermore, the average of 1.28 seconds of QPU access time per analysis meant that any further testing was not possible for this problem, as this would consume too much of the 60-second monthly QPU access budget.
§ CONCLUSIONS
This work has established a new method to apply QA to the discrete optimization of truss structures. A symbolic FEM approach is employed to express the objective function in terms of qubit variables. As the qubit variables encode the design choices of the truss structure, they would inevitably take part in the formulation of the stiffness matrix of the structure, which leads to a fractional form for the objective function. 's iterative approach is used to approximate the original fractional objective function with a non-fractional one such that it can be made compatible with QA. The proposed method has been applied on three different discrete truss sizing optimization problems. It is found that the QA is able to find the global minimum solution if sufficient reads can be afforded. However, there are several challenges that need to be addressed for this approach to be scalable to larger problems.
§.§ Symbolic Finite Element Method
The first step to translating the truss sizing problems to a compatible format is to define an objective function, written in terms of binary variables. For this purpose the symbolic finite element method was used, which was implemented in Matlab. The time to setup the objective function for each of the three sample problems was measured, and is reiterated in <ref>.
The timings given in <ref> show that the symbolic finite element method implementation scales poorly as the number of problem variables increases. Even if the classical computing hardware used for this method improves, this method is likely infeasible for larger problems. While the symbolic finite element method allows the practical application of quantum annealing to be investigated for simple truss sizing problems, the current implementation cannot be directly applied to larger and more realistic truss structures. Further research is needed to investigate more efficient ways of solving symbolic system equations.
§.§ Fractional Objective Function
When the symbolic finite element method is used to set up an objective function for a discrete truss sizing optimization problem, the resulting objective function has a fractional form. This further complicates the journey to achieving a QUBO compatible form. An iterative method was implemented that rewrites the original fractional objective function into a non-fractional form. This non-fractional function can then be further processed in order to achieve the final QUBO problem. However, a number of points make the current implementation difficult to use on the QA hardware.
Firstly, the objective function that is obtained from the symbolic finite element method contains all possible multiplications between the available binary variables. This means that the problem initially requires an all-to-all connectivity between qubits, if it was to be directly embedded on the QPU. This research describes a number of steps that were taken to reduce the complexity of the truss sizing problem, such as removing excessively high-order terms, and truncating insignificant terms. Nevertheless, the connectivity requirements for the truss sizing problems remain far beyond the natural capability of the QA, meaning that long qubit chains are needed to embed the truss sizing problems. Not only do these long qubit chains have a negative impact on the performance of the QA, but larger truss sizing problems could eventually become infeasible to embed on the QPU, as the problem requirements could easily outgrow the physical capabilities of the QA hardware.
Secondly, as the symbolic finite element method yields a fractional objective function, the current approach relies on an iterative scheme in order to translate the objective function to a QUBO format. This means that various sources of overhead are repeatedly added to the solving process: repeatedly processing new non-fractional objective functions, waiting for an embedding to be calculated, awaiting your turn in the D-Wave problem submission queue, waiting to obtain the results from D-Wave, and repeatedly going through potentially inefficient Python programming. All of this contributes to the long total solve time, particularly for larger problems.
§.§ Quantum Annealing
When finally the objective function is successfully translated to a QUBO compatible form, it is submitted to the D-Wave QA to obtain a solution. From the results obtained for the three sample problems, it is seen that the QA has increasing difficulty in finding optimal or near-optimal solutions as the problem gets larger. For the largest sample problem, the optimum solution was not found within the allowable QPU time. More reads and longer QPU calculation time will be expected to find the solution.
Overall, the method proposed in this research constitutes a proof-of-concept in using QA for discrete structural optimization. The concepts of setting up a symbolic objective function, finding ways to simplify an objective function, and eventually translating a problem to a QUBO format may also be applicable to other optimization problems on QA, particularly when the qubit variables form part of the stiffness matrix and a fractional objective function needs to be evaluated. The identified challenges requires further research effort to improve its scalability onto larger problems.
Conceptualization, K. Wils and B. Chen; methodology, K. Wils and B. Chen; software, K. Wils; validation, K. Wils and B. Chen; formal analysis, K. Wils; investigation, K. Wils; resources, K. Wils; data curation, K. Wils; writing—original draft preparation, K. Wils; writing—review and editing, B. Chen; visualization, K. Wils; supervision, B. Chen; project administration, B. Chen; All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Not applicable.
Not applicable.
All Python and Matlab code produced during this research, as well as the results that were obtained, are publicly available <cit.>.
The authors would like to acknowledge the intellectual discussions with Mr. Giorgio Tosti Balducci, PhD candidate in the same department, during this project.
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
MDPI Multidisciplinary Digital Publishing Institute
DOAJ Directory of open access journals
QA Quantum Annealer
GPQC General Purpose Quantum Computer
QPU Quantum Processing Unit
QUBO Quadratic Unconstrained Binary Optimization
FEM Finite Element Method
SA Simulated Annealing
-0cm
References
|
http://arxiv.org/abs/2306.01607v3
|
20230602151412
|
Evolution of genuine states to molecular ones: The $T_{cc}(3875)$ case
|
[
"L. R. Dai",
"J. Song",
"E. Oset"
] |
hep-ph
|
[
"hep-ph"
] |
[][email protected]
School of Science, Huzhou University, Huzhou 313000, Zhejiang, China
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo.22085, 46071 Valencia, Spain
[][email protected]
School of Physics, Beihang University, Beijing, 102206, China
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo.22085, 46071 Valencia, Spain
[][email protected]
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo.22085, 46071 Valencia, Spain
We address the issue of the compositeness of hadronic states and demonstrate that starting with a genuine state of nonmolecular nature, but which couples to some meson-meson component to be observable in that channel, if that state is blamed for a bound state appearing below the meson-meson threshold it gets dressed with a meson cloud and it becomes pure molecular in the limit case of zero binding. We discuss the issue of the scales, and see that if the genuine state has a mass very close to threshold, the theorem holds, but the molecular probability goes to unity in a very narrow range of energies close to threshold. The conclusion is that the value of the binding does not determine the compositeness of a state. However, in such extreme cases we see that the scattering length gets progressively smaller and the effective range grows indefinitely. In other words, the binding energy does not determine the compositeness of a state, but the additional information of the scattering length and effective range can provide an answer. We also show that the consideration of a direct attractive interaction between the mesons in addition to having a genuine component, increases the compositeness of the state. Explicit calculations are done for the T_cc(3875) state, but are easily generalized to any hadronic system.
Evolution of genuine states to molecular ones: The T_cc(3875) case
E. Oset
July 31, 2023
====================================================================
§ INTRODUCTION
The dilemma between molecular states and genuine quark states is the subject of a continuous debate in hadron physics. Concretely, concerning the T_cc(3875) state there are works that support the T_cc as a molecular state of D D^* nature <cit.>, as well as others
that advocate a compact tetraquark nature <cit.>, while other works suggest a mixture of both components <cit.>.
In the present work we start with a genuine state which allows to be observed in some meson-meson components and prove that in the
limit of small binding the state becomes purely molecular. The issue of quark cores being dressed by molecular components is well
known and already discussed in the past concerning the nature of the “σ" meson (f_0(500) nowadays)<cit.>.
The dressing of a possible compact T_cc state with D D^* components
is also addressed in <cit.>.
We investigate in detail the scale, of what “small binding” means to claim a full molecular state, and
show that the binding itself does not allow one to conclude that a state is molecular. On the other hand we also show that
if a pure genuine state is associated to a weakly bound state, it results into a very small scattering length and very
large effective range for the meson-meson component, which indicates that the measurement of these magnitudes is extremely useful to find out the nature of the hadronic states. In this respect it is useful to call the attention to other works done in this direction.
In <cit.> the compositeness (molecular probability) of hadronic state is discussed in terms of the binding, but
the consideration of the range of the interaction has as a consequence a larger molecular components for the T_cc when
the range is changed from the long range of pion exchange to a shorter range of vector meson exchange. Probabilities of the
molecular component for only the D^0 D^*+ component are also evaluated in <cit.>. A more complete work considering
the scattering lengths and effective ranges, as well as the D^0 D^0 π^+ mass distribution of the experiment <cit.>, is done in <cit.> and
concludes that the sum of probabilities the D^0 D^*+ and D^+ D^*0 components, is compatible with unity, stressing the molecular nature of the state.
The value of the effective range and scattering length to determine the compositeness of a state has also been emphasized
from the very beginning in the pioneer work of Weinberg <cit.> under strict conditions of zero range interaction and
very small binding, but the first condition was released in a recent work <cit.> and both conditions were released in
the work of <cit.>, leading in both cases to strategies based on the knowledge of the binding, effective range and scattering length that improve considerably over the original formulas of <cit.> (see also <cit.>).
The formalism presented here and the conclusions are general, but we particularize to the study of the T_cc(3875) and
show that the large effective range and scattering length that one obtains assuming a genuine state to be responsible for the
T_cc binding are very far off from those already determined from the experimental study of this state.
§ FORMALISM
Let us assume that we have a hadronic state of bare mass m_R, not generated by the interaction of meson-meson components, for instance a compact quark state.
We assume that even if small, the state couples to one meson-meson component, where the effects of this state can be observed. We think from the beginning on the
T_cc(3875) and the D D^* component. To simplify the study we consider an I=0 state and just one channel, although the consequences are general and would apply
to the lowest threshold of the D^0 D^*+ component. This said, we can write for the D D^* amplitude the diagram of Fig. <ref> and
the D D^* amplitude of Eq. (<ref>).
t̃_D D^*,D D^* (s)=g̃^2/s-s_R
This amplitude is not unitarity. It is rendered unitary immediately by iterating the diagram of Fig. <ref> as shown in Fig. <ref>. What we are doing
with the diagram of Fig. <ref> is to insert the D D^* selfenergy in the propagator of Eq. (<ref>). We have then
t_D D^*,D D^*(s)=g̃^2/s-s_R-g̃^2 G_D D^*(s)
where G_D D^*(s) is the D D^* selfenergy which we choose to regularize with a sharp cutoff.
G_D D^*(s)= ∫_| q|<q_ maxd^3q/(2π)^3 ω_1 + ω_2/2 ω_1 ω_2 1/s-(ω_1 + ω_2)^2+iϵ
where ω_i = √(q^2 +m_i^2). The unitarity of the t_D D^*,D D^* amplitude is shown immediately by means of
Im t^-1= Im(s-s_R/g̃^2- G_D D^*(s)) =- Im G_D D^*(s) = k/8π√(s)
with k the meson-meson on shell momentum, k=λ^1/2(s,m_D^2,m_D^*^2)/(2√(s)).
Having g̃^2 positive and Re G_D D^*(s) negative,
one can see from Eq. (<ref>) that the
D D^* selfenergy is negative and moves the pole s_R of the bare resonance to lower energies. Let us assume that
g̃^2 is such that the bare state R, conveniently dressed with the D D^* selfenergy, is responsible for the
appearance of a pole at s_0, below the D D^* threshold. Since the D D^* selfenergy is negative, we take then s_R above the D D^* threshold. Studies of the tetraquark structure for the T_cc state provide in most cases
masses above that threshold, like the one of Ref. <cit.> which is 102 above the D^0 D^*+ threshold,
and which we take as reference.
The condition that a pole appears at s_0 is easily obtained from Eq. (<ref>) as
s_0-s_R-g̃^2 G_D D^* (s_0)=0 ,
which provides the value of g̃^2 needed to accomplish it.
The next step is to calculate the molecular probability. According to <cit.> the molecular
probability is obtained from
P = -g^2 ∂ G/∂ s|_s=s_0
where s_0 is the square of the mass of the physical state, which we assume to be below the threshold, as in the case of the T_cc(3875).
In Eq. (<ref>) g is the coupling of the state to the D D^* component and g^2 the residue of the
t_D D^*,D D^* matrix of Eq. (<ref>) at the pole. Thus
g^2 = lim_s → s_0 (s-s_0) g^2/s-s_R-g^2 G_D D^*(s)
= g^2/1-g^2 ∂ G/∂ s|_s=s_0
where in the last step we have used L'Hôpital rule. Then the molecular probability is
P=- g^2 ∂ G/∂ s/1-g^2 ∂ G/∂ s|_s=s_0
We can see several limits:
g̃^2 → 0 , P → 0 , the genuine state survives
g̃^2 →∞ , P → 1 , the state becomes pure molecular
s_0 → s_ th , P → 1 , the state becomes pure molecular
The third case is interesting, it is a consequence of unitarity and analyticity of the t and G functions. Indeed,
∂ G/∂ s→∞/_s_0 → s_ th, and then the 1 in the
denominator of Eq. (<ref>) can be neglected and P → 1. We can then state clearly that when the binding
energy goes to zero the state becomes fully molecular, the genuine component has been fagocitated by the
molecular component that assumes all the probability of the state.
This conclusion has also been reached before in <cit.>.
One might finish here, but there is the important issue of the scales.
In other words, what does s_0 → s_ th means in a real case, 10, 1, 10^-2? The answer to this
question is provided in the following section.
§ RESULTS FOR THE COMPOSITENESS AS A FUNCTION OF S_R
In Figs. <ref>, <ref>, <ref>, <ref> we show the results for the molecular probability P
of Eq. (<ref>) for different values of s_R, s_R=√(s_ th)+Δ√(s_R) with Δ√(s_R)=102,10,1,0.1,
as a function of s_0, the assumed value of the square of the energy of the bound state. In Fig. <ref> we observe that for
Δ√(s_R)=102, P goes indeed to 1 when s_0 → s_ th, as it should, but for s_0^ exp (√(s_0)=√(s_ th)-0.360)
P already has value around 0.9, depending a bit on the assumed value of q_ max, indicating that the
original genuine state has evolved to become practically a molecular state.
The case of Δ√(s_R)=10 is shown in Fig. <ref>. The trend is the same. P → 1 as s_0 → s_ th, but
for s_0^ exp the value of P is now smaller than before, of the order of 0.5.
We repeat the calculations for Δ√(s_R)=1 in Fig. <ref> and we see now the same trend of P when s_0 → s_ th.
However, the “scale" that we mentioned before shows up clearly since the change of P → 1 appears for values of √(s_0) -√(s_ th) of the order of
10^-1. For s_0^ exp the value of P is smaller than 0.15, indicating that the state remains mostly nonmolecular.
The results with the extreme case of Δ√(s_R)=0.1 further illustrate the point since now P → 1 in an extremely narrow region of
s_0 → s_ th and at s_0 the value of P is smaller than 0.05. The state is basically nonmolecular in nature.
The results shown above indicate that the value of the binding energy by itself cannot give a proof of the nature of the state.
Even if a state is very close to threshold, a genuine state with energy very close to threshold can reproduce the binding with a negligible probability of
molecular component. It is important to state this fact because intuitively, a bound state very close to a threshold of a pair of particles is often interpreted
as been a molecular state of that pair.
This said, let us see what other magnitudes can really tell us about the nature of the state.
§ SCATTERING LENGTH AND EFFECTIVE RANGE
The relationship of the scattering matrix t with the one used in Quantum Mechanics is given by
t=-8 π√(s) f^ QM≃ -8 π√(s) 1/-1/a + 1/2 r_0 k^2-ik
then
t^-1=-1/8 π√(s)(-1/a+ 1/2 r_0 k^2-ik)
Note that Im t^-1 given by - Im G_D D^*(s) in Eq. (<ref>) provides indeed the imaginary part of the right hand side of Eq. (<ref>), the token of unitarity in the amplitude that we are using. From
Eq. (<ref>) it is easy to induce
-1/a=s_ th-s_R/g̃^2- Re G_D D^* (s_ th)
1/2 r_0=∂/∂ k^2{(-8 π√(s))(s-s_R/g̃^2 -
Re G_D D^* (s))}|_s=s_ th
or
r_0= 2 √(s)/μ∂/∂ s{(-8 π√(s))(s-s_R/g̃^2 -
Re G_D D^* (s))}|_s=s_ th
with μ the reduced mass of the D, D^* mesons with μ=m_D m_D^*/(m_D +m_D^*).
In Table <ref> we show the results of a and r_0 as a function of Δ√(s_R) when the state is bound at s_0^ exp. What we obtain is that as Δ√(s_R) becomes smaller, decreasing the
molecular probability, the scattering length becomes smaller and smaller and the effective range grows indefinitely. The values obtained for Δ√(s_R)=0.1, where the molecular component is small, less than 0.05,
are of the order of 0.61-0.87 for the scattering length, and of the order of -114 -(-168). Even for Δ√(s_R)=1 where the molecular probability would be of the order of 15%, the scattering lengths
are in the range of 1.56-2.1 and the effective range from -56.7 -(-38.2). The lesson we draw from there is that
the values of a and r_0 are very useful to determine the molecular probability of the state. The numbers mentioned before are in sheer disagreement from those obtained experimentally in <cit.>, which are of the order of a∼ 6-7, r_0∼ -3.9 for the D^0 D^*+ channel. Let us stress once more that in the work of <cit.> the scattering length and effective range of the D^0 D^*+, D^+ D^*0 channel, together with the
D^0 D^0 π^+ mass spectrum, were analyzed allowing both a molecular and a genuine component and it was concluded that
the state was 100% molecular within the small uncertainties of the analysis. The present work offers a broad perspective on why that conclusion was obtained.
§ MIXTURE OF COMPACT AND MOLECULAR COMPONENTS
So far we have just started from a pure nonmolecular state and we show that the dressing with the meson-meson
cloud renders the state molecular in the limit of a small binding. The pure molecular states are obtained starting
with an energy independent potential V between the particles of the meson pair, with the scattering amplitude
becoming
T=V/1-V G
If we have a mixture of the genuine state and the molecular one, this can be accounted for by taking a
potential
V'=V+g̃^2/s-s_R
It is easy to generalize the probability P to this case and we find
P=- [g^2+(s-s_R)V ] ∂ G/∂ s/1- [g^2+(s-s_R)V ] ∂ G/∂ s-VG |_s=s_0
The pole at s_0 appears now when
s_0-s_R-[g^2+(s_0-s_R) V ] G(s_0)=0
We conduct now a new test. We take a potential V short of binding, meaning that by itself would have
1-V G(s) of the denominator of Eq. (<ref>) at the threshold s=s_ th. Hence
1-V G (s_ th)=0
We compare this potential with the one we obtain from the local hidden gauge approach
<cit.>
V =β V_ LHG =β (-1) 1/2 g'^2 [3s-(M^2+m^2+M'^2+m'^2)-1/s(M^2-m^2)(M'^2-m'^2)] 1/M^2_ρ
with g'=M_V/2 f (M_V=800, f=93) and M,m the masses of D^* and D, and
the same for M',m'. We obtain
β=0.74 for q_ max=450
β=0.52 for q_ max=650
Since V is short of binding, we allow the nonmolecular component, the term g̃^2/(s-s_R) to be responsible for the binding. Then we obtain the results of P shown in Table <ref>.
For different values of Δ√(s_R) what we find is that if in addition to the genuine state we add some
potential between the D,D^* strong enough, but not enough to bind by itself, the effect of it is that it increases the molecular probability bringing it close to unity.
We also observe the feature that the bigger the value of s_R, the smaller is the relative increase in the compositeness (see also
similar results in related studies in connection with lattice QCD data <cit.>).
What one concludes from here is that if one has a state close to threshold of a pair of particles and there is some attractive interacting potential between these particles, the
chance that the state is a molecular state increases appreciably. Certainly, if the potential is enough to bind by itself one does not need a nonmolecular component, but what we see is that even if it exists it does not change
the fate of the state turning molecular. Yet, the complement of the scattering length and effective range,
as well as mass distribution close to threshold, help finally to make a precise determination of the molecular probability of the state.
§ CONCLUSIONS
In this work we have addressed the issue of the dressing of an elementary, or genuine state, by meson components and how this genuine state can eventually turn into a pure mesonic molecular state due to this meson cloud. For this purpose we start from a state which is purely genuine, let us say for instance a compact quark state, which has a certain coupling to a meson-meson component, such that its effects can be observed in this meson-meson channel. Then we demand that this state becomes a bound state below the meson-meson threshold and then determine the probability that the state has become molecular. We demonstrate that when the binding energy of the state goes to the meson-meson threshold, the state becomes 100% molecular. Yet, the important issue is the scale of energies where this happens. We discuss the issue in detail. For this purpose we show the molecular probability as a function of the binding energy for different values of the genuine state mass, M_R. We observe that if M_R is far away from the meson-meson threshold, then the bound state goes fast to being molecular as we approach the threshold. However, as M_R gets closer to the meson-meson threshold the theorem holds equally but the probability goes only to 100% at energies extremely close to threshold, such that even for states bound by 0.360 MeV, like the T_cc(3875), the molecular probability can be very small. The conclusion is that the proximity of a state to a threshold is not a guaranty that the state is of molecular nature. However, there is a consequence of having the genuine state responsible for the state found, because the scattering length becomes gradually smaller and the effective range grows indefinitely and reaches unphysical values for a case like the T_cc(3875) mentioned above. Indeed, we find that if one demands that the T_cc(3875) is a genuine, nonmolecular state, the scattering length and effective range obtained are in sheer disagreement with data. The conclusion is then that the binding, together with measurements of the scattering length and effective range can provide an answer to the compositeness of a state, but not the binding alone.
We also show that if we have a mixture of a genuine state and an additional direct attractive interaction between the mesons, the state becomes more molecular for the same mass M_R of the genuine state. Certainly, with enough attraction, one can generate the state without the need of an extra genuine component.
The present work brings light to the continuous debate over the nature of hadronic states and provides a perspective on issues discussed before in the Literature, on the relevance of the scattering length and effective range, or mass distributions close to threshold, to determine the compositeness of hadronic states. Although we have particularized the calculations for the case of the T_cc(3875), the results and conclusions are general and the method employed in the analysis can be easily extrapolated to any other hadronic cases.
§ ACKNOWLEDGMENTS
This work is partly supported by the National Natural Science Foundation of China
under Grants Nos. 12175066, 11975009, 12247108 and the China Postdoctoral Science Foundation under Grant No. 2022M720359.
This work is also partly supported by the Spanish Ministerio de
Economia y Competitividad (MINECO) and European FEDER funds under Contracts No. FIS2017-84038-C2-1-P
B, PID2020-112777GB-I00, and by Generalitat Valenciana under contract PROMETEO/2020/023. This project has
received funding from the European Union Horizon 2020 research and innovation programme under the program
H2020-INFRAIA-2018-1, grant agreement No. 824093 of the STRONG-2020 project.
This research is also supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP)
which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
under Germany's Excellence Strategy-EXC-2094 -390783311.
23 X. Z. Ling, M. Z. Liu, L. S. Geng, E. Wang, J. J. Xie, Phys. Lett. B 826 (2022) 136897
24 X. K. Dong, F. K. Guo, B. S. Zou, Commun. Theor. Phys. 73 (2021) 125201
25 A. Feijoo, W. H. Liang, E. Oset, Phys. Rev. D 104 (2021) 114015
26 S. Fleming, R. Hodges, T. Mehen, Phys. Rev. D 104 (2021) 116010
18 H. Ren, F. Wu, R. Zhu,
Adv. High Energy Phys. 2022 (2022) 9103031
19 K. Chen, R. Chen, L. Meng, B. Wang, S. L. Zhu,
Eur. Phys. J. C 82 (2022) 581
20 M. Albaladejo,
Phys. Lett. B 829 (2022) 137052
21
M. L. Du, V. Baru, X. K. Dong, A. Filin, F. K. Guo, C. Hanhart, A. Nefediev, J. Nieves, Q. Wang,
Phys. Rev. D 105 (2022) 014024
22
V. Baru, X. K. Dong, M. L. Du, A. Filin, F. K. Guo, C. Hanhart, A. Nefediev, J. Nieves, Q. Wang,
Phys. Lett. B 833 (2022) 137290
23bis N. Santowsky, C. S. Fischer, Eur. Phys. J. C 82 (2022) 313
24bis C. Deng, S. L. Zhu,
Phys. Rev. D 105 (2022) 054015
25bis
H. W. Ke, X. H. Liu, X. Q. Li,
Eur. Phys. J. C 82 (2022) 144
26bis
S. S. Agaev, K. Azizi, H. Sundu,
JHEP 06 (2022) 057
27 Y. Kamiya, T. Hyodo, A. Ohnishi, Eur. Phys. J. A 58 (2022) 131
28 L. Meng, B. Wang, G. J. Wang, S. L. Zhu, Phys. Rep. 1019 (2023) 1
29 L. M. Abreu, Nucl. Phys. B 985 (2022) 115994
30 S. Chen, C. Shi, Y. Chen, M. Gong, Z. Liu, W. Sun, R. Zhang,
Phys. Lett. B 833 (2022) 137391
31
M. Albaladejo, J. Nieves, Eur. Phys. J. C 82 (2022) 724
32 F. Z. Peng, M. S. Sánchez, M. J. Yan, M. P. Valderrama,
Phys. Rev. D 105 (2022) 034028
entem P. G. Ortega, J. Segovia, D. R. Entem, F. Fernández,
Phys. Lett. B 841 (2023) 137918
27bis J. Carlson, L. Heller, J. A. Tjon, Phys. Rev. D 37 (1988) 744
28bis B. Silvestre-Brac, C. Semay, Z. Phys. C 57 (1993) 273
29bis C. Semay, B. Silvestre-Brac, Z. Phys. C 61 (1994) 271
30bis S. Pepin, F. Stancu, M. Genovese, J. M. Richard,
Phys. Lett. B 393 (1997) 119
32bis J. l. Ballot, J. M. Richard, Phys. Lett. B 123 (1983) 449
33bis S. Zouzou, B. Silvestre-Brac, C. Gignoux, J. M.
Richard, Z. Phys. C 30 (1986) 457
taoguo Tao Guo, Jianing Li, Jiaxing Zhao, and Lianyi He, Phys. Rev. D 105 (2022) 014021
wzgang Qi Xin, Zhi-Gang Wang, Eur. Phys. J. A 58 (2022) 110
yanparon
M. J. Yan, M. P. Valderrama,
Phys. Rev. D 105 (2022) 014007
rosina D. Janc, M. Rosina, Few Body Syst. 35 (2004) 175
beveren1 E. van Beveren, T. A. Rijken, K. Metzger, C. Dullemond, G. Rupp, J. E. Ribeiro, Z. Phys. C 30 (1986) 615
beveren2 E. van Beveren, D. V. Bugg, F. Kleefeld, G. Rupp,
Phys. Lett. B 641 (2006) 265
tronqvist N. A. Törnqvist, M. Roos, Phys. Rev. Lett. 76 (1996) 1575
juanalba M. Albaladejo, J. Nieves, L. Tolos, Phys. Rev. C 104 (2021) 035203
hyodotom T.Kinugawa, T. Hyodo, arXiv: 2303.07038 [hep-ph]
mishacom R. Aaij et al., (LHCb Collaboration), Nature Phys. 18 (2022) 751
lhcbmisha R. Aaij et al. (LHCb Collaboration), Nature Commun. 13 (2022) 3351
ourwork L. R. Dai, L. M. Abreu, A. Feijoo, E. Oset, arXiv:2304.01870 [hep-ph]
weinberg S. Weinberg, Phys. Rev. 137 (1965) B672
juancompo
M. Albaladejo, J. Nieves,
Eur. Phys. J. C 82 (2022) 724
daisongcompo J. Song, L. R. Dai, E. Oset, Eur. Phys. J. A 58 (2022) 133
fkguo Yan Li, Feng-Kun Guo, Jin-Yi Pang, Jia-Jun Wu, Phys. Rev. D 105 (2021) L071502
baru V. Baru, J. Haidenbauer, C. Hanhart, Yu. Kalashnikova, A. Kudryavtsev, Phys. Lett. B 586 (2004) 53
kinu T. Kinugawa and T. Hyodo, EPJ Web Conf. 262 (2022) 01019
quigg E. J. Eichten, C. Quigg, Phys. Rev. Lett. 119 (2017) 202002
hyodoijmp T. Hyodo, Structure and compositeness of hadron resonances,
Int. J. Mod. Phys. A 28 (2013) 1330045
danijuan D. Gamermann, J. Nieves, E. Oset, E. Ruiz. Arriola, Phys. Rev. D 81 (2010) 014029
sazdjian Hagop Sazdjian, Symmetry 14 (2022) 515
h1 M. Bando, T. Kugo, K. Yamawaki, Phys. Rept. 164 (1988) 217
h2 M. Harada, K. Yamawaki, Phys. Rept. 381 (2003) 1
h3 U. G. Meissner, Phys. Rept. 161 (1988) 213
h4 H. Nagahiro, L. Roca, A. Hosaka, E. Oset, Phys. Rev. D 79 (2009) 014015
raq F. Gil-Domínguez, R. Molina, arXiv: 2306.01848
|
http://arxiv.org/abs/2306.02997v1
|
20230605160809
|
Spectra of Quotient Modules
|
[
"Michael Didas",
"Jörg Eschmeier",
"Michael Hartz",
"Marcel Scherer"
] |
math.FA
|
[
"math.FA",
"Primary 47A13, Secondary 47A10, 47A45"
] |
Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Oktavie-Allee, 66687 Wadern, Germany
[email protected]
Fachrichtung Mathematik, Universität des Saarlandes, 66123 Saarbrücken, Germany
[email protected]
Fachrichtung Mathematik, Universität des Saarlandes, 66123 Saarbrücken, Germany
[email protected]
MH and MS were partially supported by the Emmy Noether Program of the German Research Foundation (DFG Grant 466012782).
Fachrichtung Mathematik, Universität des Saarlandes, 66123 Saarbrücken, Germany
[email protected]
[2010]Primary 47A13; Secondary 47A10, 47A45
Spectra of Quotient Modules
Marcel Scherer
July 31, 2023
===========================
We determine the Taylor spectra of quotient tuples of the d-shift on Drury-Arveson spaces
with finite-dimensional coefficient spaces.
We show the the Taylor spectrum can be described in terms of the approximate zero
set of the annihilator ideal, and in terms of the pointwise behavior of the inner multiplier
associated with the quotient tuple.
§ INTRODUCTION AND MAIN RESULTS
Let be a complex Hilbert space and let M ∈(M_z, H^2_d()) be a closed invariant subspace for the tuple M_z=(M_z_1,…, M_z_d) ∈ L(H^2_d())^d consisting of the multiplication operators with the coordinate functions on the -valued Drury-Arveson space H^2_d() over the Euclidean unit ball _d ⊂^d.
Quotient tuples of the form M_z/M on H^2_d(𝒟) / M appear as model operators for pure commuting
row contractions; see Section <ref> below for more details.
The aim of this note is to provide descriptions of the Taylor spectrum of the quotient tuple M_z /M in the case of finite-dimensional .
Towards a precise formulation of our main result, we define the approximate zero set of a function f:_d → to be
(f) = {λ∈𝔹_d; lim inf_z →λ |f(z)| = 0 }.
Equivalently, λ∈(f) if and only if there exists a sequence (z_k)_k ≥ 0 in 𝔹_d
such that lim_k →∞ z_k = λ and lim_k →∞ f(z_k) = 0.
Moreover, with each M ∈(M_z, H^2_d()) we associate a closed ideal
I(M) = { f ∈(H^2_d); fH^2_d() ⊂ M }
of the multiplier algebra (H^2_d). It turns out that the Taylor spectrum in the quotient module H^2_d(𝒟)/M can be expressed in terms of this so-called annihilator ideal I(M):
Let 𝒟 be a finite-dimensional complex Hilbert space and M ∈(M_z,H^2_d()). Then, for the Taylor spectrum of the tuple induced by M_z on H^2_d()/M, we have
σ(M_z,H^2_d()/M) = ⋂_f ∈ I(M)(f).
Moreover, the Taylor spectrum and the right spectrum coincide in this case.
This result is motivated by recent work of Clouâtre and Timko <cit.>, which in particular contains
the equality of the Taylor spectrum and the approximate zero set in the case of one-dimensional 𝒟.
The inclusion “⊂” for finite-dimensional 𝒟 can be deduced from the results of Clouâtre and Timko (see the discussion preceding Proposition <ref> below for details). This relies on the corona theorem for H^2_d. Related spectral inclusion theorems can also be found in <cit.>.
For the reverse inclusion, we establish an alternative description of the spectrum in terms of
an operator-valued multiplier that generates M. To be more precise, if M ∈(M_z, H^2_d()), then by the McCullough-Trent version of Beurling's invariant subspace theorem (Theorem 4.1 in <cit.>), there exist a Hilbert space and a holomorphic multiplier θ: _d → L(, ) from H^2_d(ℰ) to
H^2_d(𝒟) such that
M = θ H^2_d()
and θ is inner, which means by definition that the induced multiplication operator M_θ: H^2_d() → H^2_d() is a partial isometry. A result of Greene, Richter and Sundberg (Theorem 3.2 in <cit.>) then guarantees that for almost every z ∈∂_d the non-tangential boundary value θ(z) ∈ L(, ) exists (in the SOT) and is a partial isometry.
To formulate our result appropriately, we need the following generalized notion of pointwise surjectivity for operator-valued maps:
Given a holomorphic operator-valued map θ: _d → L(, ), we say that θ is surjective at λ∈_d if either λ∈_d and θ(λ)=, or λ∈∂_d and there exists an extension of θ to a holomorphic map
θ: U → L(, ) on some open set U ⊃_d ∪{λ}
such that θ(λ) =. Our proof of the reverse inclusion “⊃” in Theorem <ref> relies on the following result of independent interest.
Let 𝒟 be a finite-dimensional Hilbert space, M ∈ Lat(M_z,H^2_d(𝒟)) and
θ:_d → L(ℰ,𝒟) an inner multiplier from H^2_d(ℰ) to
H^2_d(𝒟) with M = θ H^2_d(ℰ). Then
σ(M_z,H^2_d(𝒟)/M) = {λ∈𝔹_d; θλ}.
The inclusion “⊂” is established in Proposition <ref>.
The reverse inclusion is finally settled as Corollary <ref>.
The main ingredients in the proof are a result of Greene <cit.> (to handle the part inside _d) and structure theory of pure row contractions (in particular their characteristic function <cit.>) applied to T = P_M^M_z|M^. We will also see that the Taylor spectrum σ(M_z, H^2_d(𝒟)/M) agrees
with the right spectrum σ_r(M_z, H^2_d( 𝒟)/M).
Note that the set appearing on the right-hand side in the statement of the preceding theorem extends the classical notion of support of an inner function θ: → on the unit disc: Recall that λ∈ belongs to supp(θ) if either θ(λ) = 0 or θ does not holomorphically extend across λ. This concept has also been one of the starting points for <cit.>, but was generalized in another direction there.
In the scalar-valued case =, the set of points λ∈_d where θ(λ) : → is not surjective, is easily seen to coincide with the common zero set Z(M) of all functions in M, thus the statement of Theorem <ref> then specializes to
σ(M_z, H^2_d/M) = Z(M) ∪ S(M)
with S(M)={λ∈∂_d; θ not surjective at λ}.
The fact that σ(M_z/M) ∩_d = Z(M) was first observed by Gleason, Richter and Sundberg in <cit.>.
It was conjectured in <cit.> that S(M) = {λ∈∂_d; lim inf_z→λθ(z) = 0}.
This equality would follow if the corona theorem of Costea, Sawyer and Wick <cit.>
held for bounded row multipliers. Since this is not known,
we must leave the question of Gleason, Richter and Sundberg open here.
§ CALCULATING THE SPECTRUM INSIDE B_D
We begin by setting up the necessary notation from multivariable spectral theory. Let T∈ L(X)^d be a commuting d-tuple of operators on a complex Banach space X.
We write K_∙(T,X) for the Koszul complex
0 ⟶Λ^d(X)
δ_d,T⟶Λ^d-1(X)
δ_d-1,T⟶…δ_2,T⟶Λ^1(X)
δ_1,T⟶Λ^0(X)
⟶ 0
consisting of the spaces K_p(T,X) = Λ^p(X) = X ⊗⋀^p^d ≅ X^dp and the boundary maps defined by the formula
δ_p,T(x ⊗ e_I) = ∑_α=1^p (-1)^α-1 T_i_αx ⊗ e_I_α (x ∈ X, e_I = e_i_1∧…∧ e_i_p),
where I=(i_1, …, i_p) ∈^p is a multi-index with i_1< i_2 < … < i_p.
Here ⋀^p^d stands for the p-fold exterior product of ^d with itself, (e_1, …, e_d) is the standard basis of ^d and the multi-index I_α∈^p-1 arises from I ∈^p by dropping the α-th entry.
The Taylor spectrum of T (and its various subsets) are explained in terms of the homology groups of K_∙(T,X),
H_p(T,X) = δ_p,T / δ_p+1,T (p=0, …, d).
The Taylor spectrum of T is defined as the set of points in ^d for which the Koszul complex of λ-T is not exact, i.e.,
σ(T) = {λ∈^d; H_p(λ-T,X) ≠ 0 for some p ∈{1, …, d}},
where λ - T stands for the operator tuple with entries λ_i· 1_X - T_i (1≤ i ≤ d).
It is well known that σ(T) ⊂^d is compact.
As usual, we write ρ(T) = ^d∖σ(T) for the resolvent set.
A particular role for our calculations is played by the right spectrum
σ_r(T) = {λ∈^d; H_0(λ-T,X) ≠ 0 }.
The right essential spectrum σ_re(T) consists of all λ∈σ_r(T) where even H_0(λ-T,X) = ∞.
Note that, modulo the identifications Λ^0(X) ≅ X and Λ^1(X)≅ X^d, we have
δ_1,λ-T (x_i)_i=1^d = ∑_i=1^d (λ_i - T_i)x_i ((x_i)_i=1^d ∈ X^d),
and therefore H_0(λ-T,X) ≅ X / ∑_i=1^d (λ_i - T_i)X. Similarly, up to isomorphy, δ_d,T acts as
δ_d,T x = (T_ix)_i=1^d (x ∈ X),
and hence H_d(λ-T,X) ≅⋂_i=1^d (λ_i-T_i).
We recall a result of Devin Greene <cit.> which leads to a description of the points in the Taylor spectrum
of the quotient tuple M_z/M in 𝔹_d.
This result relates the homology of the Koszul complex of a multiplication tuple to the
homology of a localized resolution.
Let be complex Hilbert space. Given an M_z-invariant subspace M ∈(M_z, H^2_d()), we apply the McCullough-Trent version of Beurling's invariant subspace theorem (Theorem 4.1 in <cit.>) inductively to obtain Hilbert spaces _i (i≥0) starting with _0 = together with multipliers θ_i ∈(H^2_d(_i), H^2_d(_i-1)) for i≥ 1 such that the induced multiplication operators form an exact sequence
…⟶ H^2_d(_2) M_θ_2⟶
H^2_d(_1) M_θ_1⟶
H^2_d() q⟶ H^2_d()/M → 0.
Localizing the right-truncated sequence to a point λ∈_d, we obtain a complex
…⟶_2 θ_2(λ)⟶_1 θ_1(λ)⟶⟶ 0
denoted by (_∙, θ_∙(λ)).
The following result is due to Greene <cit.>. For completeness sake,
we indicate a shortened version of the original proof based on standard homological algebra.
Given λ∈_d and M ∈(M_z, H^2_d()) for some complex Hilbert space , there are vector space isomorphisms
H_p(λ-M_z, H^2_d()/M) ≅ H_p(_∙, θ_∙(λ)) (p≥ 0).
Let _λ : H^2_d() →, f ↦ f(λ), denote the point evaluation at λ. It is well known that the augmented Koszul complex
K_∙(λ-M_z, H^2_d()) _λ⟶⟶ 0
is exact in the case = (see, e.g., <cit.>). Since tensoring with 1_ preserves exactness, it remains exact in the general case.
We consider the double complex K =(K_p,q, ∂', ∂”)
with spaces
K_p,q = K_q(λ - M_z,H^2_d(𝒟_p)), p-th row (K_p,∙,∂”_∙)
equal to (-1)^p times the augmented
Koszul complex of the commuting
tuple λ - M_z ∈ L(H^2_d(𝒟_p))^d
and q-th column (K_∙,q,∂'_∙) given by the
nq-fold direct sum of the complex (H^2_d(𝒟_∙),M_θ_∙),
respectively (𝒟_∙(θ_∙(λ)) as the last column:
⋮[d, ""] ⋮[d, ""]
(-1) · K_∙(λ-M_z, H^2_d(_2)) [d, "θ_2"] [r,"-_λ"] _2 [r,""] [d,"θ_2(λ)"] 0
(-1)· K_∙(λ-M_z, H^2_d(_1)) [d, "θ_1"] [r,"_λ"] _1 [r,""] [d,"θ_1(λ)"] 0
(-1)· K_∙(λ-M_z, H^2_d()) [d, "q"] [r,"-_λ"] [r,""] [d,"0"] 0
(-1)· K_∙(λ-M_z, H^2_d()/M) [r, ""] [d, ""] 0
0
Then K is a double complex with anti-commuting squares and bounded diagonals, and all but the last column and all but the last row are exact. In this setting,
standard double complex arguments (Lemma A2.6 in <cit.>) show that there are induced vector space isomorphisms
H_p(λ-M_z,H^2_d()/M) ≅ H”_p H'_0(K) ≅ H'_p H”_0(K) = H_p(_∙, θ_∙(λ)),
as we claimed.
As an immediate consequence, we have:
Let be a complex Hilbert space, M ∈(H^2_d()), and θ :_d → L(, ) be an inner multiplier from H^2_d() to H^2_d() with M=M_θ H^2_d. Then we have
σ_r(M_z, H^2_d()/M) ∩_d = {λ∈_d : θ(λ)≠}.
Moreover, if 𝒟 is finite-dimensional, then σ_re(M_z, H^2_d(𝒟) / M) ⊂∂𝔹_d.
Note that we may choose θ_1 = θ and _1 = in the preceeding theorem to obtain
for λ∈𝔹_d that
H_0(λ - M_z, H^2_d(𝒟) / M) ≅ H_0(𝒟_∙, λ_∙(λ))
≅𝒟/(θ(λ) ℰ).
This implies the statement for the right-spectrum, as well
as σ_re(M_z, H^2_d(𝒟) / M) ∩𝔹_d = ∅
if 𝒟 is finite-dimensional.
§ UPPER ESTIMATES FOR THE SPECTRUM
The aim of this section is to provide a proof of both inclusions “⊂” from the statements of our main Theorems <ref> and <ref>.
As a preparatory result, we state the following observation which can be seen as a partial extension of a result of Sz.-Nagy and Foiaş (Theorem VI.5.2 in <cit.>) to the multivariable case.
Let be a finite-dimensional Hilbert space with orthonormal basis (d_1,…,d_N) and
M ∈ Lat(M_z,H^2_d(𝒟)) a closed invariant subspace. Let ℰ be a Hilbert space
and θ: _d → L(ℰ,𝒟) a multiplier from H^2_d()
into H^2_d() with θ H^2_d() ⊂ M. Fix vectors e_1,…,e_N ∈
and denote by Θ = (θ_ij)_1 ≤ i,j ≤ N∈ M_N((H^2_d)) the matrix whose
coefficients are determined by
θ(z)e_j = ∑^N_i=1θ_ij(z)d_i (j=1,…,N, z ∈_d).
Then (Θ) ∈ I(M). If λ∈_d is a point such that θ is surjective at λ,
then there is a multiplier f ∈ I(M) with lim_z→λ
z∈_df(z) = 1.
Choose a matrix R = (r_ij)_1 ≤ i,j ≤ N∈ M_N((H^2_d)) such that
Θ R = (Θ)· 1_N. (It is standard linear algebra that R can be obtained pointwise as the transpose of the so-called cofactor matrix C of Θ, whose components consist – except for the sign – of the determinants of all possible (N-1)× (N-1) submatrices of Θ.)
To prove the first assertion, we may suppose that (Θ) does not vanish identically on _d.
Then the vectors e_1,…,e_N form a basis of their linear span . It is elementary to check that the composition of operators
R: H^2_d(𝒟) → H^2_d(), ∑_i=1^N f_i d_i ↦∑_i=1^N(∑_j=1^N r_ij f_j)e_i
and
θ: H^2_d() → H^2_d(), ∑_i=1^N g_i e_i ↦∑_i=1^N(∑_j=1^N θ_ij g_j)d_i = θ∑_j=1^N g_j e_j.
satisfies (Θ) f = θ R f ∈ M for all f ∈ H^2_d(), i.e., (Θ) ∈ I(M), as desired.
For the remaining part of the assertion, fix λ∈_d and a holomorphic extension θ: U → L(,) of θ to U⊃_d ∪{λ} such that θ(λ) =.
Then there are vectors e_1,…,e_N ∈ with
θ(λ)e_j = d_j (j=1,…,N).
Let Θ=(θ_ij)_1≤ i,j ≤ N be the matrix formed as above with respect to the vectors e_1,…,e_N chosen in this way. Then Θ, viewed as a map _d → M_N(), continuously extends to U and satisfies
lim_z →λ
z∈_dΘ(z) = 1_N.
Hence f = (Θ) defines a function in I(M) as in the statement of the lemma.
Now we prove the announced inclusions. The first one can be deduced from a result of Clouâtre and Timko <cit.> that depends on the corona theorem for H^2_d due to Costea, Sawyer and Wick <cit.>.
Alternatively, we can argue directly with the help of the corona theorem.
Let be a finite-dimensional Hilbert space and
M ∈ Lat(M_z,H^2_d(𝒟)) a closed invariant subspace. Let ℰ be a Hilbert space
and θ: _d → L(ℰ,𝒟) an inner multiplier from H^2_d()
into H^2_d() with θ H^2_d() ⊂ M. Then we have the inclusions
σ(M_z,H^2_d()/M) ⊂⋂_f ∈ I(M)(f) ⊂{λ∈𝔹_d; θλ}
Note that the second inclusion readily follows from the preceding lemma which says that, if λ does not belong to the set on the right-hand side, then there is a function f ∈ I(M) with λ∉AZ(f).
Towards a proof of the first inclusion, let λ∉⋂_f ∈ I(M)(f).
Then there exists h ∈ I(M) with λ∉(h), hence
inf_z ∈𝔹_d∑_i=1^d |λ_i - z_i| + |h(z)| > 0.
By the corona theorem for H^2_d <cit.>, there exist f_1,…,f_d,f ∈ℳ(H^2_d) such that ∑_i=1^d (λ_i - z_i) f_i + f h = 1. Let g = f h ∈ I(M).
From the very definition of I(M) it follows that M_g/M = 0 and hence
∑_i=1^d (λ_i - M_z_i/M) (M_f_i/M) = 1_H^2_d(𝒟)/M.
Lemma 2.2.4 in <cit.> shows that λ∉σ(M_z,H^2_d(𝒟)/M) as desired.
§ LOWER ESTIMATE FOR THE SPECTRUM AND PROOF OF MAIN RESULTS
In view of Proposition <ref>, both Theorems 1 and 2 will follow as soon as we can show the missing inclusion
{λ∈𝔹_d; θλ}⊂σ(M_z/M).
To achieve this, we make use the characteristic function θ_T of the pure row contraction T = P_H M_z|H ∈ L(H)^d where H = M^⊥.
Let us first establish the necessary notations and recall some basic facts.
Let be a complex Hilbert space, and let T∈ L()^d be a commuting row contraction, which means by definition that T_1T_1^* + … + T_dT_d^* ≤ 1_ or, equivalently, that the row operator
T = [ T_1, …, T_d] : ^d →, (x_i)_i=1^d ↦∑_i=1^d T_ix_i,
is a contraction. Note that (modulo the canonical identifications) the row operator T:^d→ is nothing else than the boundary map δ_1,T in the Koszul complex of T. Similarly, the adjoint T^*:→^d acts as δ_d,T^*.
Following <cit.> we define the defect operators
D_T = (1_^d - T^*T)^1/2∈ L(^d) and
D_T^* = (1_ - TT^*)^1/2∈ L(),
and the respective defect spaces as
_T = D_T^d⊂^d and _T^* = D_T^*⊂.
The intertwining relations (Lemma 2.1 in <cit.>)
T D_T = D_T^* T and T^* D_T^* = D_T T^*
yield the inclusions T _T ⊂_T^* and T^* _T^*⊂_T. It is well known (Lemma 2.2 in <cit.>) that the so-called characteristic function of T defined by
θ_T : _d → L(_T, _T^*), θ_T(z) = -T +D_T^* (1_ - ZT^*)^-1 ZD_T
is an analytic function that induces a well-defined contractive multiplier
M_θ_T: H^2_d(_T) → H^2_d(_T^*), f ↦θ_T f.
Here, the symbol Z stands for row operator Z = [ z_1 1_, …, z_d 1_ ] : ^d → associated
with z=(z_1, …, z_d) ∈^d.
Details on characteristic functions of commuting row contractions and their properties can be found in <cit.>.
Fix z∈ρ(T) ∩∂_d. By the poynomial spectral mapping theorem for T^* applied to p(w) = 1- ∑_i=1^d z_iw_i ∈[w], we obtain
0 = 1-|z|^2 ∉{ 1 - ⟨ z, w⟩; w ∈σ(T) } = σ(1_ - ZT^*).
Hence the characteristic function θ_T of T
extends to a holomorphic map θ_T : U → L(_T, _T^*) given by the same formula that defines θ_T on the open set
U = { z ∈^d : 1_ -ZT^* invertible}⊃_d ∪ (ρ(T) ∩∂_d) ⊃_d ∩ρ(T).
If T ∈ L() is a single contraction
with ρ(T) ∩∂≠∅, then the values of the extended characteristic
function are unitary operators θ_T: 𝒟_T →𝒟_T^* for each point
λ∈ρ(T) ∩∂. In particular, _T = _T^* (see
Chapter VI.1 in <cit.>). In the multivariable case the situation is quite different. Nevertheless, we
obtain at least a partial result of the same type.
Let T ∈ L()^d be a commuting row contraction such that _T^* < ∞. Then the characteristic
function
θ_T: _d → L(_T,_T^*)
of T is surjective at every point λ∈_d ∩ρ(T).
Let U and θ_T: U → L(𝒟_T,𝒟_T^*) be defined as above and let
λ∈ U ∩ρ(T) be given. We show that θ_T(λ) is surjective.
Towards this, we first observe that
θ_T(z)^* = -T^* + D_TZ^*(1_ - TZ^*)^-1D_T^*∈ L(_T^*,_T)
for all z ∈ U and hence
D_T θ_T(z)^*
= (-T^* + (1_-T^*T)Z^*(1_-TZ^*)^-1)D_T^*
= (-T^* + Z^*(1_-TZ^*)^-1 - T^*(TZ^*)(1_-TZ^*)^-1)D_T^*
= (-T^* + Z^*(1_-TZ^*)^-1 + T^* - T^*(1_-TZ^*)^-1)D_T^*
= (Z^* - T^*)(1_-TZ^*)^-1D_T^*
for z ∈ U. If y = D_T^*x (x ∈), then
D_T θ_T(z)^*y = (Z^* - T^*)(1_-TZ^*)^-1(1_-T^*T)x
for z ∈ U. In particular, for z ∈ U ∩ρ(T), we have that Z^* - T^* ≅δ_d,z-T^* is injective, so for x∈, y = D_T^*x with
θ_T(z)^*y = 0, we obtain that
y ^2 = ⟨ D_T^*^2x,x⟩ = ⟨(1_-T^*T)x,x ⟩ = 0.
Hence the condition that _T^* < ∞ implies that θ_T(z)^* ∈ L(_T^*,_T)
is injective for z ∈ U ∩ρ(T). But then θ_T(z) ∈ L(_T,_T^*) is
surjective for z ∈ U ∩ρ(T).
Let T be a row contraction. We say that T is pure if the completely positive map
P_T: B() → B(), X ↦∑_i=1^d T_iXT_i^* associated with T satisfies
SOT-lim_m→∞P_T^m(1_H) = 0.
For a pure row contraction T, the map
j: → H^2_d(𝒟_T^*), j(x) = ∑_α∈ℕ^d| α|!/α! (D_T^*T^*αx) z^α
yields an isometry intertwining T^* ∈ L()^d and M^*_z ∈ L(H^2_d(𝒟_T^*))^d componentwise such that
M_θ_TM^*_θ_T+jj^*=1_H^2_d(𝒟_T^*),
see <cit.>.
Since M_θ_T is a partial isometry, this leads to the orthogonal direct sum decomposition
H^2_d(𝒟_T^*) = θ_TH^2_d(𝒟_T) ⊕ jℋ.
We will subsequently refer to the map j from above as the canonical dilation of T.
Let us return to our default setting now:
Define H = H^2_d(𝒟) ⊖ M and T = P_H M_z|H ∈ L(H)^d, which is known to be a pure row contraction.
In view of Proposition <ref>,
the following missing inclusion settles the proof of our main results from Section <ref>,
with the exception of the statement about the right spectrum.
Let 𝒟 be a finite-dimensional Hilbert space, M ∈ Lat(M_z,H^2_d(𝒟)) and
θ:𝔹_d→ L(ℰ,𝒟) an inner multiplier from H^2_d(ℰ) to
H^2_d(𝒟) with M = θ H^2_d(ℰ). Then
σ(M_z,H^2_d(𝒟)/M) ⊃{λ∈𝔹_d; θλ}.
By Corollary <ref>,
it suffices to show that
σ(T)∩∂_d ⊃{λ∈∂_d; θλ},
or equivalently, that θ is surjective at λ for all λ∈ρ(T)∩∂_d.
Towards this, fix such a point λ. Then, Theorem <ref> guarantees that the characteristic function θ_T is surjective at λ. The rest of the proof is about establishing a connection between θ_T and θ.
Let ℛ⊂ H^2_d(𝒟)
be the smallest reducing subspace for M_z with ℛ⊃ H. Then (see <cit.>)
ℛ =⋁_α∈ℕ^d z^α(ℛ∩𝒟) = H^2_d(ℛ∩𝒟).
Since the inclusion map i: H → H^2_d(ℛ∩𝒟) and the canonical dilation
j: H → H^2_d(_T^*) are both minimal dilations for T, there is a unitary operator
U: 𝒟_T^*→ℛ∩𝒟 such that 1 ⊗ U ∘ j = i; see <cit.>.
Define 𝒟 = 𝒟⊖ (ℛ∩𝒟). Then
H^2_d(𝒟) = H^2_d(𝒟) ⊖ H^2_d(ℛ∩𝒟) =
H^2_d(𝒟) ⊖ℛ⊂ M
is the largest reducing subspace for M_z contained in M. Note that
(1 ⊗ U)θ_T H^2_d(𝒟_T) = (1 ⊗ U) (H^2_d(𝒟_T^*)⊖ Imj)
= H^2_d(ℛ∩𝒟)⊖ H = M ∩ H^2_d(𝒟)^⊥.
Hence we obtain the orthogonal decomposition
M = H^2_d(𝒟) ⊕ (M ∩ H^2_d(𝒟)^⊥) =
H^2_d(𝒟) ⊕ (1 ⊗ U)(θ_T H^2_d(𝒟_T)).
The operator-valued map θ: 𝔹_d → L(𝒟⊕𝒟_T,𝒟),
θ(z) = 1_𝒟⊕ (U θ_T(z))
defines an inner multiplier from H^2_d(𝒟⊕𝒟_T) into H^2_d(𝒟) with
θ H^2_d(𝒟⊕𝒟_T) = M.
Since θ_T is surjective at λ, so is θ.
Known uniqueness results about inner multipliers show that there exists a partial isometry V: 𝒟⊕𝒟_T →ℰ such that θ(z) = θ(z) V
and θ(z) = θ(z) V^* for all z ∈𝔹_d;
see <cit.> or <cit.>.
The second equality shows that θ extends to a holomorphic function in a neighborhood of λ,
and the first equality then shows that θ is surjective at λ.
A general result from multivariable spectral theory (Corollary 3.5 in <cit.>) says that, for a commuting tuple T∈ L(H)^d with σ(T) ⊂_d, we have σ_r(T)∩∂_d = σ(T)∩∂_d. Moreover, by Theorem <ref> and Corollary <ref>, we have
σ(M_z, H^2_d(𝒟)/M) ∩𝔹_d = {λ∈𝔹_d; θ is not surjective at λ} = σ_r(M_z, H^2_d(𝒟)/M) ∩𝔹_d.
Therefore, in the setting of Theorem <ref>, we have
σ(M_z, H^2_d(𝒟)/M) = σ_r(M_z, H^2_d(𝒟)/M).
§ APPLICATIONS TO ROW CONTRACTIONS
Since every pure commuting row contraction T ∈ L(ℋ)^d is unitarily equivalent to a quotient tuple of the
form M_z/M ∈ L(H^2(𝒟_T^*)/M)^d, Theorem <ref> yields a description of the Taylor spectrum of
T in terms of its characteristic function.
Let T ∈ L(ℋ)^d be a pure commuting row contraction such that 𝒟_T^* < ∞. Then
σ(T)=σ_r(T)={λ∈𝔹_d; θ_T λ}.
Recall from the discussion following the proof of Theorem <ref> that since T is a pure row contraction, the map
j: ℋ→ H^2_d(𝒟_T^*), j(x) = ∑_α n∈ℕ^d| α|!/α! (D_T^*T^*αx) z^α
is an isometry intertwining T^* ∈ L(ℋ)^d and M^*_z ∈ L(H^2_d(𝒟_T^*))^d componentwise such that
M_θ_TM^*_θ_T+jj^*=1_H^2_d(𝒟_T^*).
Since M_θ_T is a partial isometry, this leads to the orthogonal direct sum decomposition
H^2_d(𝒟_T^*) = θ_TH^2_d(𝒟_T) ⊕ jℋ.
Define M = θ_T H^2_d(𝒟_T) ∈ Lat(M_z,H^2_d(𝒟_T^*)) and H=H^2_d(𝒟_T*)⊖ M. Then
via the unitary operator j: ℋ→ H the given tuple T ∈ L(ℋ)^d and the compression
P_H M_z|H≅ M_z/M ∈ L(H^2(𝒟_T^*)/M)^d are unitarily equivalent. Thus the assertion follows from
Theorem <ref> and Remark <ref>.
In the single variable case d = 1 there is a natural extension of the result stated in Corollary 8 to the case of
completely non-unitary contractions T ∈ L(ℋ) with no restriction on the defect space 𝒟_T^*
(Theorem VI.4.1 in <cit.>). At this moment it remains open whether Corollary <ref> remains true without the condition that
the defect space 𝒟_T^* is finite dimensional.
As a consequence of Corollary <ref> we obtain a dichotomy for pure commuting row contractions
T ∈ L(ℋ)^d with 𝒟_T^* < ∞ whose characteristic function extends to an open
neighbourhood of the closed ball _d.
Let T∈ L(ℋ)^d be a pure commuting row contraction such that 𝒟_T^*< ∞. Suppose that its characteristic
function extends to a holomorphic map θ_T: U → L(𝒟_T,𝒟_T^*) on an open set
U ⊃_d. Then either σ(T) = _d or ℋ < ∞ and
σ(T) ⊂_d is finite.
As seen in the proof of Corollary <ref> there is a closed invariant subspace
M = θ_TH^2_d(𝒟_T) ∈ Lat(M_z,H^2_d(𝒟_T^*)) such that T is unitarily equivalent
to the quotient tuple M_z/M∈ L(H^2(𝒟_T^*)/M)^d. Suppose that σ(T) ≠𝔹_d.
Since σ(T) is closed, Theorem <ref> shows there is
a point λ∈𝔹_d with θ_T(λ) 𝒟_T = 𝒟_T^*. Then Theorem 1.4 in
<cit.> implies that θ_T(λ) 𝒟_T = 𝒟_T^* for all λ∈∂𝔹_d.
Corollary <ref>, now shows that
σ(T) ⊂𝔹_d. On the other hand,
since 𝒟_T^* < ∞, Corollary <ref>
implies that σ_re(T) ⊂∂𝔹_d,
hence σ_re(T) = ∅.
Therefore, ℋ < ∞ (see e.g. Theorems 9 and 17 in <cit.>), and hence σ(T) ⊂𝔹_d is a finite set.
plainurl
|
http://arxiv.org/abs/2306.11187v1
|
20230619224443
|
Structural Gender Imbalances in Ballet Collaboration Networks
|
[
"Yessica Herrera-Guzmán",
"Eun Lee",
"Heetae Kim"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"cs.SI"
] |
1]Yessica Herrera-Guzmán
2]Eun Lee
3,4,*]Heetae Kim
0.8
[1]Research Center for Social Complexity, Universidad del Desarrollo, Chile.
[2]Department of Scientific Computing, Pukyong National University, Republic of Korea.
[3]Department of Energy Engineering, Korea Institute of Energy Technology, Republic of Korea.
[4]Data Science Institute, Faculty of Engineering, Universidad del Desarrollo, Chile
[*]Corresponding Author. Email: [email protected]@kentech.ac.kr
Structural Gender Imbalances
in Ballet Collaboration Networks
[
==============================================================
Ballet, a mainstream performing art predominantly associated with women, exhibits significant gender imbalances in leading positions.
However, the collaboration's structural composition on gender representation in the field remains unexplored.
Our study investigates the gendered labor force composition and collaboration patterns in ballet creations.
Our findings reveal gender disparities in ballet creations aligned with gendered collaboration patterns and women occupying more peripheral network positions respect to men.
Productivity disparities show women accessing 20-25% of ballet creations compared to men.
Mathematically derived perception errors show the underestimation of women artists' representation within ballet collaboration networks, potentially impacting women's careers in the field.
Our study highlights the structural disadvantages that women face in ballet and emphasizes the need for a more inclusive and equal professional environment to improve the career development of women in the ballet industry.
These insights contribute to a broader understanding of structural gender imbalances in artistic domains and can inform cultural organizations about potential affirmative actions towards a better representation of women leaders in ballet.
1
§ INTRODUCTION
One broadly investigated complex socioeconomic problem is global economic inequality <cit.>.
There is growing evidence that economic inequality limits the ability of artists, and of women artists in particular, to enjoy economic growth and to access leading positions in their careers <cit.>.
It is therefore becoming increasingly important to understand the social dynamics of gender inequalities in the arts.
In particular, ballet is widely recognized and appreciated around the world, and is commonly assumed to be a women-dominated profession <cit.>.
However, recent reports show considerable gender imbalances where men specifically dominate leading positions <cit.>.
The lack of women's representation as leaders in ballet has been widely discussed within the dance community, with calls for more equal professional opportunities <cit.>.
For example, data from American dance companies reveals the unequal representation of women (less than 40%) in artistic and executive positions <cit.>, while the overall participation of women in the workforce is about 70% <cit.>.
This difference of women's representation in leadership roles raises the question of whether or not women face a `glass ceiling' barrier in the ballet industry <cit.>.
In our complex society, individual characteristics —such as race, religion, education, or gender— have meaningful effects in social behaviors that shape structural disparities, which may be a result of homophily, the preference of individuals to connect with similar others <cit.>.
Network research reveals that structural properties influence the access to information <cit.>, creativity <cit.>, productivity <cit.>, and career success <cit.>.
Moreover, homophilic behaviours embedded to an imbalanced social structure can negatively affect the ranking of individuals from minority groups by enhancing segregation effects <cit.>.
In an imbalanced social structure, individuals may inaccurately estimate the frequency of the minority group, resulting in perception errors regarding the representation of attributes in a social network <cit.>.
As a result, the importance of the minority group can be over- or underestimated relative to what would be expected from its real representation in the network <cit.>.
Since perception errors could reinforce unequal patterns in social connections, such as collaborations, understanding the role of network structure regarding gender imbalances can give an insight to an intervention of equal opportunities in professional positions.
Despite the collaborative nature of ballet creations, previous reports have primarily focused on quantifying the percentage of women and men artists involved, while the role of collaboration structures in contributing to gender imbalances in ballet remains poorly understood.
The existing literature evidences that gendered variations in social network structure contribute to different professional outcomes for men and women <cit.>, highlighting the importance of investigating the gender representation in collaborative structures.
Yet, there is a lack of systematic studies exploring the representation of women and the structural properties of ballet's professional network.
In this work, we investigate the social network structure and collaboration patterns of ballet creations.
We hypothesize that, if the network structure is unbalanced by gender, the imbalanced social structure will align with unequal collaborative behaviors and the existence of perception errors, which could explain why women do not undertake, or are overlooked for, leading positions in this industry.
This research relies on the stable collaborative structure of ballet, which has remained largely unchanged since its origins, to conduct a network analysis with scientific validity.
We construct collaboration networks from four renowned ballet companies and analyze their gender composition.
The collaboration structures studied here mainly comprise a core structure of ballet creators, such as choreographers, composers, and costumes and light designers.
We compare the real-world collaboration structures with randomized network models.
We specifically explore the structural gendered differences and the labor force composition in highly central positions.
We also measure the formation of perception errors on the women's group to examine a possible relationship between gendered collaboration networks and perceived working environment.
To the best of our knowledge, our study is the first attempt to understand the structural gender imbalances in major ballet companies.
This research will help understand the underlying social mechanisms driving gender inequalities in a highly collaborative performing art.
We hope that this work will shed light for more effective interventions to reduce the segregation of women in creative careers.
§ METHODS
§.§ Network of ballet creators
We construct the collaboration networks of ballet creators from four major ballet companies —the American Ballet Theatre (ABT) <cit.>, the New York City Ballet (NYCB) <cit.>, the National Ballet of Canada (NBC) <cit.>, and the Royal Ballet of the Royal Opera House (ROH) <cit.>— based on their worldwide prestige and the availability of their historical repertoire in their website.
Company data are collected using a Robotic Process Automation method for web scraping <cit.>.
Our data collection and research methods were approved on January 18th, 2023, by the Institutional Research Ethics Committee of Universidad del Desarrollo, in Chile.
The collected data includes original ballet titles, as stated in each company's repository, and refers to ballet works with artistic elements that remain constant across time, performances, and productions (e.g. creators, libretto, music, genre).
When appropriate, ballet companies list revivals (recreated works), and/or company premieres (productions that were originally debuted at a different ballet company, but that are presented for the first time in the company listing the work).
Collaboration networks are formed from the teams of leading artists working together to create a ballet work.
Teams of ballet creators are formed from each record of original ballet titles, which includes the credits of leading artists, such as principal creators (choreographer and composer) and specialized roles (librettist, costumes and lighting designer), and does not include the dancers or any other company members.
In a few occasions, companies report the producer, designer (unspecified), and media editor of a ballet work, and other team structures vary in size by adding multiple collaborators for the same role (e.g. two or more composers).
It is important to note that ballet is strongly recognized for its stable collaborative structure, comprising a core structure of leading artists, such as choreographer, composer, librettist, and costume and light designer.
Hence, in constructing the collaboration network of ballet creators, we consider all listed artists in each ballet title as equal contributors to the ballet creation.
Therefore, a ballet collaboration is defined as the creative and collective effort between choreographers, composers, costume designers, lighting designers, and other artists listed by each company, for the creation of a ballet title.
For further details, please see Section <ref>.
The processing of the data is as follows.
In Fig. <ref>a there is an illustration of the data showing a list of ballet titles (as an example, `Ballet 1' and `Ballet 2') with the names of ballet creators (A, B, C, D, and E), and their roles (e.g. Choreography, Music, and Costumes).
Then, all artists who collaborated in a ballet creation together are part of the same team.
To construct the collaboration network of each company, we first build a bipartite network between ballet creations and artists, as seen in Fig. <ref>b, where left nodes represent ballet titles and right nodes display the artists that created a ballet title.
Next, artists’ collaborations are projected to an undirected graph, as shown in Fig. <ref>c, where each node represents one artist, and a link between two artists denotes their collaboration in the same ballet creation.
An artist who teams up in more than one ballet creation will connect multiple artists in the same company, becoming a connector in the collaboration network.
The resulting empirical networks include about 300–560 ballet works, with a range of 490–850 artists (nodes) and 1900–3100 collaborations (links).
In addition, the reported ballet creations range from the 1930s to the 2020s, making the networks comparable in terms of size and longevity.
Basic network properties —such as size of the giant component, average clustering coefficient <cit.>, average shortest path <cit.>, and small-worldness <cit.>— can be seen in Table <ref>.
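To make the construction concrete, the following Python sketch builds the bipartite graph and its artist projection; it assumes a networkx environment and uses an illustrative dictionary of scraped records (all names are placeholders).

```python
import networkx as nx
from networkx.algorithms import bipartite

# Illustrative input: one record per ballet title with its credited leading artists.
ballets = {
    "Ballet 1": ["Artist A", "Artist B", "Artist C"],
    "Ballet 2": ["Artist B", "Artist D", "Artist E"],
}

# Bipartite graph: ballet titles on one side, artists on the other.
B = nx.Graph()
B.add_nodes_from(ballets, bipartite=0)
for title, team in ballets.items():
    B.add_nodes_from(team, bipartite=1)
    B.add_edges_from((title, artist) for artist in team)

# Project onto the artist side: two artists are linked if they created a ballet together.
artist_nodes = {n for n, d in B.nodes(data=True) if d["bipartite"] == 1}
G = bipartite.projected_graph(B, artist_nodes)

# A couple of the basic properties reported for the empirical networks.
giant = max(nx.connected_components(G), key=len)
print(len(giant), round(nx.average_clustering(G), 3))
```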
§.§.§ Gender inference
Artists' names were processed for misspelling, middle names, and initials to distinguish artists' identity.
The names are held constant if reported across multiple companies.
Then, we infer artists' gender by using package for R <cit.>.
This package contains names from various countries and periods and infers gender from standardized databases (, , , and ), making it adequate for this study since the collected data contain names of artists with diverse nationalities who were born in the 19th and 20th centuries.
To estimate an artist's birth year, we assume that each artist was at least 20 years old when they participated in a ballet creation for the first time.
Thus we subtract 20 years from the year of the first ballet production of an artist in our data as a proxy of the minimum age for a productive life in ballet.
This method considers a range of 10 years (± 5 years from the estimated birth date).
Then, the package estimates the probability that a person with that name has a certain gender.
If the probability is larger than or equal to 0.7, the corresponding gender is assigned to each artist.
Here, the assigned `gender' is a binary property (Woman, Man) and does not consider other gender assignments.
Note that the inferred gender does not refer directly to the sex of the artist nor to the self-assigned gender chosen by each artist, but is used as an estimate of the social construction of gender.
Names for which this method could not assign a gender were resolved manually after a web search of the artist's identity.
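As a schematic illustration of the assignment rule described above (probability threshold 0.7 and the birth-year proxy of first production year minus 20, with a ±5-year window), the sketch below uses a hypothetical helper `infer_gender_probability` as a stand-in for the R package; it is not the package's actual API.

```python
def estimated_birth_year(first_production_year: int) -> int:
    # Proxy: the artist is assumed to be at least 20 years old at their first credited ballet.
    return first_production_year - 20

def assign_gender(name: str, first_production_year: int, infer_gender_probability) -> str:
    # `infer_gender_probability` is a hypothetical stand-in for the R `gender` package:
    # given a name and a birth-year window (+/- 5 years around the proxy), it returns
    # the most likely gender and its probability.
    year = estimated_birth_year(first_production_year)
    gender, prob = infer_gender_probability(name, year - 5, year + 5)
    if prob >= 0.7:
        return gender
    return "manual review"  # later resolved via a web search of the artist's identity
```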
§.§ Network analysis and gendered differences in centrality
We measure four network metrics to understand the importance or centrality of artists in the collaboration networks:
* Degree centrality is computed following <cit.> to measure the number of total connections of a node.
This metric can capture the individual access to a richer social capital.
* Harmonic centrality is computed following <cit.>, and is a variant of closeness centrality created to deal with unconnected graphs to measure the distance one node has respect to all other nodes in the network.
In other words, harmonic centrality captures the position of nodes to efficiently reach distant parts of the network.
* Betweenness centrality is computed considering all pair of nodes as described in <cit.> to measure the number of shortest paths between two pairs of nodes that pass through a node in a network.
This metric captures what nodes are best intermediaries or bridges between different parts of the network.
* Eigenvector centrality is computed following <cit.> and measures the importance of a node based on the centrality of its neighboring nodes.
This centrality informs about the nodes who are connected to other influential or central nodes, as these can help gain social prestige in the network.
These metrics are informative on the differential ranking of individuals embedded in the network <cit.>.
For example, one artist with high centrality (e.g. degree) should indicate that the artist has multiple connections in the network, then being well positioned to have more access to information, social connections, and professional opportunities.
In a global sense, these centrality metrics help identify structural patterns within a network, providing insights into the underlying relationships between individuals that ultimately shape the network.
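A minimal sketch of how these four metrics can be computed with networkx; the calls below are standard networkx functions, while the surrounding conventions are illustrative.

```python
import networkx as nx

def centrality_table(G: nx.Graph) -> dict:
    """Compute the four centrality metrics used in this study for every artist."""
    return {
        "degree": nx.degree_centrality(G),
        "harmonic": nx.harmonic_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        # Eigenvector centrality may need more iterations to converge on sparse graphs.
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    }
```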
From the centrality metrics, we sort all artists by their centrality in descending order, and selected a group of top 20 artists, referred as Top-Central Artists (TCA) in this study.
Consider the ranking of centralities C(r), where C denotes the centrality value of the artist at rank r, for r ∈{1, 2, …, 20}, so that r = 1 represents the most central artist with the highest centrality (e.g. C(1) = 0.8) and r = 20 the lowest (e.g. C(20) = 0.2).
We select the top 20 as this fraction captures the largest observed variation of centrality values in the empirical networks, and between and within gender groups.
Then, by analyzing the TCA, we capture the artists with best connected individuals in the network and the differences in network positions across gender categories.
Next, we implement the TCA ranking to form three independent groups: the first group is for all artists in a company's collaboration network, labeled as TCA_Network, and the other two groups are for a company's artists grouped by gender, which results in two separate rankings for TCA_Women and TCA_Men.
All centralities are then normalized by the maximum value of the centrality within company group (Network) and by company gender groups (Women, Men), to have a linear scaling of [0,1] range.
In more detail, the TCA_Network uses a dense rank function, which generates rank ties for observations with the same centrality values, so a variation of the total number of artists is possible if there are artists with equal centrality at each rank.
For TCA_Women and TCA_Men, the tied centrality is not considered to keep an equivalent number of women and men artists (i.e. 20 artists per group).
Separately, we quantify the women ratio R_Women in each TCA_Network, computed as R_Women = ∑_i^N_TCAθ(i)/N_TCA.
Here, i denotes an index for an artist who is in a corresponding TCA, where N_TCA represents the total number of artists in a TCA_Network, and θ(i)=1 when an artist is woman, or 0 for men.
Then, R_Women provides the fraction of women artists who belong to the group of best connected individuals in the collaboration network of a ballet company.
A fraction of women artists of 0.5 at the network level is taken to represent gender-balanced collaborations, and we refer to this situation as a `neutral' composition.
Further, the difference in centrality Δ C(r) between two rank-matched artists from each gender group is measured as Δ C(r) = C_Men, r - C_Women, r.
Here, each woman artist from TCA_Women is matched to their corresponding r pair from TCA_Men.
That is to say, if there is a woman artist ranked 1 in TCA_Women with a centrality value of 0.4, she is at the most central position in the women's group, and it can be written as C_Women, 1 = 0.4.
The counterpart of man artist, who is ranked 1 as well in TCA_Men will be C_Men, 1 = 0.5, if he has a centrality of 0.5.
Then, Δ C(1) = 0.5-0.4 = 0.1.
If Δ C(r) > 0, it means that a man artist is located on more central position than the woman counterpart.
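The following sketch illustrates the construction of the TCA groups, R_Women, and Δ C(r) from a centrality dictionary; it is a simplified rendering (the dense-rank tie handling used for TCA_Network is omitted) with illustrative data structures.

```python
import numpy as np

def top_central(centrality: dict, gender: dict, n_top: int = 20):
    """Rank artists by a centrality dict {artist: value} and build the TCA groups."""
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    tca_network = ranked[:n_top]
    tca_women = [a for a in ranked if gender[a] == "Woman"][:n_top]
    tca_men = [a for a in ranked if gender[a] == "Man"][:n_top]
    return tca_network, tca_women, tca_men

def women_ratio(tca_network, gender):
    # R_Women: fraction of women among the best connected artists of a company.
    return float(np.mean([gender[a] == "Woman" for a in tca_network]))

def centrality_gap(tca_women, tca_men, centrality):
    """Delta C(r): rank-matched man-minus-woman centrality after per-group max scaling."""
    c_w = np.array([centrality[a] for a in tca_women], dtype=float)
    c_m = np.array([centrality[a] for a in tca_men], dtype=float)
    return c_m / c_m.max() - c_w / c_w.max()
```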
§.§.§ Null model analysis
We compute two different null models by simulating 100 synthetic networks derived from the representation of each company's empirical collaboration network.
With the help of the null models, we remove the collaborator- or gender-preferences by shuffling collaborations (links) or artists' attributes (gender) in the collaboration network.
The overall purpose of the null models is to create a baseline of randomized networks, which allows us to determine whether the patterns observed in the empirical network could arise by chance.
* Edge-shuffled model: In this model, edges are randomly rearranged in the network while preserving artists' degrees.
This means that the total number of collaborations per artist is preserved, as well as the total number of artists (nodes) in the network and artists' gender.
We use the ‘random_reference’ function of <cit.>.
From this shuffling, we remove the gendered correlation from empirical collaboration networks.
Therefore, the resulting synthetic networks show collaboration structures when there is no gender preference.
* Gender-shuffled model: This model shuffles the gender of artists while holding all network properties constant.
Here, the empirical network structure is used as a reference, without nodes’ attributes, over which a dictionary containing the gender of all nodes is used to randomly assign artists' gender, while preserving the real fraction of women and men in the network.
In this way, artists’ network position are preserved, but their gender is randomized in each iteration.
Therefore, the resultant networks display an artificial collaboration pattern without a correlation between an artist's gender and position, as well as a gendered collaboration assortativity.
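The two null models can be sketched as follows; the sketch assumes the collaboration graph stores the inferred gender as a node attribute named `gender` (an assumption of this illustration), and it restricts the degree-preserving rewiring to the giant component for simplicity.

```python
import random
import networkx as nx

def edge_shuffled(G: nx.Graph, seed=None) -> nx.Graph:
    # Degree-preserving rewiring: artists keep their number of collaborations and gender.
    # Restricting to the giant component is a simplification of this sketch.
    giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    return nx.random_reference(giant, niter=10, seed=seed)

def gender_shuffled(G: nx.Graph, seed=None) -> nx.Graph:
    # Keep the structure fixed and randomly reassign the gender attribute,
    # preserving the overall fraction of women and men.
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes)
    genders = [H.nodes[n]["gender"] for n in nodes]
    rng.shuffle(genders)
    for n, g in zip(nodes, genders):
        H.nodes[n]["gender"] = g
    return H

# 100 synthetic networks per model, as in the null-model analysis:
# null_edges = [edge_shuffled(G, seed=s) for s in range(100)]
# null_gender = [gender_shuffled(G, seed=s) for s in range(100)]
```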
To test a null hypothesis distribution, we compute the Z-score for a distinction between the centrality values from the empirical networks and those from the null models.
We denote the observed centrality by rank in the real network as C(r)_real, and that of the null model as C(r)_null.
Then, the Z-score for any TCA group uses the centrality from the empirical network, C(r)_real, and the averaged centrality of 100 null models, C̅(r)_null, so that Z(C) = (C(r)_real - C̅(r)_null) / σ(C(r)_null).
The Z-score of Δ C(r) in the empirical network is also measured with the values of the synthetic networks, as Z(Δ C) = (Δ C(r)_real - ΔC̅(r)_null) / σ(Δ C(r)_null).
§.§ Perception error on women artists
To understand more about the implications of the gendered differences in the collaborative environment, we use a mathematical approach to measure the existence of perception errors based on <cit.>.
Perception errors refer to the inaccuracy in the estimation of the frequency of an attribute —usually of a minority group— in a social network, perceived from the frequency of that attribute within the individual local network <cit.>.
In this research, perception errors are the difference in the perceived fraction of women artists from the local network, respect to the fraction in the entire network.
For instance, if the local network contains mostly women but the entire network contains more men, an individual will have a perception error above 1, overestimating the size of the women's group; a value below 1 corresponds to an underestimation of the women's group.
When the perception error is equal to 1, the perception of the fraction of women in the network is accurate.
The perception error B of an individual artist i is thus computed as B_i = W_i/R_Women, where W_i denotes the local fraction of women among i's collaborators, and R_Women refers to the real fraction of women in the network, as noted above.
Based on the individual artist's perception error, we measure an averaged perception error by gender group at a network-level, so B̅_Women, Men =∑_i B_i/N_Network, where N_Network represents the total number of artists in a ballet company.
Consequently, when B̅ = 1, it means that the overall perception of women on a company is accurate on average, and when B̅ < 1 (B̅ >1 ), a gender group underestimates (overestimates) the ratio of women artists on average.
In addition, a gendered homophily is measured following the method in <cit.> to see gendered preferences of the collaboration networks.
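A minimal sketch of the perception-error computation, assuming the same `gender` node attribute as above; the group-level average is taken here as a simple mean over the artists of each gender group.

```python
import numpy as np
import networkx as nx

def perception_errors(G: nx.Graph, r_women: float) -> dict:
    """B_i: local fraction of women among i's collaborators over the global fraction of women."""
    B = {}
    for i in G.nodes:
        neighbors = list(G.neighbors(i))
        if not neighbors:                # isolated artists carry no local perception
            continue
        w_i = np.mean([G.nodes[j]["gender"] == "Woman" for j in neighbors])
        B[i] = w_i / r_women
    return B

def group_average_error(B: dict, G: nx.Graph, group: str) -> float:
    # Mean of B_i over the artists of one gender group; values below 1 indicate that the
    # group underestimates the global fraction of women on average.
    vals = [b for i, b in B.items() if G.nodes[i]["gender"] == group]
    return float(np.mean(vals))
```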
§ RESULTS
Based on previous reports on the lack of representation of women in leading positions in ballet <cit.>, we explore the general composition of the collaboration networks of ballet creators and the existence of gendered collaboration patterns in the professional environment.
We also look into the composition of the most central network positions and the gender gap between men and women's centrality in the network; in addition, we measure the existence of perception errors of the women's artists group within ballet companies.
We compare network position and perception errors from the empirical network structures with two null model analyses.
§.§ Team structure and collaboration patterns
The most common team size for a ballet creation across companies is three to four (20–40%), followed by five members (20%), as shown in Fig. <ref>a.
This evidences that teams of ballet creators are mostly formed by the typical collaborative structure of leading artists.
Fig. <ref>a shows a sample of the representation of women in a ballet company (ROH): about 50% of teams are composed entirely of men, and fewer than 10% of teams reach a gender-neutral ratio of 50%.
Conversely, the majority of teams include less than 50% women artists, regardless of their size, and teams composed entirely of women are almost nonexistent.
Dance communities have specifically reported an overlooking of women in choreographic leads, and our results suggest that women are less represented than men in general leading roles.
Exploring the team composition by artistic role, Fig. <ref> shows that the proportion of women is considerably low for Choreographer, Librettist, and Composer.
Other positions such as Costumes, Lighting, and Design have a relatively larger participation of women.
These results suggest that variations between women and men across artistic roles are possible.
However, because most teams of ballet creators are formed by a core structure of leading artists, here we focus on the structural representation of women at both team and network-levels, rather than an individual-artistic role level.
Further, in Fig. <ref>b we see that when women collaborate in one team, the frequency of working with other women in the same team is actually very low (< 30%).
These results describe that women artists mostly work in men-dominated environments.
In addition, men-alone teams are rather rare (< 10%), as men tend to collaborate with at least three to five other men (> 20%) and participate in considerably larger teams than women (up to 11 men in one team, at ROH).
In terms of productivity, women artists are less involved in ballet creations than men artists.
One ballet creation refers to the participation of an artist in a team as leading artist for the creation of a ballet work.
In NYCB and ROH, the most productive woman participates in only about 20–25% as many creations as the most productive man (ROH's maximum collaborations: Men = 76, Women = 16; NYCB's maximum collaborations: Men = 211, Women = 54; see Figures <ref>c and <ref>b).
For NBC, the highest productivity is more similar across genders: the most productive woman reaches 86% of the collaborations of the most productive man (NBC's maximum collaborations: Men = 38, Women = 33).
Only at the ABT does the most productive woman artist exceed the most productive man artist, by 20 collaborations (ABT's maximum collaborations: Men = 35, Women = 55).
Despite the exception, most women artists are less productive than their men counterparts, and the global picture for women is to work in men-dominated creative environments.
Team structures, and collaboration and productivity patterns, are similar across all companies studied here (for more details and figures by company, see Section <ref>).
§.§ Centrality differences by gender
So far, we have observed a less frequent participation of women than men in ballet collaborations.
These observations raise the question: Does the low participation of women relate to their network position in the company?
To answer this question, we explore the distribution of artists' collaborations in the network.
We first compute the fraction of women in the network, R_Women, and the proportion of dyadic interactions (see Table <ref>), showing that most companies only have about 20% of women in leading positions.
Figure <ref>a shows a network sample, where men (in yellow) are not only a majority but also show higher connectivity than women (in purple).
(See all companies' collaboration networks in <ref>).
Moreover, the man-man connections are more than 60% across companies (yellow links, Fig. <ref>b) and mixed connections are about 30% on average.
On the other hand, woman-woman connections are less than 5% of the total dyadic interactions (purple links, Fig. <ref>c).
These results indicate that, for every four men, there is only one woman in the network, a collaborative structure in which men artists are densely connected to other artists regardless of gender and are located at the center of the collaboration network, while women artists are sparsely distributed in its periphery.
We then evaluate the proportion of women in the group of top-central artists, TCA_Network, by their network centrality rank, C(r), and observe that most companies have a lower representation of women relative to R_Women in the empirical network.
We observe an overall increase of R_Women in the randomized models for all centralities (see all companies in Fig. <ref>).
For example, the Edge-shuffled model improves R_Women for harmonic centrality from 10% to 15%, and the Gender-shuffled model raises it to 19% in the sample of the ROH, a fraction that matches the R_Women of the total empirical network (Fig. <ref>).
Note that the Edge-shuffled model keeps R_Women in TCA regarding degree centrality because the number of collaborations (degree) for an artist and their inferred gender are held constant in this model.
These results suggest that the low representation of women artists in ballet creations could be related to gender assortative collaborations, and the current level of women artists' centrality is not a deterministic outcome of the small fraction of women artists in the company.
That is to say, even when the fraction of women remains small in a network, women artists' representation could be improved if more equal collaborations for women were encouraged.
The Z(C) reveals a general change in artists' centrality with the null models (see ROH's sample in Fig. <ref>, all companies in Fig. <ref>a).
For the Edge-shuffled model, only the harmonic centrality displays a negative Z-score for both women and men (sample in Fig. <ref>a).
Harmonic centrality denotes an extent of an artist's closeness to other artists on average, so small value represents far distance between artists.
The negative Z-score suggests that the distances among artists in the empirical collaboration networks are larger than the distances expected from the null models.
In other words, the distribution of TCA in the empirical networks is more central, suggesting that TCA can reach other artists in the network more efficiently relative to the distances observed in the null models.
For the Gender-shuffled model, the negative women artists' Z-scores indicate that their positional importance can be improved in a synthetic network with collaboration imbalances (sample in Fig. <ref>b).
Altogether, our results suggest that differences in centrality among TCA may not be derived by random factors, but there may be underlying systematic social behaviors limiting women artists' collaborations and network position, regardless of their small fraction in the network.
The difference in degree centrality (Δ C) highlights that a man artist occupies a more central position than the same-ranked woman artist in her TCA group.
Figure <ref>c shows a sample for degree centrality and reveals that the most central man is considerably more central than the most central woman.
This trend of Δ C is observed across centralities with slight variations, confirming that men are considerably better positioned than women across companies (see the gender gap in centrality for all companies in Fig. <ref>).
Interestingly, all empirical Z-scores for Δ C are several standard deviations away compared to the null models (see all companies in Fig. <ref>b).
Figure <ref>c illustrates the variations by null model, and showcases that a large gender gap is less likely observed when the gender preference (Edge-shuffled) and gendered productivity correlation (Gender-shuffled) are destroyed.
§.§ Perception error of women artists
Given the observed structural imbalances in the ballet collaboration networks, the low participation of women in professional collaborations could affect the perceived frequency of women artists in the entire network.
Perception errors are the distorted frequency estimation of an attribute in a social network by the individual local environment <cit.>.
Here, the perception error is defined as the fraction of the observed frequency of women in an artist's local collaboration network over the real fraction of women in the global network (see Methods).
That is, the perception error denotes a relative difference of women artists in the local collaboration environment of each artist and the actual women artists' frequency in each ballet company.
From the individual-level perception error B_i, a gender group-level error B̅ compares the average perception error for women and men.
If B̅ > 1.0 (B̅ < 1.0), it means a gender group overestimates (underestimates) the global frequency of women artist.
When B̅ = 1.0, it denotes an accurate perception on the women frequency (see Methods).
We complement perception error with a measure of homophily.
Our results show that in the empirical collaboration networks of the ABT, NYCB, NBC, and ROH, the women (men) artists' homophily is 0.56 (0.53), 0.47 (0.55), 0.45 (0.57), and 0.56 (0.63), respectively (1 is perfect homophily, and 0 is perfect heterophily).
The ABT has relatively gender-mixed environment, resulting in both gender groups having relatively accurate perception on the global fraction of women artists, as shown in Fig. <ref>a.
Conversely, the rest of the companies demonstrate a wide difference in the perception error by gender, shown in Fig. <ref>b–d.
For instance, NYCB's men group underestimates women artists by about 7%, while its women group underestimates themselves by about 27%, a 20-percentage-point difference in the perception of women between the two groups.
Such a difference may be related to men artists' strong homophily in NYCB collaborations and women artists' gender-heterophilic collaborations (woman–man heterophily 0.53 > woman–woman homophily 0.47), and indicates an underestimation of women artists by themselves.
In ROH, women artists have a more accurate estimation of women artists than men artists do, which aligns with their collaborative behavior: women artists collaborate more with other women artists than with men artists (woman–man heterophily 0.44 < woman–woman homophily 0.56).
Yet, the difference in perception still exists since men artists collaborate mostly with men artists (man-man homophily 0.63, man-woman heterophily 0.37), and the assortative collaboration widens the difference in perception between gender groups.
Interestingly, the Edge-shuffled model displays a reduction in perception error difference between women and men, even though the reduction is limited.
The reduction of the perception error between women and men suggests the existence of gender-preferred collaborations between gender groups.
Moreover, the Gender-shuffled model not only sensibly reduces the difference in the average perception error for women and men, but also achieves a nearly accurate perception on the fraction of women artists.
This strongly suggests that reducing imbalanced productivity and gendered preferences together boosts the representation of women artists, even given the small share of women artists in the company.
§ DISCUSSION
Inequalities have been investigated for different occupations to capture the gender gap in salary and labor force composition <cit.>.
In this context, we find that the representation of women artists in ballet creations is about 18–22%, which is lower than the reported 25% for choreographic leads <cit.>.
These values are far below a gender-neutral ratio (0.5, see Table <ref>) and are lower for highly central artists (in this study, TCA groups).
In general, we find that women artists are underrepresented in the overall collaboration network and all positions as leading artists, not only as choreographers.
In addition to the numerical imbalance, our results suggest that gendered collaboration structures could potentially aggravate gender imbalances in ballet creations.
Crucial roles of individual's social network are associated with the access to information and professional opportunities in creative collaborations <cit.>.
Thus, our results increase the understanding of how gendered collaborations could impact artists' professional experiences and the social perceptions of women as a minority group.
The comparison with null network models gives a hint that the observed gender imbalances in terms of central network positions could be explained by systematic inequalities in collaborative behaviors rather than by random factors.
The social network structures, such as network position and social prestige, play an important role shaping successful careers <cit.>.
Some studies show that men and women utilize different social network structures and behavioral patterns that influence their placement in the job market <cit.>.
Other studies show that the formation of a personal network and social behaviors over time are related to reinforced perception errors <cit.>.
Taken together, these studies suggest the existence of a permanent feedback for the formation of social relationships, social perception errors, and collaboration patterns, which in return, can influence individual career decisions.
For women in ballet, feedback based on a low representation within creative, men-dominated collaborations could negatively impact their decision to undertake a career as ballet creators or to engage in multiple collaborative projects.
A future study in this line could provide evidence on why women in ballet experience a `glass barrier' in the field, and whether women's network position facilitates their career success and impact in the long term.
In addition, the collaboration structure can be crucial for teams <cit.> and individual performance <cit.> in terms of creativity and success <cit.>.
A study demonstrates that diversity can improve creative performance <cit.>, and emphasizes the participation of women in collaborative environments because they increase the social sensitivity of the group, making the team collectively more intelligent and proficient <cit.>.
In view of this, new policies for more equal collaborations and a more inclusive environment for women as leading creators should be considered in ballet companies.
A more diverse inclusion would boost creative innovation and impact, which ultimately benefits the artistic community in general.
At an individual level, perception errors derived from the network structure in the workplace can affect career decisions.
In specific, our results reveal that most companies experience perception errors on the fraction of women artists participating at the company.
The constant underrepresentation of women could negatively impact their visibility as a group, undermining the motivation of women artists to look for better professional opportunities.
This interplay among working environment, perceived possibility of career development, and personal decisions could be a pivotal issue to alleviate the low representation of women in ballet and other industries where women are a minority.
Our measure of perception errors is a mathematical approach and can be improved, as there are multiple factors influencing the perception of a local network structure.
That is to say, a local network can be described not only by its structure, but also by its embedded social mechanisms, such as the strength of relationships formed over time, access to information, formal and informal norms <cit.>, and individual cognitive processes and preferences <cit.>.
In addition, ballet is strongly influenced by biological constraints, such as the physical demands of the art form, including strength, flexibility, and technical requirements.
These constraints, combined with the distribution of labor in family responsibilities, may be stronger for women <cit.> and may contribute to fewer women to overcome social barriers in the workplace and hinder the professional development of women artists in ballet.
Overall, our results help understand another dimension of gender inequalities in the ballet industry.
Yet we are aware of the limitations of this work.
Our data depend on the archives of the selected ballet companies, which may not be sufficient to generalize the current results to the entire ballet industry.
Moreover, artists may hold different types and duration of contracts within a company, which can result in variations in observed professional collaborations.
To overcome this, more comprehensive digitized data collections would be needed.
For instance, with the implementation of computational methods, such as deep learning and network science, it has been possible to objectively measure the impact of individual performance in creative domains <cit.>, and similar methods can open the possibility for future research on the relationship between gender, network centrality, and actual ballet creators' impact in the field.
In summary, our research highlights the low representation of women as ballet creators and sheds light on their peripheral network position and gendered collaboration preferences within the ballet industry.
This investigation can be extended to explore dynamic network factors shaping gender imbalances to propose possible and more adequate interventions for the diversity, equity, and inclusion in cultural organizations.
We hope that this work brings awareness on how social phenomena and inequalities in creative domains can be systematically studied with network science and data driven methods.
§.§ Data availability
The data used in this study is available under reasonable request.
§.§ Abbreviations
ABT, American Ballet Theatre; NBC, National Ballet of Canada; NYCB, New York City Ballet; ROH, Royal Ballet of the Royal Opera House; TCA, Top-Central Artists.
§.§ Availability of data and materials
The data analyzed during the current study are available from the corresponding author on reasonable request.
§ COMPETING INTERESTS
The authors declare that they have no competing interests.
§.§ Funding
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.NRF-2022R1C1C1005856), the National Agency of Investigation and Development, ANID, through the grant FONDECYT No. 11190096, the KENTECH Research Grant(KRG 2021-01-003), and the Pukyong National University Research Fund in 2022(202203530001).
§.§ Author's contributions
All authors contributed to the research design and writing of the paper.
YHG contributed with art-specific knowledge, constructed the data and networks, developed and performed the models, analyzed the data, and performed data visualizations; EL was mainly responsible for the measurement of perception errors and homophily; and HK contributed to data construction and network analysis.
EL and HK supervised the research.
All authors discussed the results and contributed to writing the manuscript.
All authors read and approved the final manuscript.
§.§ Acknowledgements
YHG acknowledges the Centro de Investigación en Complejidad at Universidad del Desarrollo, Chile, for the financial support to conduct this research.
[heading=subbibintoc]
Supplementary Information for
Structural Gender Imbalances
in Ballet Collaboration Networks
Yessica Herrera-Guzmán, Eun Lee, Heetae Kim
*Corresponding Author. Email: [email protected]@kentech.ac.kr
This file includes:
Supplementary Text
Figures S1 to S7
§ BALLET COLLABORATIONS
We define a ballet collaboration as the creative effort between choreographers, composers, costume designers, lighting designers, and other artists listed by each company, for the creation of a ballet work.
In practice, other types of ballet collaborations are possible, such as those that put the ballet work on stage and are involved in the development of a production (e.g. production managers, technicians, theatre staff).
Also, some ballet creations require the direct collaboration of the choreographers with the ballet dancers.
Due to limited access to company data, we only consider the artistic roles as defined in a ballet collaboration.
§ CENTRALITY DIFFERENCES BY GENDER AND NULL MODELS ANALYSES
|
http://arxiv.org/abs/2306.11979v1
|
20230621021207
|
Qini Curves for Multi-Armed Treatment Rules
|
[
"Erik Sverdrup",
"Han Wu",
"Susan Athey",
"Stefan Wager"
] |
stat.ME
|
[
"stat.ME"
] |
Qini Curves for Multi-Armed Treatment Rules
Erik Sverdrup
Han Wu
Susan Athey
Stefan Wager
June 20, 2023
===========================================================
Qini curves have emerged as an attractive and popular approach for evaluating the benefit of data-driven targeting rules for treatment allocation. We propose a generalization of the Qini curve to multiple costly treatment arms, that quantifies the value of optimally selecting among both units and treatment arms at different budget levels. We develop an efficient algorithm for computing these curves and propose bootstrap-based confidence intervals that are exact in large samples for any point on the curve. These confidence intervals can be used to conduct hypothesis tests comparing the value of treatment targeting using an optimal combination of arms with using just a subset of arms, or with a non-targeting assignment rule ignoring covariates, at different budget levels. We demonstrate the statistical performance in a simulation experiment and an application to treatment targeting for election turnout.
§ INTRODUCTION
The Qini curve, initially proposed in the marketing literature <cit.>, plots the average policy effect of treating the units most responsive to the treatment as we vary the budget. We can then quantify the value of treatment targeting by evaluating a cost-benefit exercise undertaken at a series of distinct budget levels. The Qini curve has been adopted in a variety of practical applications to evaluate the empirical performance of treatment targeting rules subject to resource constraints <cit.>.
The theoretical properties of Qini-like metrics under a binary treatment, and extensions to area under the curve summaries, have recently received attention in the statistics literature by a number of authors, including <cit.>, and <cit.>. These approaches consider the problem of targeting the assignment of a (possibly costly) binary intervention. In this paper, we explore the extension to scenarios where there are multiple treatment arms, and where the benefits and costs of assignment may vary across units. For example, a low-cost drug may be beneficial for a certain group of people, but a high-cost drug may be even more beneficial for a subset of these. Analyzing this setting through separate Qini curves for the two arms can conceal important efficiency trade-offs. For a specific budget, the optimal policy may entail assigning different drugs to different people; a less expensive drug for one group and a costlier drug for another. Determining the optimal treatment assignment policy that maps individual characteristics to one of several treatment arms involves solving a constrained optimization problem.
We develop the theoretical and statistical framework to extend the Qini curve to the case where we have many mutually exclusive and costly treatments. We show that the Qini curve extended to multiple arms retains the desirable ratio-based interpretation of the Qini for a single treatment arm, where it is not the absolute costs that determine the optimal allocation, but rather the incremental efficiency of each arm. This means that it is not necessary to denominate treatment effects and costs on the same scale. An additional unit of budget is allocated to an arm and a set of targeted participants (defined by their characteristics) if the ratio of the benefits to the set of participants relative to the cost is greater than the corresponding ratio for any other arm and set of participants.
To gain an intuition for the generalization of the Qini to multiple arms, recall that the Qini curve for a single arm is an evaluation metric for evaluating a treatment rule induced by a policy. With a single treatment arm, where for simplicity the cost of assignment is the same for each unit, the optimal policy is to allocate treatment in decreasing order of the conditional average treatment effect. Given estimates of these treatment effects, the traditional Qini curve plots the estimated value of assigning treatment to individuals as prioritized by their estimated treatment effects. Figure <ref> shows examples of Qini curves as dashed lines. For example, if we can only use arm 1 and have a total budget of 0.2, then we can achieve a gain of 0.52; whereas if we can only use arm 2 the same budget yields an estimated gain of 0.56. Note that, once we pass a spend-level of 0.3, the arm-1 Qini curve plateaus—this is because, once we've reached this spend level using arm 1, we're already giving treatment to all units believed to benefit from it, and so cannot achieve further gains via increased spending.
The Qini curves for a single treatment arm in Figure <ref> are straightforward to compute, as the underlying policies induce a priority rule that involves sorting units in order of the estimated conditional average treatment effect. Computing the optimal allocation for a multi-armed policy is more complicated, as it involves solving a constrained cost-benefit problem across many arms. We show that, even though the underlying multi-armed policies are more complicated, they still yield an induced treatment rule that can be evaluated with Qini curves, just like the single-armed case. The solid black line in Figure <ref> shows the Qini curve for the estimated multi-armed policy, and highlights that since different arms can be better for different groups, targeting enables the different arms to be assigned accounting for the cost-benefit analysis appropriate for distinct subgroups. For example, with a budget of 0.2, we can now achieve a gain of 0.68, which is better than what we could get with either arm alone.
Incorporating additional arms beyond two improves (i.e., raises) the Qini curve for two reasons. First, even in the absence of targeting, expanding the budget leads to greater use of arms that on average are less efficient (lower benefit-cost ratio) but are relatively beneficial. Second, targeting allows the identification of subgroups who particularly benefit from arms that might perform poorly on average, and thus not be prioritized in the absence of targeting.
We characterize the optimal multi-armed policy, showing that when expanding the budget, the optimal assignment selects units to receive more effective treatments according to where the incremental benefit-cost ratio is highest. We further show how, for given characteristics of a unit, the optimal policy can be characterized by a set of budget thresholds where the unit's assignment changes to a more beneficial but less efficient arm. We propose an efficient algorithm for estimating the solution path of the multi-armed policies that underlie the Qini curve, where the algorithm allocates initial budget efficiently, and then makes use of our theoretical characterization to allocate incremental spend to the most incrementally efficient units.
Our main theoretical result quantifies uncertainty for points on the Qini curve via a central limit theorem for the estimated multi-armed policy values. The result takes estimates of conditional average treatment effects (over a control) and expected costs as given, but accounts for the uncertainty from approximating the optimal allocation for each level of budget, and from estimating the policy value for that allocation. The central limit theorem can be used to estimate the difference between two Qini curves at a given budget, for example, alternative Qini curves induced by alternative treatment effect estimators, or Qini curves estimated for subsets of treatment arms, or without targeting.
An open-source software implementation of the proposed method is available at https://github.com/grf-labs/maqgithub.com/grf-labs/maq.
§ THE SOLUTION PATH FOR OPTIMAL MULTI-ARMED TREATMENT ASSIGNMENT
To characterize the optimal multi-armed treatment allocation, we operate under the potential outcomes framework <cit.>. We assume that we observe independent and identically distributed samples (X_i, W_i, Y_i, C_i) ∼ P for i=1,…,n, where X_i ∈𝒳 denotes pre-treatment covariates, W_i ∈{0, 1,…, K} denotes the treatment assignment (W_i=0 is the control group), Y_i ∈ℝ denotes the observed outcome, and C_i ∈ℝ denotes the incurred cost of assigning the unit the given treatment. We posit the potential outcomes {Y_i(0),…, Y_i(K)}, {C_i(0), …, C_i(K)} and we assume Y_i = Y_i(W_i) and C_i = C_i(W_i) (SUTVA). Defining costs via potential outcomes is a convenient modeling approach as it can capture settings where costs are not realized until after a particular treatment arm has been assigned <cit.>.
For the mutually exclusive treatment arms k = 1,…, K, let τ(X_i) and C(X_i) denote the vectors of conditional average treatment effects and cost contrasts, i.e. the k-th elements are:
τ_k(x) = 𝔼[Y_i(k) - Y_i(0) | X_i = x],
C_k(x) = 𝔼[C_i(k) - C_i(0) | X_i = x].
We assume that withholding treatment is costless, i.e. we have access to a control arm that does not incur a cost.
C_i(0) = 0 and C_i(k) ≥ C_i(0) almost surely, and 𝔼[C_i(k) - C_i(0) | X_i = x] > 0 for all k = 1,…,K.
Our goal is to understand how much there is to gain from treatment targeting if treatment is assigned optimally. To do so, denote a policy by π: 𝒳→ℝ^K, a mapping from covariate X_i to a treatment assignment. The policy π(X_i) is a K-dimensional vector where the k-th element is equal to 1 if arm k is assigned, and zero otherwise.[Fractional assignments between 0 and 1 are admissible and can be interpreted as probabilistic assignment between arms.] The associated value of this treatment assignment policy is the expected value:[In the policy learning literature it is sometimes common to define the value of a policy via potential outcome means <cit.>. Had we instead encoded π to take values in the set {1, …, K} then an equivalent formulation of the gain (<ref>) would be V(π) := 𝔼[Y(π(X_i)) - Y(0)].]
The expected gain (policy value) of a treatment assignment policy is the expected value it achieves in comparison to assigning each unit the control arm,
V(π) = 𝔼[⟨π(X_i), τ(X_i) ⟩],
where the notation ⟨ a, b⟩ denotes an inner product between vectors a and b.
Similarly, the cost of this policy is defined as Ψ(π) = 𝔼[⟨π(X_i), C(X_i)⟩]. The optimal policy is the one that, for a given budget level, maximizes the expected gain while incurring costs less than or equal to the budget in expectation. Given a budget B, the optimal unrestricted policy π^*_B that only depends on X_i solves the following stochastic optimization problem:
π^*_B = argmax_π{V(π): Ψ(π) ≤ B}.
In the case of only a single treatment arm (K=1), but where each unit's cost may be different, (<ref>) is an instance of the fractional knapsack problem <cit.> and the optimal policy induces an appealing treatment rule allocating treatment to units in decreasing order of the cost-benefit ratio 𝔼[Y_i(1) - Y_i(0) | X_i=x] / 𝔼[C_i(1) - C_i(0) | X_i=x] until the budget runs out <cit.>. The treatment allocation in this induced ranking constitutes the solution path over varying budget levels.
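As an illustration, the single-arm Qini curve can be traced on a test set with a few lines of code; this is a schematic sketch rather than the actual implementation in the accompanying software, where `gamma` denotes evaluation scores whose conditional mean approximates the treatment effect (discussed in Section <ref>).

```python
import numpy as np

def single_arm_qini(tau_hat, cost_hat, gamma):
    """Trace a single-arm Qini curve on a test set.

    tau_hat, cost_hat: estimated treatment effect and cost per test unit
    (fit on a separate training sample); gamma: evaluation scores whose
    conditional mean approximates the treatment effect.
    """
    tau_hat, cost_hat, gamma = map(np.asarray, (tau_hat, cost_hat, gamma))
    n = len(tau_hat)
    keep = tau_hat > 0                      # units believed not to benefit are never treated,
    order = np.argsort(-tau_hat / cost_hat)[: keep.sum()]  # which is why the curve plateaus
    spend = np.cumsum(cost_hat[order]) / n  # average spend per test unit
    gain = np.cumsum(gamma[order]) / n      # estimated average gain over control
    return spend, gain
```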
The multi-arm case (K > 1) is more complicated, as (<ref>) then belongs to the class of multiple-choice knapsack problems <cit.>, a type of optimization problem that involves filling a knapsack up to a capacity by selecting at most one item from a set of classes, where each item has an associated “profit” and “weight”. In our formulation, the class is a unit and the item is a treatment arm with the profits and weights corresponding to the conditional average treatment effect and cost of the particular arm. The knapsack capacity is the budget constraint. Allowing for fractional treatment allocation reduces this problem to a linear program with nK choice variables. Using the transformation principles presented in <cit.>, it is possible to recast this into inducing a similar treatment priority rule, but where the priority is based on “incremental” cost-benefit ratios.
§.§ Characterizing the Optimal Polices
The idea behind characterization via incremental cost-benefit ratios is to recast the problem of choosing between both units and treatment arms into thresholding a suitable priority rule that captures both which unit and which arm is optimal to assign at a given budget level. For any given unit i, the only treatment arms that will be active in the optimal solution are the ones that lie on the convex hull of the cost-reward plane <cit.>. For any x ∈𝒳, define the convex hull formed by the points (C_k(x), τ_k(x)), k = 0,…,K to be a set of m_x points with the ordering k_1(x), …, k_m_x(x) such that
0 = C_k_1(x)(x) < ⋯ < C_k_m_x(x)(x)
0 = τ_k_1(x)(x) < ⋯ < τ_k_m_x(x)(x)
ρ_k_1(x)(x) > ⋯ > ρ_k_m_x(x)(x) > 0
where we define the incremental cost-benefit ratio as
ρ_k_j(x)(x) := (τ_k_j(x)(x) - τ_k_j-1(x)(x)) / (C_k_j(x)(x) - C_k_j-1(x)(x))
and we let ρ_0(x) = ∞ and ρ_k(x) = -∞ if k ∉{k_1(x),…,k_m_x(x)}.
Figure <ref> illustrates the case of optimally assigning treatment for a single unit i. If we have an available budget of 1, it would be optimal to assign arm 3 to the i-th unit. If we increase the available budget to 2, then we have two choices: upgrade to either arm 2 or 4. Since arm 2 lies outside the convex hull, it is strictly sub-optimal to assign this arm, and the optimal assignment is arm 4. For the optimal policy, we are faced with a distribution of convex hulls, one for each realized sample unit, and have to decide whether to assign a new unit a treatment or upgrade an existing unit to a costlier arm. The key insight from <cit.>, which carries over to the stochastic setting, is to realize that what matters in each of these convex hulls are the slopes of the tangent lines between arms, the incremental cost-benefit ratio (<ref>). For a given budget level, when choosing between selecting an arm for unit i or j, the (unit, arm) with the largest tangent slope is optimal. The following theorem formalizes this intuition by characterizing the optimal stochastic policy at a given budget level B, in terms of thresholding of the distribution of incremental cost-benefit ratios.
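The following sketch computes, for a single unit, the arms on the upper-left convex hull of the estimated (cost, reward) points and their incremental cost-benefit ratios ρ; it is a simplified illustration that assumes strictly positive and distinct estimated costs.

```python
import numpy as np

def upper_left_hull(costs, rewards):
    """Return (arm index, rho) pairs along the upper-left convex hull for one unit.

    costs, rewards: arrays of estimated C_k(x) and tau_k(x) for arms k = 1..K,
    with the control arm implicitly at the origin (cost 0, reward 0).
    """
    hull = [(-1, 0.0, 0.0)]                 # (arm, cost, reward); -1 denotes the control arm
    for k in np.argsort(costs):
        c, r = float(costs[k]), float(rewards[k])
        if r <= hull[-1][2]:
            continue                        # dominated: costs more but gains no more
        # Pop previous points that fall below the chord to the new point (keep concavity).
        while len(hull) >= 2:
            _, c1, r1 = hull[-1]
            _, c0, r0 = hull[-2]
            if (r - r0) * (c1 - c0) >= (r1 - r0) * (c - c0):
                hull.pop()
            else:
                break
        hull.append((k, c, r))
    # Incremental cost-benefit ratios, decreasing along the hull by construction.
    return [(k, (r - hull[i][2]) / (c - hull[i][1]))
            for i, (k, c, r) in enumerate(hull[1:])]
```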
Under Assumption <ref>, there exists an optimal (stochastic) policy π_B^* that admits the following characterization: There are constants λ_B ∈ℝ and c_B ∈ [0,1] such that
π^*_B,k_j(x)(x) =
1 if ρ_k_j(x)(x) > λ_B > ρ_k_j+1(x)(x),
c_B if ρ_k_j(x)(x) = λ_B,
1-c_B if ρ_k_j+1(x)(x) = λ_B,
0 otherwise.
For generic distributions where X has continuous support, ℙ[ρ_k_j(x)(x) = λ] = 0 for all λ > 0, and so the optimal policy will almost surely be integer-valued.
§ THE QINI CURVE FOR MULTI-ARMED POLICIES
Section <ref> provides a characterization that maps a budget B and the population quantities τ(X_i), C(X_i) to an optimal policy π_B^*(X_i). Given an independent and identically distributed random sample from this population, we can obtain, through appropriate estimation methods, estimates of the functions τ̂(·) and C(·). We refer to the sample used to obtain these estimates as the training sample. These estimates induce a policy:
Let τ̂(·) and C(·) be the estimates of the conditional average treatment effect and cost functions obtained on a training sample. The induced policy π_B is the policy that solves
π_B = argmax_π{𝔼[⟨π(X_i), τ̂(X_i) ⟩] : 𝔼[⟨π(X_i), C(X_i) ⟩] ≤ B },
i.e., we are solving (<ref>) but replacing the population quantities τ(·) and C(·) with the estimates τ̂(·) and C(·).
As a metric to evaluate treatment allocation according to an induced policy, we define the Qini curve:
Given a family of policies π_B indexed by (τ̂, C), the Qini curve is the curve that plots the function Q(B) = V(π_B), B ∈ (0, B_max].
The challenge now is, once we have a test sample of independent and identically distributed random sample from the population, how do we form estimates of Q(B)? To keep concepts clear we define the empirical induced policy on the test set:
Consider n independently and identically distributed test samples from the population. Let τ̂(·) and C(·) be the estimates of the conditional average treatment effect and cost functions obtained from a training sample. The test set empirical induced policy π̂_B is the policy that solves
π̂_B = argmax_π{1/n∑_i=1^n⟨π(X_i), τ̂(X_i) ⟩ : 1/n∑_i=1^n⟨π(X_i), C(X_i) ⟩≤ B },
i.e., we are solving (<ref>) over an empirical test sample indexed by units i=1… n.
In order to form an estimate of Q(B) on a test sample, there are three subsequent challenges we need to address: how to handle the budget constraint, how to efficiently express π̂_B, and finally, how to estimate the policy value of π_B. The first issue, we address by satisfying the budget in expectation on the test set as in Definition <ref>.
Expressing π̂_B on the test set. The optimization problem in (<ref>) has a linear program formulation that takes the following form,
max_π_B 1/n∑_i=1^n∑_k=1^Kπ_k(X_i) τ̂_k(X_i)
s.t. 1/n∑_i=1^n∑_k=1^Kπ_k(X_i) C_k(X_i) ≤ B,
∑_k=1^Kπ_k(X_i) ≤ 1, i = 1 … n,
π_k(X_i) ≥ 0, k=1… K, i = 1 … n.
The direct approach of solving (<ref>) via generic LP-solvers is computationally infeasible as this would involve computing a large collection of linear programs with nK choice variables, one for each budget constraint B ∈ (0, B_max]. The feasible approach is to instead directly compute the path of solutions {π̂_B}_B → 0^B_max via an algorithm tailored to the structure (<ref>) embeds. To this end, the characterization of the optimal policy in Theorem <ref> as a thresholding rule of incremental cost-benefit ratios ρ is promising as it suggests the problem can be reduced to a single-dimensional fractional knapsack problem (with some additional bookkeeping). This is exactly the approach taken by <cit.>, to solve (<ref>) via sorting the incremental cost-benefit ratios <cit.>.[Faster algorithms for the LP-relaxation of the multiple-choice knapsack problem exits, <cit.> derive linear-time solutions for a fixed budget level, but these are harder to adapt to a path algorithm.] Figure <ref> illustrates how ρ determines a solution. The vertical axis shows the incremental cost-benefit ratios for each unit's arm on the convex hull (with units indexed by the horizontal axis). A solution to (<ref>) is given by a particular threshold λ on the vertical axis and determines the optimal allocation through a planar separation of unit-arm pairs. A limitation of the algorithm in <cit.> is that it solves (<ref>) at only a single budget level B, as determined by a single planar separation. In order to adapt this algorithm to deliver a path of solutions over budget levels, we can make use of a priority queue ordered by decreasing ρ that acts as a construction that keeps track of which (unit, arm) enters the active set of the solution path, as we lower λ in Figure <ref> in accordance with the budget B we are tracing out.
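As an illustration of the path computation just described, here is a hedged Python sketch (ours, not the accompanying software) that uses a max-priority queue of incremental cost-benefit ratios to trace out the allocation sequence. Here `hulls[i]` is assumed to hold unit i's hull arms as (effect, cost) pairs sorted by increasing cost, the budget is measured as average spend per unit, the gain is tracked with the effect estimates for simplicity (in practice it is re-evaluated with the scores Γ̂_i), and the fractional last step is omitted.

import heapq

def qini_path(hulls, B_max):
    n = len(hulls)
    heap = []                              # entries: (-rho, unit, hull index)
    for i, hull in enumerate(hulls):
        if hull:                           # an empty hull means the unit is never treated
            tau0, c0 = hull[0]
            heapq.heappush(heap, (-tau0 / c0, i, 0))
    assigned = [None] * n                  # current (effect, cost) per unit
    spend, gain, path = 0.0, 0.0, []
    while heap:
        neg_rho, i, j = heapq.heappop(heap)
        tau_j, c_j = hulls[i][j]
        prev_tau, prev_c = assigned[i] if assigned[i] is not None else (0.0, 0.0)
        if spend + (c_j - prev_c) / n > B_max:
            break                          # budget exhausted (fractional completion omitted)
        spend += (c_j - prev_c) / n
        gain += (tau_j - prev_tau) / n
        assigned[i] = (tau_j, c_j)
        path.append((spend, gain, i, j))
        if j + 1 < len(hulls[i]):          # queue the next possible upgrade for this unit
            tau_nxt, c_nxt = hulls[i][j + 1]
            heapq.heappush(heap, (-(tau_nxt - tau_j) / (c_nxt - c_j), i, j + 1))
    return path                            # one (spend, gain, unit, arm) record per step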
Estimating the value of π_B. Now that we have a promising strategy to obtain the estimated path {π̂_B}_B → 0^B_max, how do we estimate its value? We show in Section <ref> that the approximation error of the empirical optimization in (<ref>) is asymptotically linear with zero mean. This means we can leverage standard policy evaluation arguments for this component. Thus, with a suitable construction Γ̂_i that satisfies 𝔼[Γ̂_i | X_i] ≈ τ(X_i), policy evaluation arguments motivate forming an estimate of Q(B) with the plug-in construction[Note that we are using the empirical induced policy π̂_B (Definition <ref>) to estimate the value of the population induced policy π_B (Definition <ref>). Theorem <ref> verifies the validity of this approach.]
Q̂(B) = V̂(π_B) = 1/n∑_i=1^n⟨π̂_B(X_i), Γ̂_i ⟩,
where Γ̂_i could be obtained with, in the case of known treatment randomization probabilities, inverse propensity weighting <cit.>:
Γ̂_i,k = 1(W_i=k)Y_i/ℙ[W_i=k] - 1(W_i=0)Y_i/ℙ[W_i=0].
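A direct transcription of these scores (ours; array names are hypothetical) for known randomization probabilities p[k] = ℙ[W_i = k]:

import numpy as np

def ipw_scores(Y, W, p, K):
    # Gamma[i, k-1] = 1(W_i = k) Y_i / p[k] - 1(W_i = 0) Y_i / p[0], k = 1..K.
    Gamma = np.zeros((len(Y), K))
    control = (W == 0) * Y / p[0]
    for k in range(1, K + 1):
        Gamma[:, k - 1] = (W == k) * Y / p[k] - control
    return Gamma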
In the case of a treatment assignment under unconfoundedness, the scores Γ̂_i can also be constructed via augmented inverse propensity weighting <cit.>, which relies on nuisance estimates in the form of propensity scores e_k(x) = ℙ[W_i = k | X_i = x] and conditional response surfaces μ_W_i(x) = 𝔼[Y_i(W_i) | X_i = x]. In order to construct this score, these components need to be estimated on the test set data.
To ensure that these estimated components are independent of the outcome for each unit, a popular approach is to employ cross-fitting <cit.> where the i-th unit's estimate is obtained without using that unit for estimation, for example via K-fold estimation. The multi-armed score then takes the following form <cit.>
Γ̂_i =
τ̂^-q(i)(X_i) +
(Y_i - μ̂_W_i^-q(i)(X_i)) ( 1_W_i/ê^-q(i)_W_i(X_i) -
1 ·1(W_i=0)/ê^-q(i)_0(X_i)),
where the superscript -q(i) denotes fitting using the data excluding the fold X_i belongs to, 1_W_i denotes a vector with 1 at the W_i-th coordinate and 1 denotes a vector of all ones. This approach for evaluation can yield an efficiency gain over inverse-propensity weighting (see <cit.> for a discussion).
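The cross-fitted score above can be assembled as follows; this is a sketch under the assumption that some cross-fitting routine has already produced out-of-fold nuisance estimates mu_hat[i, k] ≈ μ_k(X_i) and e_hat[i, k] ≈ e_k(X_i) for k = 0..K, and tau_hat[i, k-1] ≈ τ_k(X_i) for k = 1..K (names are ours, not the grf interface).

import numpy as np

def aipw_scores(Y, W, tau_hat, mu_hat, e_hat):
    n, K = tau_hat.shape
    Gamma = np.zeros((n, K))
    resid = Y - mu_hat[np.arange(n), W]                # Y_i - mu_hat_{W_i}(X_i)
    control_term = (W == 0) * resid / e_hat[:, 0]      # the 1(W_i = 0) / e_0 part
    for k in range(1, K + 1):
        arm_term = (W == k) * resid / e_hat[:, k]      # the 1_{W_i} / e_{W_i} part
        Gamma[:, k - 1] = tau_hat[:, k - 1] + arm_term - control_term
    return Gamma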
Computing the solution path and values. With all the pieces needed to estimate Q(B) in place, Algorithm <ref> outlines pseudo-code for all the components needed to compute the Qini curve for a multi-armed policy, starting with estimating conditional average treatment effects and costs on a training set. With these, and suitable evaluation scores in place, Algorithm <ref> formalizes the intuition behind Figure <ref> with pseudo-code for computing the induced multi-arm policy and value up to some maximum budget level B_max.
After a reduction to convex hulls, Algorithm <ref> starts by adding each unit's first arm on the convex hull to a priority queue ordered by decreasing ρ̂. The first unit assigned is the unit on top of this queue (red top dot in Figure <ref>). If this unit has remaining arms on its convex hull (i.e., there are arms below the unit's initial allocation in Figure <ref>), then this subsequent arm is added to the queue with priority equal to its incremental cost-benefit ratio. The subsequent assignments might either be upgrades, in which case we move to a costlier arm lower on the vertical plane or a new unit allocation. The exact sequence of upgrade-or-allocate-new-unit decisions is dictated by the priority queue order ρ̂. The time complexity of this algorithm is log-linear in nK, and to give an impression of the practical performance of using this as an evaluation metric, for a sample size of one million, and five treatment arms, our optimized open-source implementation computes the full solution path in around 1.5 seconds on a standard modern laptop.
Depending on the value of B_max, the treatment allocation for the last unit to be assigned might not be integer-valued. By Theorem <ref> there are two such cases. The first case is if the i-th unit has previously not been assigned an arm, and there is not sufficient budget left to allocate the first arm on the convex hull. The second case is if the i-th unit has previously been assigned an arm, but there is not sufficient budget left to upgrade the unit to the next arm on the convex hull. In these cases, we may think of assigning the i-th unit an arm with a certain probability, as given by the fractional allocation c_B. In our intended setup, treatment is assigned to a large number N of units matching the covariate profile of X_i, a fractional solution would simply mean that, in the second case, we assign one arm to c_B N units, and the other arm to the remaining (1-c_B)N units.
Finally, while Algorithm <ref> does not explicitly construct and return the vectors π̂_B(X_i), these are implicitly given by the sequence of (unit, arm) allocations and can be efficiently constructed ex-post, which is the approach taken in the accompanying software.
§.§ A Central Limit Theorem for the Qini Curve
In order to employ the Qini curve for decision-making, we need to form the uncertainty estimate of V(π_B), a point on the curve (we consider functionals such as area under the curves as an interesting extension for future work). In this section, we provide an asymptotic linearity theorem for the policy value estimate, which enables confidence intervals and hypothesis tests via resampling-based methods <cit.>. To this end, it is helpful to introduce some new notations. By the same logic of Theorem <ref>, given conditional average treatment effect and cost function estimates τ̂(·) and C(·), we can characterize the induced policy in terms of the threshold λ and the cost-benefit ratios ρ̂. Throughout this section, we assume that the set of covariates having the incremental cost-benefit ratio exactly λ has measure 0. We can then express the induced policy with varying levels of budget as a family of policies parameterized by the threshold λ,
π_k_j(x)(x;λ) = 1(ρ̂_k_j(x)(x) > λ > ρ̂_k_j+1(x)(x)).
With this new notation, the induced policy with budget B is π_B(x;λ) = π(x;λ) where λ solves 𝔼[⟨π(X_i;λ), C(X_i) ⟩] = B. We can then express the gain and cost as
V(λ; τ)= 𝔼[⟨π(X_i; λ), τ(X_i) ⟩],
Ψ(λ; C)= 𝔼[⟨π(X_i; λ), C(X_i) ⟩].
The goal of introducing this notation is to parameterize the policy by the scalar threshold λ, so that the relevant objects of interest are also functions of λ. As mentioned in Section <ref>, there are two components needed to form an estimate of a point on the Qini curve, an empirical induced policy and an evaluation score Γ_i. With the definitions given in the previous paragraph, we have an exact expression for the first component, via an estimated threshold λ̂. This yields a representation of the estimated policy value (<ref>) via an equivalent formulation in terms of λ̂,
V̂(π_B) = 1/n∑_i=1^n⟨π(X_i; λ̂), Γ̂_i ⟩.
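The plug-in estimate is a single averaged inner product; a one-line sketch (ours), with pi_hat the n × K matrix of empirical policy assignments and Gamma the matching score matrix:

import numpy as np

def plug_in_value(pi_hat, Gamma):
    return np.mean(np.sum(pi_hat * Gamma, axis=1))     # (1/n) sum_i <pi_hat(X_i), Gamma_i>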
Our goal is to quantify the uncertainty in estimating (<ref>) through the sampling variability of this plug-in estimate. To derive an inference strategy, note that this construction has two levels of approximation: using an estimated threshold λ̂, arising from solving for the empirical induced policy π̂_B via empirical optimization on the test set, and using an estimated score Γ̂_i constructed on the test set. If we were using a fixed deterministic λ, the asymptotic property follows from the classical doubly robust argument <cit.>. Our idea is to argue its asymptotic linearity by first proving asymptotic linearity of the threshold λ̂. Then, we combine with the standard doubly robust argument to prove that V̂(π_B) is asymptotically linear. We first make some standard identifying assumptions on the population,
[Overlap]
There exists η > 0 such that e_k(x) > η for all x and k, where e_k(x) = ℙ[W_i = k | X_i = x].
[Unconfoundedness]
Y_i(0), …, Y_i(K) ⊥⊥ W_i | X_i.
Now, to argue about λ̂, we note that we can view λ̂ as an approximate Z-estimator, assuming the empirical threshold approximately makes the cost equal to the budget B on the test set. The following theorem details the argument and proves our result with some further assumptions (the overall architecture outlining where the various estimates come from is in Algorithm <ref>).
Under Assumption <ref>, <ref>, <ref>, let C(·) and τ̂(·) be any estimates of the cost and CATE functions, fitted on an independent training set. Suppose the function Ψ(·; C) is continuously differentiable, and all potential outcomes are bounded. Let π_B be the induced policy with respect to C(·) and τ̂(·), i.e. π_B(x) = π(x;λ) where λ solves the following equation
𝔼[⟨π(X_i;λ), C(X_i) ⟩] = B.
Let π̂_B be the empirical induced policy obtained on a test sample of n points {X_i, W_i, Y_i}_i=1^n, i.e. π̂_B(x) = π(x; λ̂) where
1/n∑_i=1^n⟨π(X_i; λ̂), C(X_i) ⟩ - B = o_p(n^-1/2).
Let a_i be the arm assigned to unit X_i, and assume further that ρ_a_i(X_i) has continuous density in a neighborhood of λ for any i. Assume that we construct doubly robust scores Γ̂_i with cross-fitting on the test set using (<ref>), with the following assumptions on the estimates of the nuisance components μ and e:
* The estimates are sup-norm consistent.
* The estimates satisfy the following error bounds
𝔼[( μ̂_k^-q(i)(X_i) - μ_k(X_i) )^2] ·𝔼[( ê_k^-q(i)(X_i) - e_k(X_i) )^2] = o(1)/n, k=0,…, K.
Let ψ_λ(x) = ⟨π(x; λ), C(x) ⟩. Then V̂(π_B) is asymptotically linear, with the following expansion
n^1/2(V̂(π_B)- V(π_B)) = n^-1/2∑_i=1^n (⟨π(X_i;λ), Γ_i⟩ - V'(λ; τ)/Ψ'(λ; C)ψ_λ(X_i) - V(π_B) )+ o_p(1)
where Γ is the oracle doubly robust score:
Γ_i =
τ(X_i) +
(Y_i - μ_W_i(X_i)) ( 1_W_i/e_W_i(X_i) -
1 ·1(W_i=0)/e_0(X_i)).
In Theorem <ref> we condition on the training set used to obtain the conditional average treatment effect and cost functions, and consider the randomness on the test set used to evaluate the induced policies. The asymptotic linearity of V(π_B) justifies bootstrap-based inference of the cost-curves, in particular, it makes half-sampling a suitable choice for resampling Algorithm <ref> <cit.>. In particular, to compute one single bootstrap replicate, rerun Algorithm <ref> on a random half-sample of units to obtain a path of policy value estimates, then interpolate this to the grid of spend values on the path computed for the full sample. As only half of the samples are passed to Algorithm <ref>, the evaluation score Γ_j for the j-th drawn unit is given a weight equal to 2.
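A sketch of one way to implement the half-sampling replicates described above (ours; `run_path` stands in for a rerun of Algorithm <ref> on a subset of units, with the scores of the drawn units weighted by 2):

import numpy as np

def half_sample_replicates(run_path, spend_grid, n, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    reps = np.empty((n_boot, len(spend_grid)))
    for b in range(n_boot):
        idx = rng.choice(n, size=n // 2, replace=False)   # random half-sample of units
        spend_b, gain_b = run_path(idx)                   # replicate path on the half-sample
        reps[b] = np.interp(spend_grid, spend_b, gain_b)  # align to the full-sample spend grid
    return reps   # pointwise standard deviations give bootstrap standard errors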
§ SIMULATION EXPERIMENT
There are a wide variety of strategies available to estimate conditional average treatment effects τ(X_i) that can be extended to the multi-armed setting. Some popular and flexible approaches are so-called meta-learners that adopt machine learning algorithms aimed at prediction, to instead target a counterfactual difference, examples include <cit.>, and <cit.>. These methods target the quantity Y_i(1) - Y_i(0) | X_i = x, where Y_i(1) is the potential outcome in the treatment arm and Y_i(0) the potential outcome in the control arm. In order to estimate multi-armed treatment effects with these strategies, one can employ a one versus-all encoding, defining W_i to be 1 if the k-th arm is assigned, and 0 otherwise. Another approach is to target the vector-valued parameter τ(X_i) directly. In the empirical illustrations, we use a forest-based <cit.> multi-armed treatment effect estimator based on the R-learner <cit.>, available in the package <cit.> via the function , which has built-in functionality to produce the multi-armed evaluation scores (<ref>). This approach estimates τ(X_i) directly using the following forest-weighted loss
τ̂(x) = argmin_τ{∑_i=1^nα_i (x) ( Y_i - μ̂^(-i)(X_i) - c(x) -
⟨ 1_W_i - ê^(-i)(X_i), τ(X_i) ⟩)^2 },
where μ̂ are estimates of the conditional mean function 𝔼[Y_i | X_i = x], ê are estimates of the treatment propensities ℙ[W_i = k | X_i = x], and the superscript (-i) indicates that the estimates for the i-th observation are obtained without using unit i for training. The forest weights α(x) are adaptive nearest neighbor weights obtained by a generalized random forest <cit.> searching for heterogeneity in the vector-valued target parameter τ(X_i).
As a synthetic illustration, we adapt the three-armed data generating process in <cit.>, treating the first arm as a zero-cost control, with covariates X_i identically and independently distributed on [0, 1]^10, and potential outcomes distributed according to
𝔼[Y_i(w_i) | X_i] = (3 - w_i)1_0(X_i) + (2 - 0.5|w_i - 1|)1_1(X_i) + 1.5(w_i - 1)1_2(X_i),
where 1_0(X_i), 1_1(X_i), 1_2(X_i) indicate which region a unit belongs to:
1_0(X_i) = 1(X_i5≤ 0.6) 1(X_i7≥ 0.35),
1_1(X_i) = 1(X_i5^2/0.6^2 + X_i7^2/0.35^2 < 1) + 1((X_i5-1)^2/0.4^2 + (X_i7-1)^2/0.35^2 < 1),
1_2(X_i) = 1 - 1_0(X_i) - 1_1(X_i).
We let the assignment probabilities for the different arms be the same,
ℙ[W_i=0 | X_i] = 1/3, ℙ[W_i=1 | X_i] = 1/3, ℙ[W_i=2 | X_i] = 1/3.
We treat the cost for the two treatment arms as known and equal to a unit's observable pre-treatment covariates C_i(1) = X_i1, C_i(2) = 2X_i2. Outcomes are observed with noise N(0, 4).
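For concreteness, a small Python sketch of this data-generating process (ours; a literal transcription of the displayed regions, means, assignment probabilities, and costs):

import numpy as np

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, 10))
    x5, x7 = X[:, 4], X[:, 6]                         # X_{i5}, X_{i7} in 1-indexed notation
    ind0 = ((x5 <= 0.6) & (x7 >= 0.35)).astype(float)
    ind1 = (((x5 / 0.6) ** 2 + (x7 / 0.35) ** 2 < 1).astype(float)
            + (((x5 - 1) / 0.4) ** 2 + ((x7 - 1) / 0.35) ** 2 < 1).astype(float))
    ind2 = 1.0 - ind0 - ind1
    W = rng.integers(0, 3, size=n)                    # arms 0, 1, 2 with probability 1/3 each
    mean = (3 - W) * ind0 + (2 - 0.5 * np.abs(W - 1)) * ind1 + 1.5 * (W - 1) * ind2
    Y = mean + rng.normal(scale=2.0, size=n)          # noise with variance 4
    cost = np.column_stack([X[:, 0], 2 * X[:, 1]])    # C_i(1) = X_{i1}, C_i(2) = 2 X_{i2}
    return X, W, Y, cost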
To study the practical inferential properties of points on the Qini curve for multiple arms, using flexible non-parametric estimators, we calculate coverage of 95% confidence intervals for Q(B). We first fix a τ̂(·) function estimated on a training set with n=10000. We consider ten points B = {0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.4, 0.45, 0.5} on the Qini curve, then on a test set with size n={1000, 2000, 5000, 10000} compute the policy π̂_B, estimate doubly robust scores Γ, then calculate coverage of the estimated Q(B) using bootstrapped standard errors. The results in Table <ref> show the mean empirical coverage of this procedure over 1000 Monte Carlo repetitions.
§ HYPOTHESIS TESTS FOR TREATMENT TARGETING STRATEGIES
Our proposed method can serve two practical use cases. A first use case is as a tool for practitioners to quantify how much benefit there is to treatment targeting. A second use case is as a tool for practitioners to quantify the benefit of employing more arms. When considering only a single treatment arm, analyzing the value of treatment targeting based on covariates is conceptually simple: it reduces to, for example, comparing outcomes for the treatment-targeted units to those of non-treated units. With more than one treatment arm the analysis becomes more complex, as this brings an additional dimension to the problem. Two natural questions to ask are: 1) how does the multi-armed policy compare against a policy that does not target based on covariates? and 2) what is the value of targeting with more treatment arms? Table <ref> gives an example of the different policy configurations available with three treatment arms. The traditional Qini curve for single treatment arms allows for comparisons between rows and columns in the first three rows. For example, using the simplified notation in Table <ref>, Q_1 - Q̅_1 is the value of optimally allocating arm 1 based on covariates vs. spending the same budget by allocating arm 1 to a random subset of units; and Q_1 - Q_2 is the value of targeting with arm 1 over arm 2, at a given budget level. Our proposed Qini curve extension to multiple arms facilitates policy value comparisons across all entries in Table <ref>, for example, Q_1,2,3 - Q_1 measures the value of adding two more treatment arms to the optimal arm selection mix.
The policies in the far right-hand side of Table <ref> indicate a policy that ignores covariates, i.e., a policy that assigns treatment without observing unit-specific characteristics. For a single treatment arm, this policy value is simply some fraction of the average treatment effect of that arm. For the policy that selects among many arms without using covariates, we need to take into account the average treatment effect and average costs of the K arms which motivates the following definition,
For a given budget B, the policy π̅_B which ignores covariates is the policy that solves the problem in Definition <ref> with only access to the average treatment effect estimates τ̅ = 𝔼[τ̂(X_i)] and average cost estimates C̅ = 𝔼[C(X_i)]. The Qini curve for this policy is the function Q̅(B) = V(π̅_B), B ∈ (0, B_max].
Intuitively, this policy collapses all the information from the X_i-specific convex hulls to the single convex hull traced out by τ̅ and C̅. For a given budget, it assigns an arbitrary fraction of the population to a convex combination of consecutive arms on the hull. Computing the policy value of this allocation is straightforward by using Algorithm <ref> on the single convex hull traced out by (τ̅, C̅) and evaluating it on the averaged scores Γ̅ := 1/n ∑_i Γ̂_i.
To conduct hypothesis tests for the value of different targeting strategies, we can employ the central limit theorem in Section <ref> to construct asymptotically valid confidence intervals for the difference in policy values:
(Value of targeting).
A 1-α confidence interval for the difference Q(B) - Q̅(B) is
Q(B) - Q̅(B) ± z_1-α/2σ̂,
where z are the standard normal quantiles and σ̂ the standard deviation of the difference in bootstrap estimates of Q(B) and Q̅(B).
Figure <ref> provides a stylized example of what Qini curves could look like in the scenario where there is a benefit to targeting based on subject characteristics and there are 3 treatment arms (plus a control) available. For a fixed spend point and policy, the quantity Q(B) - Q̅(B) measures the vertical difference between the red and one of the remaining lines, which signifies a baseline policy using all or only a single arm without targeting. Since this distance is positive, it signifies a benefit of targeting based on subject characteristics.
To assess the value of targeting with an optimal combination of arms over using only one or a smaller subset of arms, we can employ a similar pairwise test:
(Value of treatment arm).
Let Q(B) be the Qini curve for the policy π_B using all available arms, and let Q_k(B) be the Qini curve for the policy π^k_B using only the k-th arm (or a subset of all available arms, as denoted by the subscripts in Table <ref>). A 1-α confidence interval for the difference Q(B) - Q_k(B) is
Q(B) - Q_k(B) ± z_1-α/2σ̂,
where z are the standard normal quantiles and σ̂ the standard deviation of the difference in bootstrap estimates Q(B) and Q_k(B).
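Both corollaries amount to the same paired-bootstrap construction; a sketch (ours), assuming the two curves' bootstrap replicates were computed on the same half-samples at a fixed spend level:

import numpy as np
from scipy.stats import norm

def paired_difference_ci(est_a, est_b, reps_a, reps_b, alpha=0.05):
    diff = est_a - est_b                      # full-sample difference in Qini estimates
    sigma = np.std(reps_a - reps_b, ddof=1)   # std of the paired bootstrap differences
    z = norm.ppf(1 - alpha / 2)
    return diff - z * sigma, diff + z * sigma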
Figure <ref> illustrates how the cost curves may look under the scenario where there, depending on budget, is a benefit to using an optimal combination of arms over just a single arm. For example, at B=2 the difference Q(B) - Q_1(B) is the vertical difference between the red and blue line and indicates that optimally selecting among all available arms can yield an increase in gain of around 1.5 over only targeting with arm 1.
To verify the practical performance of the hypothesis test constructions in this Section, we revisit the simulation setup in Section <ref> and repeat the same exercise as done in Table <ref>, but for five different policy value comparisons with standard errors calculated via a paired bootstrap. The results in Table <ref> indicate these constructions can be justified in practice.
The natural area under the curve counterparts for metrics in this section would be the integrated difference. For example, given some chosen maximum budget B̅, the quantity ∫_0^B̅(Q(B) - Q_k(B)) dB would estimate the area between two curves in Figure <ref>. We consider this an interesting extension but leave the development of such a functional central limit theorem to future work.
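If one did want to report such an integrated difference, a simple trapezoidal approximation over a common spend grid suffices as a point estimate (a sketch, ours; the accompanying inference theory is the future work referred to above):

import numpy as np

def integrated_difference(spend_grid, q_all, q_k):
    return np.trapz(q_all - q_k, spend_grid)  # area between the two Qini curves up to the chosen budget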
§ APPLICATION: TREATMENT TARGETING FOR ELECTION TURNOUT
<cit.> conducts a multi-armed randomized controlled trial to study the social determinants of voter turnout in the 2006 US primary election, by mailing out letters of various forms. 180 002 households were randomly assigned one of K=4 treatment arms where arm 1 (“Civic”) tells the recipient to do their civic duty and vote. Arm 2 (“Hawthorne”) informs the recipient that their decision to vote or not is being monitored. Arm 3 (“Self”) informs the recipient about their and similar households' past voting history, and arm 4 (“Neighbors”) will let the recipient's neighbors know about their voting history. The control group receives no letter. The outcome of interest is whether a person in the household votes in the upcoming primary election. <cit.> finds that sending out the “Neighbors” letter is the most effective at increasing voter turnout, with little evidence of heterogeneity.
This treatment arm choice is intrusive, and to characterize the tradeoff between increases in voter turnout and incurred "intrusions", and to investigate whether targeting with less aggressive options might give similar increases in turnouts, we utilize Qini curves. The publicly available dataset from <cit.> includes variables that are associated with election turnout, and that we use to train a τ̂(·) function using the estimator described in Section <ref>. These covariates include age, year of birth, gender, and household size, as well as six binary variables indicating if the subject voted in the general and primary elections in the years 2000, 2002, and 2004. To evaluate policies we hold out a random half-sample of the households, then use inverse-propensity weighting with the known randomization probabilities ℙ[W_i = k] = 1/9 (k=1,…,4), where the control arm has assignment probability 5/9. To incorporate costs, there are many modeling approaches one might take. In this example, we denominate costs in "intrusion units" where treatment arm 1 is least intrusive with C_i(1) = 1, then measure costs of the remaining arms as some multiple of this. If we assume the multiples C_i(2)=15, C_i(3)=30, C_i(4)=45, then we get Qini curves as shown in Figure <ref>, that can indicate a trade-off between aggressive treatment targeting and more innocuous options. For example, an optimal combination of the arms can at a budget level B=5 yield an increase in voter turnout of 1.9% (95% CI: [1.1, 2.8]) where only the least intrusive first arm ("Civic") would yield 0.9% (95% CI: [0.1, 1.8]). A paired test of the difference between the two policies at B=5 yields a 95% CI of [0.5, 1.6]. A closer look at the non-targeting baseline reveals that this benefit is not necessarily due to targeting the most receptive units.
Figure <ref> shows Qini curves using all arms together with the non-targeting baseline policy π̅_B, which in this case will only allocate between the non-intrusive arm 1 (“Civic”) and the most intrusive but effective arm 4 (“Neighbors”). At B=5, a random 91% of units are assigned arm 1 and the remaining 9% arm 4, for a gain that is practically the same as for the targeting policy using all arms (95% CI for the difference Q_1,2,3,4 - Q̅_1,2,3,4: [-1.0, 0.3]), suggesting that there is little benefit to treatment targeting based on heterogeneous treatment effects; rather, the available “intrusion budget” dictates which arm is best to assign uniformly. Recall that the confidence intervals here are conditional on the estimated τ̂(·), i.e. they reflect the sampling uncertainty arising from estimating an induced policy as well as evaluating this policy on the test set.
§ ACKNOWLEDGEMENT
We thank the Golub Capital Social Impact Lab at Stanford Graduate School of Business, and the Office of Naval Research (grants N00014-19-1-2468 and N00014-22-1-2668) for their financial support of this research. We are also grateful to Vitor Hadad for helpful feedback and to James Yang for helpful input on templatization in C++.
§ ALGORITHM DETAILS
§.§ Computing the Upper Left Convex Hull
The reduction to convex hulls in Algorithm <ref> in the function can be done using a variant of the Graham scan <cit.>. Consider treatment arms h, j, l sorted according to costs C_h(X_i) < C_j(X_i) < C_l(X_i). To construct the hull, start with the two least costly arms h and j added to the hull, then do a linear scan through the remaining arms in order of increasing cost and determine if the j-th arm should be kept or removed from the hull by checking if the slope (as defined in Figure <ref>) from j to l is larger than the slope from h to j. If the slope is larger, j is removed, otherwise, j is kept. If all elements of τ̂(X_i) are negative, the convex hull for that unit is defined to be empty.
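A compact sketch of this scan (ours, not the package code), assuming one unit's arms have distinct positive costs; arms with non-positive estimated effect are dropped up front, so the hull is empty when no arm has a positive effect:

def upper_left_hull(tau, cost):
    # Returns indices of arms on the upper-left convex hull, sorted by increasing cost.
    arms = sorted((k for k in range(len(tau)) if tau[k] > 0), key=lambda k: cost[k])
    hull = []
    for l in arms:
        while hull:
            j = hull[-1]
            h = hull[-2] if len(hull) >= 2 else None
            tau_h, cost_h = (tau[h], cost[h]) if h is not None else (0.0, 0.0)
            slope_hj = (tau[j] - tau_h) / (cost[j] - cost_h)
            slope_jl = (tau[l] - tau[j]) / (cost[l] - cost[j])
            if slope_jl >= slope_hj:
                hull.pop()        # j lies on or below the h -> l segment; drop it
            else:
                break
        hull.append(l)
    return hull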
§.§ Time Complexity of Algorithm <ref>
Given n test samples, the run time of computing the multi-armed policy path is O(nK log K + nK log(nK)). To see this, note that the worst-case run time of Algorithm <ref> occurs when for every unit each arm lies on the convex hull, and the budget exceeds the expected cost of the most costly arm, i.e. B_max > 𝔼[C_k_m_x(X_i)(X_i)]. The convex hulls then have total size nK, and since the budget constraint will never bind, a total of nK items have to be inserted into the priority queue, which takes time O(nK log(nK)). Computing the convex hull involves sorting each unit's cost in increasing order, which takes time O(K log K), and this has to be repeated n times, yielding the claimed run time.
§ PROOFS
§.§ Proof of Theorem <ref>
Assume X is a random draw from the covariate distribution and X_i are i.i.d. We first note that in our multi-armed case, the policy π is a vector and the expected cost can be written as
𝔼[C(π_B^*(X_i))] = 𝔼[∑_k=1^K π_k(X_i) C_k(X_i)]
Consider the following function of λ,
β(λ) = 𝔼[∑_j=1^m_X1(ρ_k_j(X)(X) > λ > ρ_k_j+1(X)(X))C_k_j(X)(X)]
By our assumption, we see it is a non-increasing function of λ. Let
η_B := inf{λ: β(λ) ≤ B}, λ_B = max{η_B, 0}
Then the policy (<ref>) could be rewritten as
π_B, k_j(x)^*(x) =
c_B if ρ_k_j(x)(x) = λ_B,
1-c_B if ρ_k_j+1(x)(x) = λ_B,
1 if ρ_k_j(x)(x) > λ_B > ρ_k_j+1(x)(x)
where
c_B =
0 if 𝔼[∑_j=1^m_X1(ρ_k_j(X)(X) = λ_B )C_k_j(X)(X)] = 0,
(B-𝔼[∑_j=1^m_X1(ρ_k_j(X)(X) > λ_B > ρ_k_j+1(X)(X))C_k_j(X)(X)])/𝔼[∑_j=1^m_X1(ρ_k_j(X)(X) = λ_B )C_k_j(X)(X)] if 𝔼[∑_j=1^m_X1(ρ_k_j(X)(X) = λ_B )C_k_j(X)(X)] > 0
Now we prove the above rule is in fact optimal. Let π'(x) denote any other stochastic treatment rule that satisfies the budget constraint. We want to argue
𝔼[∑_k=1^Kπ_k(X)τ_k(X)] ≥𝔼[∑_k=1^Kπ'_k(X)τ_k(X)]
To prove this, we have
𝔼[∑_k=1^Kπ_k(X)τ_k(X)] - 𝔼[∑_k=1^Kπ'_k(X)τ_k(X)]
= 𝔼[𝔼[∑_k=1^K (π_k(X)-π_k'(X))τ_k(X) | X]]
= ∫∑_j=1^K (π_k_j(x)(x) -π'_k_j(x)(x) )τ_k_j(x)(x) dP(x)
where we define k_m_x+1(x),…,k_K(x) to be any ordering of the arms not on the convex hull.
Now we have
∑_j=1^K (π_k_j(x)(x) -π'_k_j(x)(x) )τ_k_j(x)(x)
= ∑_j=1^K(π_k_j(x)(x) -π'_k_j(x)(x) ) ∑_l=1^j(τ_k_l(x)(x)-τ_k_l-1(x)(x))
= ∑_l=1^K (τ_k_l(x)(x)-τ_k_l-1(x)(x)) ∑_j=l^K(π_k_j(x)(x) -π'_k_j(x)(x))
= ∑_l=1^K (C_k_l(x)(x)-C_k_l-1(x)(x)) (τ_k_l(x)(x)-τ_k_l-1(x)(x))/(C_k_l(x)(x)-C_k_l-1(x)(x))∑_j=l^K(π_k_j(x)(x) -π'_k_j(x)(x))
= ∑_l=1^Kρ_k_l(x)(x)(C_k_l(x)(x)-C_k_l-1(x)(x)) ∑_j=l^K(π_k_j(x)(x) -π'_k_j(x)(x))
where we use the fact that we assume τ_0(x) = 0. Note that by our characterization of the optimal policy, there exists k ∈{1,…,K} such that, ρ_k_l(x)(x) ≥λ if l ≤ k. In these cases by the definition of our policy π, we either have ∑_j=l^Kπ_k_j(x)(x) = 1 ≥∑_j=l^Kπ'_k_j(x)(x) or ρ_k_l(x)(x) = λ. If l > k then ∑_j=l^Kπ_k_j(x)(x) = 0 ≤∑_j=l^Kπ'_k_j(x)(x) and ρ_k_l(x)(x) < λ. Combining the two cases we see
∑_j=1^K (π_k_j(x)(x) -π'_k_j(x)(x) )τ_k_j(x)(x)
≥∑_l=1^Kλ(C_k_l(x)(x)-C_k_l-1(x)(x)) ∑_j=l^K(π_k_j(x)(x) -π'_k_j(x)(x))
= λ∑_j=1^K C_k_j(x)(x) (π_k_j(x)(x) - π'_k_j(x)(x))
Hence, we have
𝔼[∑_k=1^Kπ_k(X)τ_k(X)] - 𝔼[∑_k=1^Kπ'_k(X)τ_k(X)] ≥λ𝔼[∑_k=1^K(π_k(X) -π'_k(X))C_k(X)]
Now we consider two cases: either λ > 0 or λ = 0. If λ > 0, the policy π consumes the entire budget while π' satisfies the budget constraint, so the right-hand side of (<ref>) is non-negative; if λ = 0, the right-hand side is zero. In either case the value difference is non-negative, so we are done.
§.§ Proof of Theorem <ref>
We proceed with three steps. First we argue that λ̂ is consistent, i.e. λ̂ →_p λ. Second, we argue that n^1/2(λ̂ - λ) is asymptotically linear. Finally, we argue that n^1/2(V̂(π_B) - V(π_B)) is asymptotically linear.
Step 1: λ̂ →_p λ.
We use Theorem 5.9 of <cit.>. We need to verify the uniform convergence of
Ψ_n(λ) - B = n^-1∑_i=1^n⟨π(X_i; λ), C(X_i) ⟩ - B
to Ψ(λ; C(·)) - B. We first prove a lemma.
Suppose g_1, g_2 and h are measurable functions from ℝ^d to ℝ such that for any x, |h(x)| ≤ M and g_1(x) > g_2(x), then the function class {f_λ(x):= 1(g_1(x) > λ > g_2(x))h(x), λ∈ [0, L]} is P-Donsker for any law P on 𝒳.
We note that
f_λ(x) = (1(g_1(x) > λ) - 1(g_2(x) > λ))h(x)
The indicator functions are a VC class hence Donsker and h(x) is uniformly bounded. Hence f_λ is also Donsker.
By Lemma <ref> and the fact that the finite sum of a Donsker class is also Donsker, we know ψ indexed by λ forms a Donsker class. In particular, it is Glivenko-Cantelli and the uniform convergence holds. Now we verify the second condition in the theorem. By our assumption, Ψ is continuously differentiable, and also by our definition of π and assumptions, we know Ψ is monotonically decreasing. In particular, it has a well-defined inverse. This verifies the second condition in the theorem. Finally, our λ̂ solves the estimating equation approximately, and by Theorem 5.9 of <cit.>, λ̂ is consistent.
Step 2: n^1/2(λ̂ - λ) is asymptotically linear.
We use Theorem 5.21 of <cit.>. To verify the convergence (5.22) in the proof, we use Lemma 19.24 and the following additional lemma.
Suppose λ̂ →_p λ, and f_λ is defined as in Lemma <ref>; then ‖f_λ̂ - f_λ‖_2^2 →_p 0.
We note by the dominated convergence theorem, if the sequence λ_n →λ almost surely, then ‖f_λ_n - f_λ‖_2^2 → 0 almost surely. Now fix a subsequence n_k; since λ̂_n_k →_p λ, we know there is a further subsequence n(m_k) such that λ̂_n(m_k)→λ almost surely. Then by the above argument, ‖f_λ̂_n(m_k) - f_λ‖_2^2 → 0 almost surely, which establishes the convergence in probability since every subsequence has a further subsequence that converges almost surely.
Since the function ψ is a finite sum of functions of the form f_λ, the above lemma also holds for ψ. By Lemma 19.24 of <cit.>, we know
𝔾_nψ_λ̂ - 𝔾_nψ_λ →_p 0.
To apply Theorem 5.21 we also need to show that Ψ(·;C) is differentiable at λ and the derivative is nonzero. To prove this, for simplicity, we assume there is only one action in addition to the control arm, then we have
∂Ψ(λ;C)/∂λ = p(λ) 𝔼[C_i(1) - C_i(0) |ρ_i = λ]
where p is the density function of the incremental ratio ρ_i. By our assumption on the density of ρ, we know this is greater than zero. Finally by our boundedness assumption, ψ_λ is L_2, by (<ref>), approximately solves the estimation equation by o_p(n^-1/2) and is consistent by step 1. By Theorem 5.21 of <cit.>, we have
n^1/2(-λ) = -1/Ψ'(λ;C)n^-1/2∑_i=1^nψ_λ(X_i) + o_p(1)
Step 3: n^1/2(V̂(π_B) - V(π_B)) is asymptotically linear.
Define V_n(λ; τ) = n^-1∑_i=1^n⟨π(X_i; λ), τ(X_i) ⟩ and recall V(·; τ) = 𝔼[⟨π(X_i; ·), τ(X_i) ⟩], V(π_B) = V(λ; τ); we have the following decomposition
n^1/2(V̂(π_B) - V(π_B)) = n^1/2(V̂(π_B) - V_n(λ̂; τ))
+ n^1/2(V_n(λ̂; τ) - V(λ̂; τ))
+ n^1/2(V(λ̂; τ) - V(π_B)).
We will deal with the three terms one by one. For (<ref>), we have
n^1/2(V̂(π_B) - V_n(λ̂; τ)) = n^-1/2∑_i=1^n ⟨π(X_i;λ̂), Γ̂_i - τ(X_i)⟩
= n^-1/2∑_i=1^n ⟨π(X_i;λ̂), Γ_i - τ(X_i)⟩ + o_p(1)
= n^-1/2∑_i=1^n ⟨π(X_i;λ), Γ_i - τ(X_i)⟩ + o_p(1)
where (<ref>) follows from the boundedness of π and the usual analysis on doubly robust scores which gives for any k,
n^-1/2∑_i=1^n (Γ̂_i,k - Γ_i,k) = o_p(1).
To get (<ref>), we note that we only need to prove
n^-1/2∑_i=1^n ⟨π(X_i;λ̂)-π(X_i, λ), Γ_i - τ(X_i)⟩ = o_p(1)
To this end, note that (<ref>) is mean 0 and we argue that the variance goes to 0.
𝔼[(n^-1/2∑_i=1^n ⟨π(X_i;λ̂)-π(X_i, λ), Γ_i - τ(X_i)⟩)^2]
= 𝔼[𝔼[(n^-1/2∑_i=1^n ⟨π(X_i;λ̂)-π(X_i, λ), Γ_i - τ(X_i)⟩)^2 | X_train, X_test]]
= 𝔼[n^-1∑_i=1^n𝔼[⟨π(X_i;λ̂)-π(X_i, λ), Γ_i - τ(X_i)⟩^2 | X_train, X_test]]
≤ C n^-1∑_i=1^n𝔼[‖π(X_i;λ̂)-π(X_i, λ)‖_2^2]
= C 𝔼[‖π(X_i;λ̂)-π(X_i, λ)‖_2^2]
Either by the dominated convergence theorem or Lemma <ref>, we know
𝔼[‖π(X_i;λ̂)-π(X_i, λ)‖_2^2] = o(1)
For (<ref>), we can use the same machinery (Lemma 19.24 of <cit.>) as in step 2 to argue that
n^1/2(V_n(λ̂;τ) - V(λ̂;τ)) = n^1/2(V_n(λ;τ) - V(λ;τ)) + o_p(1)
Finally for (<ref>), by differentiability of V and step 2, we can use the delta method to get
n^1/2(V(λ̂; τ) - V(λ; τ)) = n^1/2V'(λ;τ)(λ̂ - λ) + o_p(1)
Combining all three terms, we have the following
n^1/2(V̂(π_B) - V(π_B)) = n^-1/2∑_i=1^n ⟨π(X_i;λ), Γ_i - τ(X_i)⟩
+ n^-1/2∑_i=1^n (⟨π(X_i; λ), τ(X_i)⟩ - V(λ; τ))
-V'(λ; τ)/Ψ'(λ; C)n^-1/2∑_i=1^nψ_λ(X_i) + o_p(1)
= n^-1/2∑_i=1^n (⟨π(X_i;λ), Γ_i⟩ - V'(λ;τ)/Ψ'(λ;C)ψ_λ(X_i) - V(λ;τ) )+ o_p(1)
PRISMA-DFLLM: An Extension of PRISMA for Systematic Literature Reviews using Domain-specific Finetuned Large Language Models
Teo Susnjak
arXiv: http://arxiv.org/abs/2306.07299v1 (published 2023-06-11; categories: cs.CL, cs.AI)
With the proliferation of open-sourced Large Language Models (LLMs) and efficient finetuning techniques, we are on the cusp of the emergence of numerous domain-specific LLMs that have been finetuned for expertise across specialized fields and applications for which the current general-purpose LLMs are unsuitable. In academia, this technology has the potential to revolutionize the way we conduct systematic literature reviews (SLRs), access knowledge and generate new insights. This paper proposes an AI-enabled methodological framework that combines the power of LLMs with the rigorous reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). By finetuning LLMs on domain-specific academic papers that have been selected as a result of a rigorous SLR process, the proposed PRISMA-DFLLM (for Domain-specific Finetuned LLMs) reporting guidelines offer the potential to achieve greater efficiency, reusability and scalability, while also opening the potential for conducting incremental living systematic reviews with the aid of LLMs. Additionally, the proposed approach for leveraging LLMs for SLRs enables the dissemination of finetuned models, empowering researchers to accelerate advancements and democratize cutting-edge research. This paper presents the case for the feasibility of finetuned LLMs to support rigorous SLRs and the technical requirements for realizing this. This work then proposes the extended PRISMA-DFLLM checklist of reporting guidelines as well as the advantages, challenges, and potential implications of implementing PRISMA-DFLLM. Finally, a future research roadmap to develop this line of AI-enabled SLRs is presented, paving the way for a new era of evidence synthesis and knowledge discovery.
systematic literature reviews, living literature reviews, PRISMA, large language models, GPT, transfer learning, literature review automation, evidence synthesis, artificial intelligence
§ INTRODUCTION
The rapid expansion of academic literature across various fields presents a significant challenge for researchers seeking to perform evidence synthesis over the vast body of available knowledge <cit.> (refer to Figure <ref>). Systematic literature reviews (SLRs) have emerged as indispensable tools for evidence-based research, providing comprehensive overviews, synthesizing existing knowledge, and identifying gaps. However, the traditional manual approach to conducting SLRs is not only labor-intensive and resource-draining but also prone to biases. Furthermore, it represents a standalone piece of work that is not easily reusable, incrementally updated, or extended by other researchers as new literature emerges. Given the growing volume of literature, there is an urgent need for more efficient and scalable methods for conducting robust literature reviews and knowledge syntheses.
SLRs are rigorous research methodologies designed to identify, evaluate, and synthesize existing studies on a specific research question. They are highly valued for their structured and comprehensive approach, which aims to minimize bias and promote transparency and replicability. Reporting standards, such as the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement and its recent 2020 extensions <cit.>, provide evidence-based guidelines for achieving high-quality SLRs that maximize transparency and comprehensive reporting. The roadmap <cit.> it provides assists authors in accurately describing their research methodology, findings, and planned approach for review protocols. The increasing number of SLRs and PRISMA reviews (Figures <ref> and <ref>) underscores their utility and indicates a need for more efficient strategies to assist researchers.
While the PRISMA framework has greatly contributed to enhancing the transparency and reporting quality of systematic reviews, it is important to acknowledge certain limitations, difficulties, and constraints associated with its implementation.
Conforming to the PRISMA suggestions and conducting evidence syntheses demand a considerable investment of time, effort, and resources<cit.>. Researchers need to carry out comprehensive literature searches, apply strict study selection criteria, extract data, and synthesize findings, which can be a resource-intensive process. This poses challenges, particularly for researchers with limited time or funding. Furthermore, once the work is completed, continuous and incremental updates are often not carried out, and a new review in a given field is usually conducted several years later, even by the same team of authors. Subsequent reviews by the same teams only gain minor efficiencies from their previous work, as new research questions require each study to be revisited thoroughly once again.
Despite the rigorous reporting methodology outlined by PRISMA, like any study, systematic reviews are susceptible to bias, errors and selective reporting <cit.>. Publication bias, where studies with positive or statistically significant results are more likely to be published, can lead to an overestimation of treatment effects in medical contexts. Additionally, the selective inclusion of studies based on specific criteria can introduce bias and compromise the review's comprehensiveness. <cit.> note that PRISMA, being a reporting guideline for systematic reviews, is most valuable when consulted during the development of a review rather than as a mere checklist for journal submission. It assesses the completeness of reporting but does not evaluate the quality or performance of the review itself. Therefore, it cannot be assumed that strictly following PRISMA guidelines alone guarantees a rigorous systematic review.
Systematic reviews employing PRISMA guidelines aim to objectively synthesise available evidence. However, the interpretation of the evidence and drawing accurate conclusions can be complex <cit.>.
Additionally, <cit.> note that as both research methodologies and technological advancements continue to evolve rapidly, the PRISMA framework needs to remain up to date with these changes to ensure its relevance and applicability. The authors state that regular updates and adaptations to incorporate emerging methods, such as machine learning or network meta-analyses, are necessary to address the evolving needs of systematic reviews. Given the enormously disruptive effect that the recent advancements and releases of generative AI technologies have had both on academia and beyond, the time has come to consider how PRISMA and systematic literature reviews can be empowered by AI to enable and accelerate future research.
§.§ Living Systematic Reviews
Living systematic reviews (LSRs) are a relatively new approach to literature reviews that are designed to provide a more dynamic and up-to-date synthesis of evidence. They are particularly useful for topics where the evidence base is rapidly evolving, and frequent updates are likely to result in changes in effect size or direction. This makes them highly relevant for policy requirements that demand regular updates due to shifting knowledge needs <cit.>.
In an LSR, the literature search and data extraction processes are continually updated to incorporate new evidence as it becomes available. This contrasts with traditional systematic reviews, which provide a snapshot of the evidence at a specific point in time. LSRs, therefore, offer a more current and comprehensive overview of the evidence base, which can be particularly valuable in fast-moving research fields or in response to urgent policy or practice decisions <cit.> requiring new tools and technologies to enable this. The importance of LSRs is increasingly being recognized in the research community, where they are regarded as most valuable in fields where there is a high level of uncertainty, and new research is likely to influence the conclusions of the review <cit.>. For instance, the Cochrane Collaboration, a global network of researchers known for producing high-quality systematic reviews, has begun to explore the concept of LSRs in response to the rapidly evolving evidence base in many areas of health care <cit.>. Since conducting an LSR is a resource-intensive undertaking that requires ongoing search and screening of literature, continuous data extraction and analysis, and regular updates to the review manuscript, entirely new approaches, tools and technologies are needed to enable this.
§.§ AI and Academic Research Automation
Recent machine learning advancements in Large Language Model (LLM) developments, and specifically with GPT-class (Generative Pre-trained Transformer) models such as GPT-3.5 and GPT-4 from OpenAI <cit.>, have demonstrated unprecedented AI capabilities in natural language understanding and generation of human-like text. These models have exhibited an unprecedented ability to generate human-like text, demonstrating a profound understanding of academic literature across a diverse array of fields. As these models continue to evolve, they are becoming increasingly adept at mitigating the phenomenon of hallucination[In the context of LLMs, hallucination refers to the generation of outputs or responses that are incorrect, misleading, or unrelated to the given input or prompt.], thereby enhancing their reliability and potential to assist in academic research where accuracy is paramount. The integration of plugins and web capabilities has further augmented these technologies, enabling them to access and incorporate the latest research findings.
However, despite these advancements, the most capable and publicly accessible LLM models remain general-purpose and do not yet possess sufficiently specialized and in-depth knowledge across a wide range of disciplines to support the conduct of an SLR. The finetuning of the publicly accessible models (particularly GPT models) for domain-specific knowledge required for SLRs has not been possible, primarily due to the proprietary and confidential nature of the model parameters for these LLMs. Moreover, the computational demands associated with finetuning models comprising billions of parameters for new task capabilities and new knowledge have also, until recently, been prohibitively expensive <cit.>.
However, the landscape is rapidly changing with the recent release of several large open-sourced LLMs (examples include Falcon, MosaicML, and LLaMA <cit.>) and the development of techniques that facilitate the finetuning (for example Low-Rank Adaptation (LoRA) <cit.> and Quantized LoRA <cit.>) of LLMs on modest computational and memory resources without compromising accuracy. Work by <cit.> has shown that QLoRA can perform as well as classical full-model finetuning, demonstrating this on the largest publicly available models while performing the finetuning on a single GPU, marking a significant shift in the accessibility of LLM finetuning. Thus, these advancements pave the way for the natural evolution towards the proliferation of finetuned domain-specific LLMs across research teams and industries, enhancing these models' expert capabilities and potentially revolutionising how we approach and conduct academic research.
§.§ Contribution
Taking advantage of the recent technological progress, this paper proposes both a novel methodology and an AI-integrated PRISMA framework for domain-finetuned LLMs (PRISMA-DFLLM), which combines the power of generative AI language models with the rigorous reporting guidelines of PRISMA. The aim of the proposed reporting extension is to offer effective reporting guidelines for LLM-based SLRs and foster the development of domain-specific LLMs. This will expedite and broaden the reach of research, especially with the growing number of publications and the support of generative AI technologies, ensuring scalability in the research environment.
The main focus of this work is to improve the process of conducting SLRs by addressing the challenges associated with manual methods. The aim is to introduce a more efficient and scalable approach to conducting SLRs, LSRs and other forms of knowledge syntheses.
Additionally, this work proposes a research roadmap for developing LLMs that can be finetuned to support academic research. The roadmap outlines key steps such as selecting domain-specific knowledge, adjusting model parameters, and evaluating model performance. The paper also identifies future research questions that need to be explored, such as how to integrate domain-specific knowledge into LLMs effectively and how to ensure the reliability and validity of finetuned LLMs.
Finally, this study delves into the benefits and obstacles of developing domain-specific LLMs for research purposes. On one hand, these models can potentially revolutionize how we approach and conduct academic research by automating the literature review process, improving research scalability, and generating unique insights that conventional methods may not uncover. On the other hand, there are a few challenges that need addressing, such as the confidential nature of LLM parameters, the computational demands associated with refining large models, and the importance of thorough evaluation to verify the accuracy and dependability of refined models.
§ BACKGROUND
Over the past few decades, several initiatives have aimed to enhance the quality and transparency of reporting in meta-analyses and systematic reviews. Notable reporting standards and guidelines include the Quality of Reporting of Meta-analyses (QUOROM) <cit.> Statement, the Meta-analyses Of Observational Studies in Epidemiology (MOOSE) <cit.> guidelines which were superseded by PRISMA and its recent update PRISMA 2020. The QUOROM Statement, published in 1999, introduced a checklist for meta-analysis reports, covering aspects such as eligibility criteria, search methods, data extraction, and statistical analysis. In 2000, the MOOSE guidelines specifically addressed the reporting of meta-analyses of observational studies, providing recommendations on study design, search strategy, data collection, and study quality assessment.
In addition to PRISMA, two other significant approaches <cit.> that support evidence-based practice with a rigorous methodology and transparency are AMSTAR-2 <cit.> and ROBIS <cit.>. AMSTAR-2 provides a comprehensive checklist for assessing methodological quality, while ROBIS focuses on evaluating the risk of bias within systematic reviews. In contrast, PRISMA 2020 is a reporting guideline that emphasizes complete and transparent reporting of systematic reviews. Both AMSTAR-2 and ROBIS enhance the reviewers' ability to appraise the overall quality and validity of systematic reviews. Notably, the revised PRISMA 2020 now incorporates considerations from AMSTAR-2 and ROBIS to acknowledge the importance of methodological quality and risk of bias. This inclusion of methodological quality and risk of bias reporting in PRISMA 2020 is a welcome addition that is particularly relevant to integrations with LLMs and potential bias.
§.§ The PRISMA 2020 Statement
PRISMA is a guideline based on evidence that outlines the essential items required for reporting systematic reviews and meta-analyses <cit.>. Created in 2009, PRISMA aims to improve the quality and transparency of systematic reviews by ensuring researchers report crucial information necessary for the critical evaluation and replication of their work. Although it was initially intended for reviews that assess randomized trials, PRISMA has been widely adopted and can serve as a basis for reporting systematic reviews of other study designs, such as observational studies, diagnostic accuracy studies, and qualitative research.
The PRISMA statement was updated in 2020 to reflect the latest developments and challenges in conducting systematic reviews <cit.>. The 2020 version includes a checklist and a flow diagram, like its predecessor. However, the updated checklist now includes 27 items that cover essential components of a systematic review, such as the title, abstract, introduction, methods (such as eligibility criteria, search strategy, study selection, data extraction, and data synthesis), results, discussion, and funding. Each checklist item provides guidance to researchers on what to include in their systematic review report. The flow diagram, also known as the PRISMA flow diagram, visually represents the study selection process, making it easier for readers to understand the flow of studies from identification to the final included studies.
PRISMA has had a significant impact on the field of systematic reviews and meta-analyses since its introduction. It has become widely recognized and endorsed by leading journals and organizations in the field of evidence-based medicine. Researchers, reviewers, and journal editors often refer to PRISMA as a critical tool for improving the transparency, rigor, and completeness of systematic reviews. Adhering to PRISMA guidelines facilitates the assessment of the risk of bias and the replication of studies, enabling the integration of evidence into practice and policy-making.
The PRISMA 2020 update demonstrates a continued commitment to improving the reporting standards in systematic reviews, further enhancing its applicability and usability <cit.>. As research methodologies and technological advancements continue to evolve rapidly, the PRISMA framework must remain up to date with these changes to ensure its relevance and applicability. This commitment to continuous improvement underscores the importance of PRISMA in advancing evidence-based practice and decision-making.
§.§ Extensions to PRISMA
There is already a well-established precedent for customizing the original PRISMA guidelines. Diverse research and methodological challenges have led to the proposal of several extensions to the PRISMA framework. These extensions augment the versatility and adaptability of the PRISMA guidelines, empowering researchers to conduct and report systematic reviews in specific domains. By tailoring guidance to address context-specific needs, these extensions bolster transparency, reproducibility, and the overall efficacy of systematic review methods. The extensions discussed in this section include PRISMA-P (Protocols), PRISMA-ScR (Scoping Reviews), PRISMA-NMA (Network Meta-Analyses), PRISMA-IPD (Individual Patient Data), PRISMA-Harms (Harms Reporting), and PRISMA-RR (Rapid Reviews). Each extension equips researchers with a comprehensive checklist and elucidation of critical components, thereby optimizing the quality, transparency, and comprehensibility of systematic reviews within distinct research contexts.
*PRISMA-P (for Protocols)
PRISMA-P <cit.> is an extension of the PRISMA framework that specifically focuses on the development and reporting of systematic review protocols. A systematic review protocol outlines the objectives, methods, and analysis plan that will be followed in a systematic review. It serves as a blueprint for conducting the review and ensures transparency, reproducibility, and consistency in the review process. The PRISMA-P extension provides a checklist and explanation for the key items that should be included in a systematic review protocol, covering aspects such as the rationale, eligibility criteria, search strategy, data extraction, and synthesis methods. By adhering to the PRISMA-P guidelines, researchers can enhance the quality and transparency of their systematic review protocols, facilitating a better understanding and assessment of the planned review.
*PRISMA-ScR (for Scoping Reviews)
Scoping reviews aim to map the literature on a particular topic or research area and provide an overview of the available evidence. Unlike systematic reviews, scoping reviews are typically broader in scope and focus on identifying the main concepts, theories, sources, and gaps in the existing literature. The PRISMA-ScR <cit.> extension adapts the original PRISMA guidelines to the unique characteristics of scoping reviews, providing a checklist and explanation to guide researchers in conducting and reporting scoping reviews. It covers key aspects such as the research question, search strategy, study selection process, data extraction, and presentation of findings. By adhering to the PRISMA-ScR guidelines, researchers can enhance the transparency and rigor of their scoping reviews, enabling better understanding and utilization of the synthesized evidence.
*PRISMA-NMA (for Network Meta-Analyses)
Network meta-analysis (NMA), also known as multiple-treatments meta-analysis or indirect treatment comparison, is a statistical method that allows for the simultaneous comparison of multiple interventions in a single analysis. It enables the estimation of relative treatment effects even in the absence of head-to-head comparisons between all treatments. The PRISMA-NMA <cit.> extension provides guidelines for the reporting of network meta-analyses, ensuring transparency and clarity in the reporting of methods, results, and interpretations. The checklist covers key aspects such as the study design, search strategy, study selection criteria, data extraction, risk of bias assessment, statistical analysis methods, and presentation of results. By following the PRISMA-NMA guidelines, researchers can improve the quality and comprehensibility of their network meta-analyses, facilitating the synthesis and interpretation of evidence on comparative treatment effects.
*PRISMA-IPD (for Individual Patient Data)
Meta-analyses that utilize individual patient data (IPD) provide a more detailed and potentially accurate analysis compared to those relying on aggregate data. IPD allows for the examination of patient-level characteristics and the exploration of treatment effects across subgroups. The PRISMA-IPD <cit.> extension focuses on reporting guidelines for systematic reviews and meta-analyses that use individual patient data. The checklist covers items such as the study design, data collection methods, participant characteristics, outcomes of interest, statistical methods used for the analysis, and interpretation of results. By adhering to the PRISMA-IPD guidelines, researchers can enhance the transparency, accuracy, and comparability of their systematic reviews and meta-analyses based on individual patient data.
*PRISMA-Harms (for Harms Reporting)
The PRISMA-Harms <cit.> extension focuses on the comprehensive reporting of harmful outcomes in systematic reviews and meta-analyses. While systematic reviews traditionally focus on the effectiveness and benefits of interventions, it is equally important to examine and report the potential harms associated with those interventions. PRISMA-Harms provides guidelines to systematically identify, extract, and report data on adverse events and harms in the included studies. The extension emphasizes the need for transparency, completeness, and consistency in reporting adverse events and facilitates a more balanced assessment of the benefits and risks associated with interventions.
*PRISMA-RR (for Rapid Reviews)
Rapid reviews are a form of knowledge synthesis that aims to produce timely information while streamlining or omitting certain components of the systematic review process. The PRISMA-RR <cit.> extension provides a reporting checklist specifically tailored to the unique characteristics and requirements of rapid reviews. It covers key aspects such as the review question, search strategy, study selection process, data extraction, and synthesis methods. By following the PRISMA-RR guidelines, researchers can ensure transparency and rigor in reporting rapid reviews, enabling readers to better understand the strengths and limitations of these expedited knowledge syntheses.
§.§ AI-Enabled SLRs: Current State and Future Directions
The surge in academic literature has led to the integration of AI technologies in systematic reviews, aiming to streamline the review process and enhance efficiency <cit.>. These technologies have been utilized in various stages of the review process, including literature screening and data extraction from included studies. However, the implementation of AI in systematic reviews is not without challenges, such as the need for extensive labeled data for AI model training and the complexity of automating subjective tasks like quality assessment <cit.>.
Recent studies have explored the potential of machine learning algorithms and they have shown promise in reducing the workload of reviewers during citation screening <cit.>. Similarly, text mining techniques have been employed for data extraction from included studies, although the complexity and variability of academic literature data present significant challenges <cit.>. Knafou et al. <cit.> demonstrated the efficacy of deep learning language models in classifying COVID-19-related publications, suggesting the potential of AI in supporting epidemiological curation and review. Forsgren et al. <cit.> discussed the utility of text-mining functions in facilitating the screening process for topics with diffuse conceptual boundaries, highlighting the potential of AI in improving the workflow of comprehensive reviews. Muller et al. <cit.> proposed a retrospective study to quantify the effect of machine learning adoption on resource use and time-to-completion in systematic reviews. The study underscores the potential benefits of machine learning in evidence synthesis and highlights the need for further research to address concerns about quality and automation.
However, limitations still exist, and further research is needed to address concerns about quality, automation, and the current limitations of various AI models in evidence synthesis. Qureshi et al. <cit.> explored the potential of ChatGPT <cit.>, a general-purpose LLM developed by OpenAI, in aiding systematic reviews. While the model showed promise in certain areas, the study highlighted the need for further exploration and testing to understand its current limitations and capacity in the context of evidence synthesis.

Only very recently are we beginning to witness the emergence of domain-specific LLMs, which were created as a result of pretraining on large datasets requiring enormous compute resources. In the domain of healthcare, Singhal et al. <cit.> demonstrated the potential of LLMs in medical question answering. Their research on Med-PaLM 2 showcased remarkable advancements towards physician-level performance in this domain, along with the introduction of benchmarks and evaluation frameworks specific to LLMs in medical question answering. Additionally, Stanford CRFM developed PubMedGPT 2.7B <cit.>, a language model exclusively trained on biomedical abstracts and papers, which achieved impressive results across various biomedical NLP tasks <cit.>. Similarly, in the field of finance, Wu et al. <cit.> presented BloombergGPT, a 50 billion parameter language model trained on a diverse range of financial and general-purpose data. This model surpassed existing models in financial tasks while maintaining strong performance on general-purpose benchmarks, highlighting the successful training of LLMs on both domain-specific and general data sources. Meanwhile, Lehman et al. <cit.> investigated the suitability of LLMs trained on general web text for specialized and safety-critical domains like clinical text through extensive analysis of 12 language models, and their study revealed that relatively small specialized clinical models outperform larger LLMs trained on general text, even when finetuned with limited annotated data. Their findings suggest the viability of finetuning smaller LLMs for domain-specific tasks, which may be suitable for SLRs.
Despite these advancements, there is a pressing need for an integrated AI-based guideline that can fully support the conduct of systematic reviews, and there is specifically a gap in leveraging LLMs to automate parts of the review process and generate insights directly from the literature <cit.>. This approach could not only save time and resources but also enhance the comprehensiveness and quality of systematic reviews. The recent advancements in AI, particularly in the development of LLMs, underscore the potential of developing such an integrated AI framework and of experimenting with customized LLMs finetuned for domain-specific knowledge, an approach that holds promise for revolutionizing the conduct of systematic reviews.
§.§ AI Advances in LLM Finetuning
The advent of LLMs has revolutionized the field of natural language processing, offering unprecedented capabilities in understanding and generating human-like text. However, the sheer size of these models presents significant challenges in terms of computational cost and memory footprint, especially during the finetuning process. To address these challenges, a range of Parameter-Efficient Fine-Tuning (PEFT) strategies have been developed <cit.>. These strategies aim to update a small subset of parameters while keeping the rest of the model frozen, thereby achieving comparable performance to full finetuning while significantly reducing computational cost and memory footprint.
One of the pioneering PEFT strategies is Low-Rank Adaptation (LoRA). LoRA introduces low-rank matrices into each layer of the pre-trained model. These matrices are the only parameters that are updated during finetuning, while the original pre-trained parameters are kept frozen. This approach significantly reduces the number of parameters that need to be updated, making finetuning more efficient and less prone to overfitting. However, it is worth noting that LoRA may not offer the same level of flexibility as full finetuning, as it can only modify the model's behavior in a limited way <cit.>.
Building on the concept of LoRA, Quantized LoRA (QLoRA) was developed to further reduce memory usage by quantizing the pre-trained model's weights. This involves representing the weights with a smaller number of bits, which can significantly reduce the memory footprint of the model. QLoRA also introduces several other innovations to save memory without sacrificing performance, such as double quantization and paged optimizers. However, this approach can be more complex to implement and may introduce additional computational overhead <cit.>.

Other PEFT strategies modify the model in different ways. Adapters introduce small, task-specific parameter matrices into the model, which are trained while the original parameters are frozen <cit.>. This approach allows for task-specific adaptations without the need to finetune the entire model, thereby maintaining the benefits of the original pre-training while adapting the model to new tasks. Compared to LoRA and QLoRA, Adapters can offer greater flexibility as they allow for task-specific modifications. However, the introduction of task-specific parameters can increase the complexity of the model and may require more careful management of the training process to avoid overfitting. Prefix tuning, on the other hand, adds a task-specific prefix to the input sequence, which can modify the model's behavior in a more flexible way than LoRA or Adapters <cit.>. This approach can be particularly effective for tasks that require a significant change in the model's behavior, as the prefix can guide the model towards the desired output. However, it may also require more careful tuning of the prefix and can potentially introduce more computational overhead due to the need to process the additional input.
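To make the mechanics of such parameter-efficient finetuning concrete, the following minimal sketch applies LoRA to a causal language model. It assumes the Hugging Face transformers and peft libraries; the checkpoint name, rank, and target modules are illustrative placeholders rather than recommended settings, and QLoRA would additionally load the base weights in a quantized format.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "example-org/base-llm"   # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Inject low-rank adapter matrices; only these are updated during finetuning,
# while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically a small fraction of all weights

In practice, the wrapped model is then trained like any other transformers model, and only the adapter weights need to be stored and shared afterwards.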
PEFT strategies offer a promising approach to finetuning LLMs in a more computationally and memory-efficient manner. However, the effectiveness of these strategies can depend on the specific method used and the characteristics of the task. As such, it is crucial to carefully consider the trade-offs between different PEFT strategies when finetuning LLMs for specific applications. Future research in this area is likely to yield even more efficient and effective strategies for finetuning LLMs.
In order to underscore the feasibility of finetuning, <cit.> recently concluded that only limited instruction finetuning data is necessary to teach models to produce high-quality output, which they demonstrated by training LIMA, a 65B parameter LLaMa language model, with only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. While the authors note that almost all knowledge in an LLM is learned during pretraining, where general-purpose representations are learned from raw text, it remains possible to encode additional knowledge and the downstream behaviours necessary for conducting an SLR with only a small finetuning dataset comprising target academic papers.
§ OUTLINE OF ADDITIONAL PRISMA REPORTING COMPONENTS
The proposed framework aims to augment the existing PRISMA methodology for SLRs by explicitly incorporating several key components related to LLM finetuning. These additional components address the reporting topics of the finetuning dataset, technical LLM finetuning details, and the evaluation of the finetuned LLM. By integrating these components, the PRISMA-DFLLM methodology expands upon the established PRISMA guidelines without replacing any existing ones. The first step of the proposed guideline is to apply the traditional PRISMA search criteria for paper collection and filtering. This ensures that the systematic and comprehensive nature of the original PRISMA framework is maintained, resulting in a rigorous selection of relevant literature that is essential and reportable. Subsequently, the identified papers form the basis for the subsequent steps in the PRISMA-DFLLM methodology.
§.§ Reporting the Finetuning Dataset Details
The dataset used for finetuning LLMs is of paramount importance: it is constructed from the pertinent academic papers and necessitates meticulous processing and preparation. During the data preparation phase, the raw text extracted from these papers undergoes cleaning and preprocessing to ensure uniformity and facilitate smooth processing by the model. This involves tasks such as removing or replacing specific characters, addressing encoding issues, and standardizing formats like dates and numbers. Furthermore, the metadata associated with each paper, encompassing details such as authors, publication date, journal, and keywords, is seamlessly integrated with the paper's text. This integration enables the LLM to discern connections between content and metadata, thereby enhancing its comprehension of the paper and its context. It is crucial to explicitly outline the strategy employed in constructing the dataset, including any automated or manual steps taken to represent information from the academic papers.
In addition to the dataset construction strategy, it is imperative to clearly specify the format of the input and output data for the LLM, along with any domain-specific preprocessing steps and text encoding methods utilized. If supplementary datasets were incorporated for instruct-finetuning or to augment the LLM's general domain knowledge, these should be reported as well. Another important aspect is reporting the size attributes of the final finetuning dataset, as this information impacts the LLM's ability to generalize and the computational resources required for training.
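As one possible illustration of such a format, the sketch below assembles a single finetuning record that pairs cleaned body text with the paper's metadata and appends it to a JSON Lines file. The field names, cleaning steps, and file name are assumptions made for illustration, not prescribed elements of the framework.

import json
import re

def clean_text(raw: str) -> str:
    """Normalize whitespace and strip characters that commonly break tokenization."""
    text = raw.replace("\u00ad", "")   # soft hyphens left over from PDF extraction
    text = re.sub(r"\s+", " ", text)   # collapse runs of whitespace and newlines
    return text.strip()

record = {
    "metadata": {
        "title": "An illustrative included study",
        "authors": ["A. Author", "B. Author"],
        "journal": "Journal of Examples",
        "year": 2022,
        "keywords": ["systematic review", "example"],
    },
    "text": clean_text("  Raw body text  extracted from the paper ... "),
}

with open("finetuning_dataset.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")

Reporting the exact schema of such records, together with the number of records and their provenance, is what allows other researchers to reconstruct or audit the finetuning dataset.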
Since the dataset used for finetuning LLMs necessitates meticulous processing and preparation, a structured approach needs to be followed and communicated. Cleaning and preprocessing steps ensure a uniformly structured dataset, while the incorporation of metadata enhances the model's comprehension of the paper and its context. It is therefore crucial to provide detailed reporting of the dataset construction strategy, format specifications, and any additional datasets utilized to ensure reproducibility and a comprehensive understanding of the LLM's training process, and any extensions of PRISMA need to set guidelines that accommodate this.
§.§ Reporting the LLM Finetuning Process Details
The selection of a suitable base LLM model for finetuning is of utmost importance in the LLM finetuning process. Different LLM models exhibit variations in terms of their architecture, capacity, and performance on language tasks. When choosing a base LLM model, it is essential to consider these factors and evaluate their alignment with the goals and requirements of the finetuning task. In the context of the PRISMA-DFLLM framework, the base LLM models considered, their suitability, and the assessment of their performance need to be communicated. This ought to be reported since options range from raw LLM models, which have been trained on extensive text corpora but are yet to be finetuned for downstream tasks, to models that have already undergone a degree of finetuning for specific instructions or domains.
The choice of a base LLM model depends on factors such as its architectural features, capacity to capture complex relationships in data, and its performance on language tasks relevant to the finetuning objective. Before commencing the finetuning process, it is essential to understand the status of the chosen base LLM model—whether it is raw or has already undergone some degree of finetuning, and to report this. Raw models offer a broader understanding of language patterns but lack domain-specific knowledge. On the other hand, models that have undergone prior finetuning for specific instructions or domains might already have some domain-specific knowledge embedded which assists the subsequent finetuning. Evaluating the suitability of a base model in terms of its rawness or prior finetuning ensures that the finetuning process aligns with the specific requirements of the task at hand.
Once an appropriate base LLM model has been selected, the finetuning process involves training the model on a specific academic corpus. This is achieved through techniques like adjusting hyperparameters such as learning rate, batch size, and the number of training epochs. An optimization algorithm, such as Adam or Stochastic Gradient Descent, is then utilized to update the model's parameters based on a loss function. The goal is to minimize errors and improve the accuracy of predictions. To prevent overfitting, techniques like dropout, weight decay, or early stopping can be used during the finetuning process. Post-processing steps, such as temperature scaling, can further optimize the LLM's performance by controlling the level of randomness in the generated text outputs. Once the finetuning process is complete, the final LLM model is prepared for deployment by packaging it along with any necessary support files, such as tokenizers or preprocessing tools. This is done in a format that allows researchers to easily load and utilize the model for their intended tasks.
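As a hedged illustration of how these settings might be specified, the sketch below wraps the finetuning step in the Hugging Face Trainer API with weight decay and early stopping. The hyperparameter values are placeholders, the model and tokenized datasets are assumed to have been prepared beforehand, and exact argument names can vary across library versions.

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

def finetune(model, train_dataset, val_dataset):
    """Run a basic finetuning loop; all values below are illustrative."""
    training_args = TrainingArguments(
        output_dir="./finetuned-llm",
        learning_rate=2e-5,
        per_device_train_batch_size=4,
        num_train_epochs=3,
        weight_decay=0.01,             # regularization against overfitting
        evaluation_strategy="epoch",   # evaluate on the validation split each epoch
        save_strategy="epoch",
        load_best_model_at_end=True,   # needed for early stopping on eval loss
        metric_for_best_model="eval_loss",
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
    )
    trainer.train()
    return trainer

Each of these choices (learning rate, batch size, number of epochs, regularization, and stopping criterion) is precisely the kind of detail that item 17c of the extended checklist asks authors to report.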
The choice of a suitable base LLM model is therefore consequential in the finetuning process, and the factors behind it, such as architectural features, model capacity, and performance on language tasks, need to be reported. Assessing whether the model is in a raw form or has undergone prior finetuning assists in aligning the finetuning process with the specific requirements of the task. The subsequent finetuning process involves adjusting hyperparameters, employing optimization algorithms, and applying techniques to prevent overfitting, all of which ought to be documented as part of the extended PRISMA guideline. Post-processing steps can optimize the model's performance, and the final model is prepared for deployment by packaging it with the necessary support files. Clear reporting of these steps ensures transparency as well as a potential for reproducibility.
§.§ Reporting the Evaluation of Finetuned LLMs Details
The evaluation of the finetuned PRISMA-DFLLM model is a multifaceted process designed to provide a robust and comprehensive assessment of its performance. This process is tailored to the model's intended use case, ensuring that the evaluation metrics match the specific tasks the model is designed to perform and that, overall, it achieves alignment (in the context of LLMs, alignment pertains to the degree of concordance or harmony between the generated outputs of the model and the intended or expected outputs). In the context of information retrieval tasks, which are central to SLRs, metrics such as accuracy, precision, recall, and F1-score can be employed. For instance, precision (the proportion of retrieved documents/information that are relevant) and recall (the proportion of relevant documents/information that are retrieved) provide a balanced view of the model's performance in identifying relevant literature. The F1-score, the harmonic mean of precision and recall, gives an overall performance metric.
When the model is used for document summarization tasks, the ROUGE metric can be utilized. This metric compares the overlap of n-grams, word sequences of n words, between the generated summaries and human-written abstracts. For example, a high ROUGE-2 score would indicate a significant overlap of two-word sequences between the model's output and the reference summary, suggesting a high-quality summary.
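The sketch below illustrates how both kinds of metrics might be computed, assuming scikit-learn for the screening-style classification metrics and the open-source rouge-score package for summary overlap; the labels and texts are toy examples only.

from sklearn.metrics import precision_recall_fscore_support
from rouge_score import rouge_scorer

# Screening-style evaluation: 1 = relevant record, 0 = irrelevant record.
y_true = [1, 0, 1, 1, 0, 1]   # human screening decisions
y_pred = [1, 0, 0, 1, 0, 1]   # model screening decisions
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

# Summarization evaluation: n-gram overlap between a human-written reference
# abstract and a model-generated summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "The intervention reduced symptoms in the treated group.",          # reference
    "Symptoms were reduced in the group receiving the intervention.",   # generated
)
print({name: round(score.fmeasure, 2) for name, score in scores.items()})

Reporting which of these metrics were chosen, and why, corresponds to item 18e of the extended checklist.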
Human evaluation can also be a valuable part of the evaluation process. It provides a qualitative assessment of factors such as coherence, completeness, and fidelity to the original paper. In generative tasks, human evaluators assess the model's responses for coherence, relevance to the prompt, novelty, and factual accuracy. For instance, evaluators might rate the model's responses on a Likert scale for these factors, providing a more nuanced understanding of the model's performance. Comparative evaluations form another key component of a possible suite of assessments. Here, the performance of the finetuned model is compared against baseline models, such as the original LLM before finetuning or other LLMs finetuned on different corpora. This comparison helps to quantify the added value of the finetuning process and the specific corpus used.
Ensuring the stability and reproducibility of the evaluations is paramount. This can be achieved by running the model multiple times with different random seeds and varying the dataset and the model's initial parameters. Techniques such as data splitting into training, validation, test sets, and cross-validation can be used to enhance the reliability of the evaluations. Additionally, a qualitative analysis of the model's outputs can be conducted, examining case studies of both successful and less successful examples. Errors made by the model can be categorized and analyzed, providing valuable insights for potential improvements. For example, if the model consistently struggles with a certain type of prompt, adjustments to the finetuning process to better handle such prompts can be made.
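One way to operationalize these stability checks, sketched below under the assumption that the finetuned model can be scored on any subset of held-out examples, is to repeat the evaluation across several random seeds and cross-validation folds and report the spread of the resulting scores; the scoring routine here is a placeholder stub.

import random
import statistics
from sklearn.model_selection import KFold

def evaluate_model(example_subset):
    """Placeholder: score the finetuned LLM on a subset of held-out examples."""
    return random.uniform(0.7, 0.9)   # stand-in for a real task metric

example_ids = list(range(100))        # identifiers of held-out evaluation examples
fold_scores = []

for seed in (13, 42, 2023):           # repeat the evaluation with several seeds
    random.seed(seed)
    kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
    for _, test_index in kfold.split(example_ids):
        fold_scores.append(evaluate_model([example_ids[i] for i in test_index]))

print(f"mean={statistics.mean(fold_scores):.3f} "
      f"stdev={statistics.stdev(fold_scores):.3f}")

Reporting the mean together with the variability across runs gives readers a sense of how sensitive the results are to random initialization and data partitioning.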
There is a raft of possible evaluation metrics that can be applied to finetuned LLMs, each with its own strengths. Given the central importance of the finetuned LLM in the proposed SLR process, a robust and comprehensive evaluation covering quantitative metrics, comparative assessments, reproducibility checks, and qualitative analyses needs to be performed and reported. A thoroughly documented approach ensures the transparency and reliability of LLM-based SLRs, so that they deliver high-quality outputs meeting the requirements of SLR processes.
§.§ Reporting the Considerations of Ethical and Legal Aspects
The process of finetuning LLMs on a corpus of academic papers necessitates a careful navigation of ethical, legal, and privacy considerations. The potential harm that could arise from the model's outputs, such as the propagation of biases, generation of inappropriate content, or violation of privacy, is a key concern <cit.>.
A primary legal concern is compliance with copyright laws. These laws restrict the reproduction, distribution, and public display of copyrighted works, including substantial portions of a work, even if they are transformed. For instance, finetuning a language model on copyrighted text could be viewed as a transformative use, potentially falling under the legal doctrine of "fair use" in certain jurisdictions like the United States. However, the application of fair use is subjective and evaluated based on several factors, including the purpose and nature of the use, the characteristics of the copyrighted work, the amount and substantiality of the portion used, and the impact on the potential market for the copyrighted work. To mitigate the risk of copyright infringement, it is advisable to seek permission from copyright holders, use open-access academic papers when possible, or consult with legal experts <cit.>. These considerations ought to be reported as part of the SLR.
Data privacy is another significant consideration, especially when handling sensitive information.
For instance, in fields like medical research, academic papers may contain sensitive patient data. In such cases, it is crucial to ensure compliance with data protection laws and ethical guidelines. This process should be documented, and measures should be taken to protect this information during the finetuning process, especially if the intention is to make the finetuned model available to the public.
Lastly, the finetuning process can introduce or amplify biases present in the training data. As <cit.> discuss, language generation applications like LLMs can exhibit a variety of biases, and consequently it is important to monitor and mitigate these where possible. For example, techniques such as bias mitigation algorithms or fairness-aware machine learning methods could be employed. The ethical application of the PRISMA-DFLLM framework requires a commitment to responsible AI use, respect for intellectual property rights, and a proactive approach to monitoring and mitigating potential biases, and the reporting guidelines need to accommodate these aspects.
§ EXTENDED PRISMA REPORTING GUIDELINES
In light of the previous section, the extension to the original PRISMA 2020 checklist includes several new categories. This section presents a proposed extension to the PRISMA 2020 checklist, tailored specifically for studies involving the finetuning and application of LLMs for conducting SLRs. The following subsection presents a reporting checklist that covers the specifics of the finetuning dataset preparation, the finetuning process of the LLM, the evaluation of the model's performance, and the ethical and legal considerations associated with the use of LLMs.
§.§ The PRISMA-DFLLM Checklist
The checklist below reproduces the PRISMA 2020 items and adds new items covering the finetuning dataset (item 16), the technical details of LLM finetuning (item 17), the evaluation of the finetuned LLM (item 18), and the associated legal and ethical considerations (item 31).
* TITLE
1. Title: Identify the report as a systematic review.
* ABSTRACT
2. Abstract: See the PRISMA 2020 for Abstracts checklist.
* INTRODUCTION
3. Rationale: Describe the rationale for the review in the context of existing knowledge.
4. Objectives: Provide an explicit statement of the objective(s) or question(s) the review addresses.
* METHODS
5. Eligibility criteria: Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses.
6. Information sources: Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted.
7. Search strategy: Present the full search strategies for all databases, registers and websites, including any filters and limits used.
8. Selection process: Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process.
9. Data collection process: Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process.
10. Data items:
10a. List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect.
10b. List and define all other variables for which data were sought (e.g. participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information.
11. Study risk of bias assessment: Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process.
12. Effect measures: Specify for each outcome the effect measure(s) (e.g. risk ratio, mean difference) used in the synthesis or presentation of results.
13. Synthesis methods:
13a. Describe the processes used to decide which studies were eligible for each synthesis (e.g. tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)).
13b. Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions.
13c. Describe any methods used to tabulate or visually display results of individual studies and syntheses.
13d. Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used.
13e. Describe any methods used to explore possible causes of heterogeneity among study results (e.g. subgroup analysis, meta-regression).
13f. Describe any sensitivity analyses conducted to assess robustness of the synthesized results.
14. Reporting bias assessment: Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases).
15. Certainty assessment: Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome.
* FINETUNING DATASET
16. Details of the finetuning dataset:
16a. Dataset preprocessing: Describe the procedures used for processing and preparing academic papers for information extraction.
16b. Dataset format: Specify the structure of the final dataset, including the format of the input and output data for the LLM.
16c. Data augmentation: Report any additional datasets used for instruct-finetuning or for increasing the LLM's general domain knowledge.
16d. Dataset curation: Detail the strategy used to construct the finetuning dataset, including the automation or manual steps used to represent the information from the academic papers.
16e. Dataset composition: Report the size attributes of the final finetuning dataset.
* LLM FINETUNING
17. Technical finetuning details:
17a. LLM specifications: Justify the choice of LLM for finetuning, considering model capacity, architectural features, and reported performance on language tasks.
17b. Finetuning strategy: Discuss the finetuning strategy used, whether a classical approach requiring a full-model parameter update is employed, or a parameter-efficient partial update.
17c. Finetuning settings: Explain the finetuning procedure, including the adjustment of hyperparameters and the optimization algorithm used for adjusting the model's parameters. Include any techniques used to prevent overfitting, such as dropout, weight decay, or early stopping.
17d. Post finetuning: Outline any post-finetuning processing steps performed to optimize the LLM's performance.
* FINETUNED LLM EVALUATION
18. Validation of the domain-specific LLM:
18a. LLM benchmarking: Discuss the initial performance of the LLM before finetuning, on a set of benchmark tasks for which the model will be finetuned.
18b. Evaluation stability and reproducibility: Report any measures taken to ensure the robustness of the evaluation, such as multiple runs with different random seeds and variations in the dataset and the model's initial parameters.
18c. Error analysis: Include an analysis of the types of errors made by the model and potential reasons for these errors.
18d. Alignment: Report the model's performance on the task as well as its ability to produce outputs that are coherent, relevant, and ethically acceptable.
18e. Evaluation metrics: Justify the choice of evaluation metrics based on the task requirements.
18f. Qualitative analysis: Discuss the qualitative analysis of the model's outputs, including case studies of successful and less successful examples.
* RESULTS
19a. Study selection: Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram.
19b. Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded.
20. Study characteristics: Cite each included study and present its characteristics.
21. Risk of bias in studies: Present assessments of risk of bias for each included study.
22. Results of individual studies: For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g. confidence/credible interval), ideally using structured tables or plots.
23. Results of syntheses:
23a. Briefly summarise the characteristics and risk of bias among contributing studies.
23b. Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g. confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect.
23c. Present results of all investigations of possible causes of heterogeneity among study results.
23d. Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results.
24. Reporting biases: Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed.
25. Certainty of evidence: Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed.
* DISCUSSION
26a. Discussion: Provide a general interpretation of the results in the context of other evidence.
26b. Discuss any limitations of the evidence included in the review.
26c. Discuss any limitations of the review processes used.
26d. Discuss implications of the results for practice, policy, and future research.
26e. Discuss processes to enable an ongoing and incremental living systematic review.
* OTHER INFORMATION
27a. Registration and protocol: Provide registration information for the review, including register name and registration number, or state that the review was not registered.
27b. Indicate where the review protocol can be accessed, or state that a protocol was not prepared.
27c. Describe and explain any amendments to the information provided at registration or in the protocol.
28. Support: Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review.
29. Competing interests: Declare any competing interests of review authors.
30. Availability of data, code and other materials: Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review; availability of the finetuning dataset and the finetuned LLM.
31. LLM legal and ethical information:
31a. LLM ethical implications: Address the potential for the LLM's outputs to cause harm, either through the propagation of biases, the generation of inappropriate content, or the violation of privacy.
31b. LLM legal implications: Discuss the legal implications of finetuning LLMs on academic papers, including considerations of copyright laws, fair use, and obtaining permissions from copyright holders where necessary.
31c. LLM compliance: Document the process of ensuring compliance with data protection laws and ethical guidelines.
§ DISCUSSION
The development of domain-specific finetuned LLMs for individual disciplines, capable of supporting robust SLRs, presents a promising avenue; however, this pursuit also introduces unique challenges that necessitate further exploration and research.
§.§ Potential Benefits of PRISMA-DFLLM
Enhanced Efficiency
The integration of domain-specific finetuned LLMs into SLRs could significantly enhance the efficiency of the review process. By automating labor-intensive tasks such as data extraction and evidence synthesis, researchers can drastically reduce the time and resources traditionally required for these tasks. For instance, a domain-specific LLM trained on medical literature could automatically extract relevant data from a large number of clinical trials, such as patient demographics, intervention details, and outcome measures, thereby expediting the production of systematic reviews. This increased efficiency could enable researchers to stay abreast of the rapidly expanding body of literature and respond more promptly to emerging research questions.
Scalability and Living Systematic Reviews
Domain-specific finetuned LLMs offer scalability, facilitating the review of a large volume of literature within a condensed time frame. The capabilities of finetuned language models could be leveraged to analyze a more extensive number of papers at a comprehensive level. This scalability is particularly beneficial in research fields with a high publication rate or when conducting frequent updates of systematic reviews. For example, in the field of infectious diseases, where new research on diseases like COVID-19 is published at a rapid pace, a domain-specific LLM could help researchers keep up with the latest findings. Furthermore, the scalability of domain-specific finetuned LLMs paves the way for the realization of LSRs that incrementally update the knowledge base of the underlying domain-specific LLM. Once the training hyperparameters and data extraction processes have been optimized, the return on the invested effort is then realized through the ability to update the LLM on an ongoing basis.
Discovery of Novel Insights
The application of PRISMA-DFLLM can potentially uncover novel insights and patterns within the literature. By utilizing specialized LLMs, researchers could identify new connections, trends, and relationships across studies that may not be immediately apparent through traditional review methods. This ability to generate novel insights could enrich the understanding of a research topic, generate new theories and stimulate further investigation.
Dissemination and Collaboration
A significant benefit of the underlying ideas behind PRISMA-DFLLM is the potential for disseminating finetuned LLMs across different research teams and institutions. Researchers could potentially share the trained models, allowing others to utilize and benefit from their expertise and findings. This dissemination fosters transparency, collaboration, and the building of cumulative knowledge within the research community, while sharing the resource overheads of updating the models with new research. For example, teams of researchers could distribute the finetuning tasks among themselves, where some focus on dataset curation and others on the LLM finetuning process, thus accelerating research.
§.§ Potential Challenges of Implementing PRISMA-DFLLM
The implementation of the PRISMA-DFLLM framework, while promising, presents several challenges. These include the automation of data extraction from raw academic articles, the identification of optimal PEFT strategies, ensuring alignment, and the evaluation of finetuned models.
Automating Data Extraction from Raw Academic Articles
Automating the extraction of data from raw academic articles for the purpose of finetuning LLMs is complex. Academic articles, while generally adhering to certain conventions such as the IMRaD (Introduction, Methods, Results, and Discussion) structure, can vary greatly in their organization and presentation of information. This variability can make it difficult for an LLM to consistently locate and extract the necessary information for systematic reviews, such as study design, participant characteristics, and results. Moreover, academic articles often contain valuable information in non-textual formats such as tables, figures, and images. Extracting and encoding this information as text for LLM training presents another layer of complexity and will likely require additional AI components to solve adequately. Current LLMs are primarily designed to process text and may struggle to interpret and integrate information from these non-textual sources. Developing new methods or tools to automate the extraction and encoding of data from these formats is a pressing research need.
Additionally, the raw data from academic articles are often in PDF format, which is not readily usable for model training. Converting PDFs into a machine-readable format introduces additional steps into the data preparation process, each of which can potentially introduce errors or distortions that require human oversight and verification. Furthermore, the creation of finetuning datasets from academic articles is not a straightforward task. A balance must be struck between providing the model with raw data, which gives it a broader understanding of the content and context of the articles, and providing it with structured data in the form of question-answer pairs, which can guide the model's learning in more specific ways. Determining the optimal balance and integration of these different data types is an open research question.
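As a small illustration of the conversion step, the sketch below extracts raw text from a PDF with the open-source pypdf library; the file name is a placeholder, and a real pipeline would need additional handling for multi-column layouts, tables, and figures, followed by human verification.

from pypdf import PdfReader

reader = PdfReader("included_study.pdf")   # placeholder path to an included paper
pages = [page.extract_text() or "" for page in reader.pages]
raw_text = "\n".join(pages)

# Downstream steps (cleaning, sectioning, pairing with metadata) would follow,
# feeding into the finetuning dataset described in the reporting items above.
print(f"Extracted {len(pages)} pages and {len(raw_text)} characters")

Even this simple step can silently drop or reorder content, which is why the extended checklist asks for the dataset preprocessing procedure (item 16a) to be reported explicitly.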
Optimal PEFT Strategies
Choosing the best PEFT strategies for developing domain-specific LLMs is a multifaceted task. The decision hinges on the task's nature, the available training data, and computational resources. Task-specific finetuning with PEFT eliminates the need for pretraining but places a premium on effective data curation and optimal hyperparameter training settings. Techniques like Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA) offer resource-efficient alternatives, but choosing between them depends on the task's specifics and available resources. An ensemble of specialized models, each finetuned on a task subset, can enhance performance, but coordinating these models and integrating their outputs can be complex and computationally costly. Active learning approaches optimize the finetuning process but require continuous human annotator interaction, introducing potential data privacy and quality control challenges, as well as additional overheads. In essence, selecting the most effective finetuning approach requires careful consideration of various factors and potential trade-offs, highlighting the need for continued research in LLM finetuning.
Evaluating Finetuned Models for Alignment
Securing alignment between the finetuned LLM and the specific task requirements, as well as ethical guidelines, is a complex yet critical aspect of model development. This process extends beyond the technicalities of model training to encompass wider ethical and societal considerations. For instance, the model should be designed to avoid potential biases in its outputs, which requires careful curation and examination of the training data. Additionally, the model should respect privacy norms, especially when dealing with sensitive data or topics. This might involve implementing mechanisms to prevent the model from generating inappropriate or sensitive content. Testing for alignment is a multifaceted process that can employ both quantitative and qualitative methods. Quantitative methods might involve performance metrics such as accuracy, precision, recall, or F1 score, as well as custom benchmarks tailored to the specific task. Qualitative methods, on the other hand, might involve a detailed examination of the model's outputs. For instance, expert reviewers could assess whether the model's responses are contextually appropriate, coherent, and free from bias. They could also evaluate the model's ability to handle complex queries and its sensitivity to the input prompt. Furthermore, alignment testing should also consider the model's interpretability and transparency. For instance, can the model provide explanations for its outputs? Is it clear how the model is using the input data to generate its responses? These are important questions for ensuring the model's alignment with ethical guidelines and user expectations.
Data Availability
Securing a comprehensive and diverse corpus of academic papers for finetuning LLMs can be a formidable task. The availability of open-access resources can be limited, and the accessibility of certain journals or papers may be hindered by paywalls or licensing agreements. For instance, a researcher aiming to finetune a model on AI literature might discover that key journals in the field are not only concealed behind paywalls, but also explicitly prohibit the use of their content for LLM finetuning. This restriction could significantly limit the diversity and representativeness of the training dataset, thereby impacting the model's ability to specialize sufficiently in a given field and to conduct a full PRISMA review. Furthermore, the issue of data availability is not static but evolves. As new research is published, the training data needs to be updated to ensure the model remains current. This requires ongoing access to new publications, which may not always be guaranteed due to changes in access policies or licensing agreements.
Domain-specific Vocabulary
Different research domains often employ their own specialized vocabulary and terminology. For example, the term "cell" has different meanings in biology and the context of mobile communication technology. Adapting the language model to accurately understand and generate domain-specific language is crucial, and may require additional finetuning or the use of domain-specific corpora.
§.§ Future research directions for advancing LLM-enabled literature reviews
In light of the challenges and opportunities discussed in the previous section, the following table lists future research directions under several key categories and labels each according to both the difficulty of the undertaking and its urgency.
Future research directions for the PRISMA-DFLLM framework, grouped into key categories, with the difficulty and priority level of each undertaking.

*Automating Data Extraction from Raw Academic Articles for Finetuning Datasets
- Develop and evaluate specialized LLMs for summarization and extracting target textual information from academic articles across various disciplines (Difficulty: Low; Priority: Immediate)
- Investigate the optimal balance and integration of raw data and question-answer pairs in the finetuning dataset (Difficulty: Medium; Priority: Immediate)
- Develop general-purpose tools/libraries for handling data extraction challenges found in the heterogeneity in the structure and content of academic articles across different journals/disciplines (Difficulty: Medium; Priority: Immediate)
- Explore the use of advanced AI agents for extracting textual information from visual and tabular representations in academic articles for automating the curation of finetuning datasets (Difficulty: Difficult; Priority: Medium)
- Assess the feasibility and effectiveness of using unsupervised and semi-supervised learning methods for assisting in data extraction from academic articles (Difficulty: Medium; Priority: Long-term)

*Optimizing Finetuning Strategies and Performance
- Assess the benefits and limitations of various finetuning strategies, including full-model finetuning and partial finetuning (Difficulty: Medium; Priority: Immediate)
- Develop methods for automating the selection and optimization of finetuning strategies based on the specific task and data characteristics (Difficulty: Medium; Priority: Medium)
- Investigate the use of meta-learning and autoML techniques for optimizing the PEFT process (Difficulty: Difficult; Priority: Medium)
- Investigate the impact of different prompt engineering strategies on the performance of finetuned LLMs (Difficulty: Low; Priority: Immediate)
- Explore the use of iterative and multi-stage finetuning approaches for improving model performance (Difficulty: Medium; Priority: Medium)
- Investigate the impact of different finetuning strategies on LLM performance, including variations in hyperparameters and training approaches (Difficulty: Low; Priority: Immediate)
- Explore techniques for transfer learning and domain adaptation to improve model generalizability across different domains and tasks (Difficulty: Medium; Priority: Medium)
- Assess the benefits of ensemble methods and the combination of multiple finetuned LLMs within the PRISMA-DFLLM framework (Difficulty: Medium; Priority: Medium)
- Investigate strategies to optimize the computational resources required for finetuning and inference processes, ensuring scalability and efficiency (Difficulty: Difficult; Priority: Long-term)

*Evaluating Finetuned Models
- Compare PRISMA-DFLLM reviews with previous PRISMA reviews in parallel on the same data to benchmark the capabilities of the LLMs (Difficulty: Low; Priority: Immediate)
- Develop and evaluate task-specific evaluation metrics for assessing the performance of finetuned LLMs in the context of systematic reviews (Difficulty: Low; Priority: Immediate)
- Investigate the use of qualitative and user-centric evaluation methods for assessing model performance (Difficulty: Medium; Priority: Medium)
- Assess the impact of different evaluation strategies on the perceived performance and utility of finetuned LLMs (Difficulty: Medium; Priority: Medium)
- Develop methods for evaluating the robustness and reliability of finetuned LLMs under varying conditions and data characteristics (Difficulty: Medium; Priority: Long-term)

*Ensuring and Testing for Alignment
- Develop and evaluate methods for testing the alignment of finetuned LLMs with task requirements and ethical guidelines (Difficulty: Low; Priority: Immediate)
- Investigate the use of adversarial testing and bias audits for assessing model alignment (Difficulty: Low; Priority: Medium)
- Develop methods for incorporating feedback from users and stakeholders into the finetuning process to improve model alignment for different disciplines (Difficulty: Difficult; Priority: Long-term)
- Investigate the use of active learning and human-in-the-loop approaches for ensuring model alignment during the finetuning process (Difficulty: Difficult; Priority: Long-term)

*Interpretability and Explainability
- Enhance the interpretability of finetuned domain-specific LLMs by developing methods to provide insights into the model's decision-making process (Difficulty: Difficult; Priority: Immediate)
- Investigate techniques for uncertainty estimation and sensitivity analysis to quantify the robustness and reliability of the model's predictions (Difficulty: Difficult; Priority: Medium)
- Address biases and misinformation within the framework by exploring methods to identify, mitigate, and provide transparency around potential biases in the training data (Difficulty: Difficult; Priority: Immediate)
- Investigate methods for integrating user feedback loops, allowing researchers to provide input and refine the models based on their domain expertise and preferences (Difficulty: Medium; Priority: Medium)

*Integration, Collaboration and Legal Implications
- Explore legal ramifications of finetuning LLMs on non-Open Access academic content, as well as the options to disseminate both the finetuning datasets and the finetuned LLMs across different research teams and publicly (Difficulty: Medium; Priority: Immediate)
- Explore the integration of external knowledge sources, such as domain-specific ontologies or expert systems, to enhance the model's understanding and reasoning capabilities (Difficulty: Medium; Priority: Medium)
- Investigate methods for collaborative finetuning, knowledge sharing, and model exchange among researchers to foster collaboration and accelerate advancements (Difficulty: Medium; Priority: Medium)
- Examine approaches for integrating the framework with existing research management systems and platforms, ensuring seamless integration with scholarly databases and knowledge repositories (Difficulty: Difficult; Priority: Long-term)
§ CONCLUSION
The advent of open-sourced Large Language Models (LLMs) and efficient finetuning techniques heralds a new era in academic research, particularly in the realm of systematic literature reviews (SLRs). The potential of these technologies to revolutionize the way we access knowledge, conduct SLRs, and generate new insights is immense. The proposed methodology and the PRISMA-DFLLM (Domain-specific Finetuned LLMs) framework, which combines the power of expert LLMs with the reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), represent a significant stride towards realizing this potential.
To accommodate the additional reporting requirements, the PRISMA guidelines have been extended from a checklist of 27 items to 31.
The combination of finetuned LLMs with the PRISMA-DFLLM framework offers the promise of greater efficiency, reusability, and scalability in conducting SLRs. By finetuning LLMs on domain-specific academic papers selected through a principled SLR process, we can create models that are not only more adept at handling specialized fields and applications but also capable of conducting incremental living systematic reviews. This approach democratizes cutting-edge research, empowering researchers across the globe by enabling them to leverage these finetuned models to accelerate advancements in their respective fields.
However, the journey towards fully realizing the potential of PRISMA-DFLLM is not without challenges. From ensuring data availability and automating data extraction from raw academic articles to identifying optimal PEFT strategies and ensuring alignment with task requirements and ethical guidelines, each step presents unique hurdles. Overcoming these challenges requires innovative solutions, ongoing research, and careful consideration of the ethical and societal implications of this work.
This paper has laid out the case for the feasibility of expert, finetuned LLMs to support rigorous SLRs and has outlined the technical requirements for realizing this vision. The proposed extended PRISMA-DFLLM checklist of reporting guidelines provides a roadmap for researchers seeking to implement this approach. As we move forward, it is crucial that we continue to explore, validate, and refine this approach, paving the way for a new era of evidence synthesis and knowledge discovery. A new era of academic research is on the horizon, one in which AI-enabled SLRs play a significant role.
[ope()]openai
OpenAI: About.
<https://openai.com/about/>.
Accessed on 14 June 2023.
[Bolton et al.(2022)Bolton, Hall, Yasunaga, Lee, Manning, and
Liang]bolton2022stanford
E. Bolton, D. Hall, M. Yasunaga, T. Lee, C. Manning, and P. Liang.
Stanford crfm introduces pubmedgpt 2.7b.
<https://hai.stanford.edu/news/stanford-crfm-introduces-pubmedgpt-27b>,
2022.
Accessed: 13 June 2023.
[Brown et al.(2020)Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal,
Neelakantan, Shyam, Sastry, Askell, et al.]Brown2020
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal,
A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al.
Language models are few-shot learners.
arXiv preprint arXiv:2005.14165, 2020.
[Chen et al.(2023)Chen, Zhang, Shi, Li, Smola, and
Yang]chen2023parameter
J. Chen, A. Zhang, X. Shi, M. Li, A. Smola, and D. Yang.
Parameter-efficient fine-tuning design spaces.
arXiv preprint arXiv:2301.01821, 2023.
[Curcic()]wordsrated2023
D. Curcic.
Number of academic papers published per year.
<https://wordsrated.com/number-of-academic-papers-published-per-year>.
Accessed on 14th June 2023.
[Dettmers et al.(2023)Dettmers, Pagnoni, Holtzman, and
Zettlemoyer]dettmers2023qlora
T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer.
Qlora: Efficient finetuning of quantized llms, 2023.
[Elliott et al.(2014)Elliott, Turner, Clavisi, Thomas, Higgins,
Mavergames, and Gruen]elliott2017living
J. H. Elliott, T. Turner, O. Clavisi, J. Thomas, J. P. Higgins, C. Mavergames,
and R. L. Gruen.
Living systematic reviews: an emerging opportunity to narrow the
evidence-practice gap.
PLoS Med, 11(2):e1001603, 2014.
[Forsgren et al.(2023)Forsgren, Wallström, Feldthusen, Zechner,
Sawatzky, and Öhlén]forsgren2023use
E. Forsgren, S. Wallström, C. Feldthusen, N. Zechner, R. Sawatzky, and
J. Öhlén.
The use of text-mining software to facilitate screening of literature
on centredness in health care.
Systematic Reviews, 12(1):73, 2023.
[Gui et al.(2023)Gui, Ye, and Xiao]gui2023g
A. Gui, J. Ye, and H. Xiao.
G-adapter: Towards structure-aware parameter-efficient transfer
learning for graph transformer networks.
arXiv preprint arXiv:2305.10329, 2023.
[Hu et al.(2021)Hu, Shen, Wallis, Allen-Zhu, Li, Wang, Wang, and
Chen]hu2021lora
E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and
W. Chen.
Lora: Low-rank adaptation of large language models, 2021.
[Hu et al.(2023)Hu, Lan, Wang, Xu, Lim, Lee, Bing, and
Poria]hu2023llm
Z. Hu, Y. Lan, L. Wang, W. Xu, E.-P. Lim, R. K.-W. Lee, L. Bing, and S. Poria.
Llm-adapters: An adapter family for parameter-efficient fine-tuning
of large language models.
arXiv preprint arXiv:2304.01933, 2023.
[Hutton et al.(2015)Hutton, Salanti, Caldwell, Chaimani, Schmid,
Cameron, Ioannidis, Straus, Thorlund, et al.]hutton2015prisma
B. Hutton, G. Salanti, D. M. Caldwell, A. Chaimani, C. H. Schmid, C. Cameron,
J. P. Ioannidis, S. Straus, K. Thorlund, et al.
The prisma extension statement for reporting of systematic reviews
incorporating network meta-analyses of health care interventions: checklist
and explanations.
Annals of internal medicine, 1620 (11):0
777–784, 2015.
[Knafou et al.(2023)Knafou, Haas, Borissov, Counotte, Low, Imeri,
Ipekci, Buitrago-Garcia, Heron, Amini, et al.]knafou2023ensemble
J. Knafou, Q. Haas, N. Borissov, M. Counotte, N. Low, H. Imeri, A. M. Ipekci,
D. Buitrago-Garcia, L. Heron, P. Amini, et al.
Ensemble of deep learning language models to support the creation of
living systematic reviews for the covid-19 literature.
Systematic Reviews, 120 (1):0 94, 2023.
[Kolaski et al.(2023)Kolaski, Logan, and Ioannidis]Kolaski2023
K. Kolaski, L. R. Logan, and J. P. A. Ioannidis.
Guidance to best tools and practices for systematic reviews.
Systematic Reviews, 120 (1):0 96, 2023.
ISSN 2046-4053.
10.1186/s13643-023-02255-9.
URL <https://doi.org/10.1186/s13643-023-02255-9>.
[Landhuis(2016)]landhuis2016scientific
E. Landhuis.
Scientific literature: Information overload.
Nature, 5350 (7612):0 457–458, 2016.
[Lehman et al.(2023)Lehman, Hernandez, Mahajan, Wulff, Smith, Ziegler,
Nadler, Szolovits, Johnson, and Alsentzer]lehman2023we
E. Lehman, E. Hernandez, D. Mahajan, J. Wulff, M. J. Smith, Z. Ziegler,
D. Nadler, P. Szolovits, A. Johnson, and E. Alsentzer.
Do we still need clinical language models?
arXiv preprint arXiv:2302.08091, 2023.
[Marshall and Wallace(2019)]marshall2019toward
I. J. Marshall and B. C. Wallace.
Toward systematic review automation: a practical guide to using
machine learning tools in research synthesis.
Systematic reviews, 8:0 1–10, 2019.
[Moher et al.(1999)Moher, Cook, Eastwood, Olkin, Rennie, and
Stroup]moher1999improving
D. Moher, D. J. Cook, S. Eastwood, I. Olkin, D. Rennie, and D. F. Stroup.
Improving the quality of reports of meta-analyses of randomised
controlled trials: the quorom statement.
The Lancet, 3540 (9193):0 1896–1900, 1999.
[Moher et al.(2009)Moher, Liberati, Tetzlaff, and
Altman]moher2009preferred
D. Moher, A. Liberati, J. Tetzlaff, and D. G. Altman.
Preferred reporting items for systematic reviews and meta-analyses:
the prisma statement.
PLoS medicine, 60 (7):0 e1000097, 2009.
[Moher et al.(2015)Moher, Shamseer, Clarke, Ghersi, Liberati,
Petticrew, Shekelle, and Stewart]moher2015preferred
D. Moher, L. Shamseer, M. Clarke, D. Ghersi, A. Liberati, M. Petticrew,
P. Shekelle, and L. A. Stewart.
Preferred reporting items for systematic review and meta-analysis
protocols (prisma-p) 2015 statement.
Systematic reviews, 40 (1):0 1–9, 2015.
[Muller et al.(2023)Muller, Berg, Meneses-Echavez, Ames, Borge, Jardim,
Cooper, and Rose]muller2023effect
A. E. Muller, R. C. Berg, J. F. Meneses-Echavez, H. M. Ames, T. C. Borge,
P. S. J. Jardim, C. Cooper, and C. J. Rose.
The effect of machine learning tools for evidence synthesis on
resource use and time-to-completion: protocol for a retrospective pilot
study.
Systematic Reviews, 120 (1):0 1–8, 2023.
[O’Connor et al.(2019)O’Connor, Tsafnat, Thomas, Glasziou, Gilbert,
and Hutton]o2019question
A. M. O’Connor, G. Tsafnat, J. Thomas, P. Glasziou, S. B. Gilbert, and
B. Hutton.
A question of trust: can we build an evidence base to gain trust in
systematic review automation technologies?
Systematic reviews, 80 (1):0 1–8, 2019.
[Page et al.(2021)Page, McKenzie, Bossuyt, Boutron, Hoffmann, Mulrow,
Shamseer, Tetzlaff, Akl, Brennan, et al.]page2021prisma
M. J. Page, J. E. McKenzie, P. M. Bossuyt, I. Boutron, T. C. Hoffmann, C. D.
Mulrow, L. Shamseer, J. M. Tetzlaff, E. A. Akl, S. E. Brennan, et al.
The prisma 2020 statement: an updated guideline for reporting
systematic reviews.
International journal of surgery, 88:0 105906, 2021.
[Qureshi et al.(2023)Qureshi, Shaughnessy, Gill, Robinson, Li, and
Agai]qureshi2023chatgpt
R. Qureshi, D. Shaughnessy, K. A. Gill, K. A. Robinson, T. Li, and E. Agai.
Are chatgpt and large language models “the answer” to bringing us
closer to systematic review automation?
Systematic Reviews, 120 (1):0 72, 2023.
[Sarkis-Onofre et al.(2021)Sarkis-Onofre, Catalá-López,
Aromataris, and Lockwood]sarkis2021properly
R. Sarkis-Onofre, F. Catalá-López, E. Aromataris, and C. Lockwood.
How to properly use the prisma statement.
Systematic Reviews, 100 (1):0 1–3, 2021.
[Shea et al.(2017)Shea, Reeves, Wells, Thuku, Hamel, Moran, Moher,
Tugwell, Welch, Kristjansson, et al.]shea2017amstar
B. J. Shea, B. C. Reeves, G. Wells, M. Thuku, C. Hamel, J. Moran, D. Moher,
P. Tugwell, V. Welch, E. Kristjansson, et al.
Amstar 2: a critical appraisal tool for systematic reviews that
include randomised or non-randomised studies of healthcare interventions, or
both.
bmj, 358, 2017.
[Sheng et al.(2021)Sheng, Chang, Natarajan, and
Peng]sheng2021societal
E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng.
Societal biases in language generation: Progress and challenges.
arXiv preprint arXiv:2105.04054, 2021.
[Singhal et al.(2022)Singhal, Azizi, Tu, Mahdavi, Wei, Chung, Scales,
Tanwani, Cole-Lewis, Pfohl, et al.]singhal2022large
K. Singhal, S. Azizi, T. Tu, S. S. Mahdavi, J. Wei, H. W. Chung, N. Scales,
A. Tanwani, H. Cole-Lewis, S. Pfohl, et al.
Large language models encode clinical knowledge.
arXiv preprint arXiv:2212.13138, 2022.
[Singhal et al.(2023)Singhal, Tu, Gottweis, Sayres, Wulczyn, Hou,
Clark, Pfohl, Cole-Lewis, Neal, et al.]singhal2023towards
K. Singhal, T. Tu, J. Gottweis, R. Sayres, E. Wulczyn, L. Hou, K. Clark,
S. Pfohl, H. Cole-Lewis, D. Neal, et al.
Towards expert-level medical question answering with large language
models.
arXiv preprint arXiv:2305.09617, 2023.
[Stevens et al.(2018)Stevens, Garritty, Hersi, and
Moher]stevens2018developing
A. Stevens, C. Garritty, M. Hersi, and D. Moher.
Developing prisma-rr, a reporting guideline for rapid reviews of
primary studies (protocol).
Equator Network, 2018.
[Stewart et al.(2015)Stewart, Clarke, Rovers, Riley, Simmonds, Stewart,
and Tierney]stewart2015preferred
L. A. Stewart, M. Clarke, M. Rovers, R. D. Riley, M. Simmonds, G. Stewart, and
J. F. Tierney.
Preferred reporting items for systematic review and meta-analyses of
individual participant data: the prisma-ipd statement.
Jama, 3130 (16):0 1657–1665, 2015.
[Stroup et al.(2000)Stroup, Berlin, Morton, Olkin, Williamson, Rennie,
Moher, Becker, Sipe, Thacker, et al.]stroup2000meta
D. F. Stroup, J. A. Berlin, S. C. Morton, I. Olkin, G. D. Williamson,
D. Rennie, D. Moher, B. J. Becker, T. A. Sipe, S. B. Thacker, et al.
Meta-analysis of observational studies in epidemiology: a proposal
for reporting.
Jama, 2830 (15):0 2008–2012, 2000.
[Touvron et al.(2023)Touvron, Lavril, Izacard, Martinet, Lachaux,
Lacroix, Rozière, Goyal, Hambro, Azhar, et al.]touvron2023llama
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix,
B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al.
Llama: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971, 2023.
[Tricco et al.(2018)Tricco, Lillie, Zarin, O'Brien, Colquhoun, Levac,
Moher, Peters, Horsley, Weeks, et al.]tricco2018prisma
A. C. Tricco, E. Lillie, W. Zarin, K. K. O'Brien, H. Colquhoun, D. Levac,
D. Moher, M. D. Peters, T. Horsley, L. Weeks, et al.
Prisma extension for scoping reviews (prisma-scr): checklist and
explanation.
Annals of internal medicine, 1690 (7):0
467–473, 2018.
[Tsafnat et al.(2014)Tsafnat, Glasziou, Choong, Dunn, Galgani, and
Coiera]tsafnat2014systematic
G. Tsafnat, P. Glasziou, M. K. Choong, A. Dunn, F. Galgani, and E. Coiera.
Systematic review automation technologies.
Systematic reviews, 3:0 1–15, 2014.
[Wang et al.(2022)Wang, Agarwal, Mukherjee, Liu, Gao, Awadallah, and
Gao]wang2022adamix
Y. Wang, S. Agarwal, S. Mukherjee, X. Liu, J. Gao, A. H. Awadallah, and J. Gao.
Adamix: Mixture-of-adaptations for parameter-efficient model tuning.
arXiv preprint arXiv:2210.17451, 2022.
[Whiting et al.(2016)Whiting, Savović, Higgins, Caldwell, Reeves,
Shea, Davies, Kleijnen, Churchill, et al.]whiting2016robis
P. Whiting, J. Savović, J. P. Higgins, D. M. Caldwell, B. C. Reeves,
B. Shea, P. Davies, J. Kleijnen, R. Churchill, et al.
Robis: a new tool to assess risk of bias in systematic reviews was
developed.
Journal of clinical epidemiology, 69:0 225–234, 2016.
[Wu et al.(2023)Wu, Irsoy, Lu, Dabravolski, Dredze, Gehrmann, Kambadur,
Rosenberg, and Mann]wu2023bloomberggpt
S. Wu, O. Irsoy, S. Lu, V. Dabravolski, M. Dredze, S. Gehrmann, P. Kambadur,
D. Rosenberg, and G. Mann.
Bloomberggpt: A large language model for finance.
arXiv preprint arXiv:2303.17564, 2023.
[Zhou et al.(2023)Zhou, Liu, Xu, Iyer, Sun, Mao, Ma, Efrat, Yu, Yu,
et al.]zhou2023lima
C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu,
et al.
Lima: Less is more for alignment.
arXiv preprint arXiv:2305.11206, 2023.
[Zorzela et al.(2016)Zorzela, Loke, Ioannidis, Golder, Santaguida,
Altman, Moher, Vohra, et al.]zorzela2016prisma
L. Zorzela, Y. K. Loke, J. P. Ioannidis, S. Golder, P. Santaguida, D. G.
Altman, D. Moher, S. Vohra, et al.
Prisma harms checklist: improving harms reporting in systematic
reviews.
bmj, 352, 2016.
|
http://arxiv.org/abs/2306.04832v1
|
20230607232307
|
Stationary rotating and axially symmetric dust systems as peculiar General Relativistic objects
|
[
"Matteo Luca Ruggiero"
] |
gr-qc
|
[
"gr-qc"
] |
[email protected]
Dipartimento di Matematica “G.Peano”, Università degli Studi di Torino, Via Carlo Alberto 10, 10123 Torino, Italy
INFN - LNL, Viale dell'Università 2, 35020 Legnaro (PD), Italy
We study an exact solution of Einstein's equations describing a self-gravitating system, made of dust, distributed with axial symmetry and in stationary rotation, and we prove that this type of system has no Newtonian analogue. In a low-energy limit, its existence depends on the solution of a Grad-Shafranov equation in vacuum which can be interpreted as a Laplace equation for the toroidal component of the gravitomagnetic potential; in particular, in this system the relativistic rotational effects are of the same order of magnitude as the Newtonian ones. We therefore argue that this exact solution should contain singularities and discuss the possible consequences of using such a system as a simplified model for galactic dynamics.
Stationary rotating and axially symmetric dust systems as peculiar General Relativistic objects
Matteo Luca Ruggiero
July 31, 2023
===============================================================================================
§ INTRODUCTION
General Relativity (GR) is the best model that we have to describe gravitational interactions: one century after its birth, we know that it has passed numerous tests with great success and helped to greatly improve our knowledge of the near and far Universe <cit.>. The Einsteinian picture drastically changed our understanding of the structure of spacetime; however, GR effects can often be considered small corrections with respect to the Newtonian theory of gravitation, at least in regions where the gravitational field is weak and the speeds are small compared to the speed of light, which is reasonably true in the terrestrial environment and in the Solar System. Nonetheless, extreme astrophysical events exist in which spacetime is greatly deformed by the presence of very compact objects that are fast moving or rotating. In these cases, new phenomena arise which possess no Newtonian analogue at all: just to mention a few of them, we can think of the existence of neutron stars, black holes and of the emission of gravitational waves.
Even in conditions of weak gravitational field there are GR effects without a Newtonian analogue: this is the case of the so-called gravitomagnetic effects <cit.> which, roughly speaking, are determined by mass currents. As a matter of fact, it is not true that GR effects are always (much) smaller than the corresponding Newtonian ones, since the latter at times simply do not exist; but even if they do exist, the situation is not always straightforward. In fact, if we consider the bending of a light ray by a source like the Sun, we know that it can be calculated using a Newtonian approach, but the result differs by a factor 2 from the general relativistic one <cit.>: hence, both the Newtonian and the GR effect are of the same order of magnitude.
The purpose of this paper is to discuss another situation where, surprisingly enough, GR and Newtonian effects are expected to be of the same order of magnitude and, in addition, the very existence of the system under consideration would not be possible in a classical, i.e. Newtonian, framework. The motivation derives from this simple question: “Do there exist general relativistic self-gravitating systems, made of dust, in stationary and axially symmetric rotation?”. After analyzing the question in the context of the exact solutions of Einstein's equations, we suggest that if such a system can be used as a model for a galaxy, its dynamics, i.e. the rotation curves, are also influenced by peculiar relativistic effects.
§ THE EXACT SOLUTION
We use cylindrical coordinates (ct,ϕ,r,z ) and the signature is (-1,1,1,1); due to the symmetry of the system, we know that matter is allowed to flow along the Killing vectors ∂_t and ∂_ϕ: as a consequence, all functions considered will depend on the coordinates (r,z) only. Accordingly, we may write the energy momentum tensor T^μν = ρ u^μ u^ν, where ρ is the energy density (ρ=ρ_mc^2, where ρ_m is the matter density) and u^μ is the fluid four-velocity. Given these symmetries and matter distribution, Einstein's equations can be integrated up to quadratures, using techniques that can be traced to the work of <cit.>,<cit.>, <cit.>, <cit.>, <cit.>: a summary of the approaches to these kinds of exact solutions can be found in the textbook by <cit.>, where it is shown that the solution of Einstein's equations is completely determined by the choice of a negative function H(η), on which the physical properties depend: the meaning of η will be clarified below. Accordingly, the fluid velocity can be written as u^μ= 1/√(-H)(1,Ω,0,0 ), where Ω= dϕ/dt= u^ϕ/u^t is the angular velocity of the fluid as seen by observers at rest with respect to the given set of coordinates. The function H(η) depends on the existence of the auxiliary function ℱ(η)[We use the following notation: for any function of one argument, like H(η), with a prime we mean the derivative with respect to its argument; in addition, we use a comma to indicate partial derivative with respect to a given coordinate.]
ℱ=2η+r^2∫H'/Hdη/η-∫H'/Hη dη,
which needs to identically satisfy the equation
ℱ_,rr-1/rℱ_,r+ℱ_,zz=0.
Once that H(η),ℱ(η) have been determined, it is possible to obtain the fluid angular velocity
Ω=1/2∫ H'dη/η.
In summary, the metric components read
g_tt = (H-ηΩ)^2-r^2Ω^2/H,
g_tϕ = η^2-r^2/(-H)Ω+η,
g_ϕϕ = r^2-η^2/(-H).
and the remaining metric components g_zz=g_rr=:e^Ψ can be obtained using the following equations
Ψ_,r = 1/2r[ (g_tt)_,r (g_ϕϕ)_,r-(g_tt)_,z (g_ϕϕ)_,z - ((g_tϕ)_,r )^2+((g_tϕ)_,z )^2 ] ,
Ψ_,z= 1/2r[ (g_tt)_,z (g_ϕϕ)_,r + (g_tt)_,r (g_ϕϕ)_,z - 2 (g_tϕ)_,r (g_tϕ)_,z] .
Moreover, the energy density is given by
8π Gρ=η^2 r^-2(2-η l)^2-r^2l^2/4g_rrη_,r^2+η_,z^2/η^2,
where l= H'/H.
We can learn more about the meaning of this solution if we consider the Zero Angular Momentum Observers (ZAMO) <cit.>: as we discussed in <cit.>, the metric can be written in the form
ds^2 = Hγ^2 c^2dt^2 - r^2 1/Hγ^2(dϕ-χ dt )^2+ e^Ψ(dr^2+dz^2),
where γ=1/√(1-v^2/c^2), v being the velocity of the dust fluid as measured by the ZAMO, and χ≡ - g_tϕ/g_ϕϕ= H η/(r^2-η^2)+Ω is the angular velocity of the ZAMO as seen by asymptotic inertial observers at infinity. In addition, η=vr, so this function is related to the angular momentum per unit mass of a dust element. It is possible to obtain the following relation
r Ω = r χ -v γ^2 H
between the coordinate velocity rΩ of the dust, its corresponding ZAMO expression v and the ZAMO's velocity rχ.
The metric (<ref>) is non time-orthogonal, because g_0i≠ 0: this is an expected feature, since these off-diagonal terms are generally related to the rotational features of the reference frame and to the rotation of the sources of the gravitational field <cit.>. In particular, from g_0i it is possible to formally introduce a gravitomagnetic potential
A_i=-c^2g_0i/2
which, in the weak-field and slow-motion approximation, enables to describe the motion of free test particles in terms of the action of a Lorentz-like force equation, exploiting the gravitoelectromagnetic analogy <cit.>. In our case the gravitomagnetic effects are related to the function χ, since g_0ϕ=r^2χ/Hγ^2 and A= A_ϕ e_ϕ.
§ THE EQUILIBRIUM CONDITIONS
The exact solution considered describes the motion of a dust fluid; from the conservation law of the energy-momentum tensor T^μν_ ;ν=0, we deduce that the dust elements are in geodesic motions, which by construction are circular trajectories in planes at constant z. Let us see a first consequence of this hypothesis. For simplicity, we define the function a=-Hγ^2 in the metric (<ref>), and then we write the Lagrangian
ℒ=1/2[(-a+r^2/aχ^2)c^2-2r^2χ/aϕ̇+r^2/aϕ̇^2+ e^Ψ(ṙ^2+ż^2) ],
where dot means derivative with respect to the coordinate time. Now, we are interested in the components of the geodesics in the z direction: we get ∂ℒ/∂ż=e^Ψż and, on setting z=const, from the Euler-Lagrange equation we get ∂ℒ/∂ z=0, or
∂/∂ z(-a+r^2/aχ^2)c^2+∂/∂ z(-2χ/a) r^2ϕ̇+∂/∂ z(1/a)r^2ϕ̇^2=0.
If we suppose that χ=0, we get
∂ a/∂ z(c^2+r^2ϕ̇^2/a^2)=0.
So, in this case circular geodesics at z=const are realizable only if the system has cylindrical symmetry, which means that it is not possible to obtain a compact structure. Actually, this is what happens in Newtonian gravity, where no compact or finite dust object can exist, as <cit.> pointed out.
Until now, we made no assumptions on the nature of the system we are considering. If we suppose that we refer to an actual physical system, it is reasonable to expect that this solution can be used to describe some low-energy limits and, in this condition, the exact metric (<ref>) can be expanded in negative powers of c, as it is customary in the post-Newtonian development <cit.> . Accordingly, we may write a=1-2U/c^2+O(c^-4), where U is the gravitoelectric or Newtonian potential[Notice that U is defined in analogy with electromagnetism and differs by a minus sign from the standard definition of the Newtonian potential.]; to simplify the results, we introduce the function ψ=χ r^2. Now, we consider the Euler-Lagrange equations for the coordinates r,z and, by hypothesis, we set z=const, r=const to describe the geodesic motions of the dust fluid. We get
0 = ∂ U/∂ r+ψ/r∂/∂ r(ψ/r)-∂ψ/∂ rϕ̇+rϕ̇^2
0 = ∂ U/∂ z+ψ/r^2∂ψ/∂ z-∂ψ/∂ zϕ̇
We see from the above equations that, in order to get the equilibrium for the geodesic motions, the effects determined by ψ need to be of the same order as the Newtonian ones.
In addition, using relations (<ref>) and (<ref>), we can calculate the expression of the function ℱ from (<ref>), and we get ℱ=-2ψ. Accordingly, the function ψ satisfies the equation
ψ_,rr-1/rψ_,r+ψ_,zz=0.
which is in the form of the homogeneous Grad-Shafranov equation <cit.>. The latter equation is often used to describe the equilibrium of a two-dimensional plasma in magnetohydrodynamics; in particular, it is easy to show that, using the definition of the gravitomagnetic potential (<ref>), the above equation (<ref>) can be regarded as a Laplace equation for the gravitomagnetic vector potential A=A_ϕ e_ϕ:
∇^2 A=0
If we compare the above equation with the corresponding one obtained in the weak-field and slow-motion approximation of Einstein's
equations <cit.>, ∇^2 A = -(8π G/c) j,
where j is the mass-energy current, we see that the gravitomagnetic potential determined by ψ is not generated by the local mass distribution; rather, its sources should be located elsewhere and, as we show below, they have a singular behaviour at infinity or along the symmetry axis.
To avoid misunderstandings, it is important to give the right meaning to words. Even if there are various gravitoelectromagnetic analogies that arise in GR <cit.>, gravitomagnetic effects are generally understood as the solutions of Eq. (<ref>), while the gravitoelectric ones are the solutions
of the corresponding equation for the gravitoelectric or Newtonian potential. Properly, in the Newtonian limit, U reduces to the Newtonian gravitational potential, while A_i = O(c^-1) <cit.>.
But what we are focusing on here is different: in fact the solutions of Eqs (<ref>) or (<ref>) by no means go to zero as c→∞. From now on, we will call them rotation effects (or homogeneous solutions, as we did in our previous works <cit.>) to distinguish them from the popular gravitomagnetic ones.
The fact that the Grad–Shafranov equation in vacuum coincides with the Laplace equation for the toroidal component of the vector potential <cit.> suggests that the solutions can be found in analogy with electromagnetism. For instance, since A= m ∧ x/| x|^3 is the solution of the Laplace equation describing the vector potential of a magnetic-dipole m, we see that
ψ =m r^2/(r^2+z^2)^3/2
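As a quick consistency check (a symbolic sketch added here for illustration, not part of the original derivation), one can verify that this dipole-like ψ indeed satisfies the vacuum Grad-Shafranov equation above:

```python
import sympy as sp

r, z, m = sp.symbols('r z m', positive=True)

# Dipole-like rotation potential: psi = m r^2 / (r^2 + z^2)^(3/2)
psi = m * r**2 / (r**2 + z**2)**sp.Rational(3, 2)

# Vacuum Grad-Shafranov operator: psi_rr - psi_r / r + psi_zz
grad_shafranov = sp.diff(psi, r, 2) - sp.diff(psi, r) / r + sp.diff(psi, z, 2)

print(sp.simplify(grad_shafranov))  # prints 0: psi solves the vacuum equation
```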
is an exact solution of the above equation (<ref>). More generally speaking, it is possible to obtain the solution of the above equation (<ref>) as a multipole expansion (see e.g. <cit.>): using spherical coordinates R,Θ,φ, the solutions are in the form
ψ(R,θ)=∑_n=2^∞(α_nR^n+β_nR^1-n) sinθ P^1_n-1(cosθ)
where P^1_n-1(cosθ) are the Legendre functions, and α_n,β_n are arbitrary constants. Notice that the solutions which multiply α_n are regular along the symmetry axis, while the others are regular at infinity. In particular, the solution (<ref>) corresponds to α_2=0 and β_2=m and the other terms are null.
We remark that if we suppose that our system has a finite extension, the solutions that are regular at the origin do not necessarily give singularities at infinity, because it is expected that the internal solution described by (<ref>) should be matched to an external solution that extends to infinity.
§ DISCUSSION AND CONCLUSIONS
We considered a self-gravitating system, made of an axially symmetric dust fluid in stationary rotation: the metric elements of the corresponding exact solution of Einstein's equations are given by Eqs. (<ref>)-(<ref>). These elements are completely determined by the choice of the negative function H(η), taking into account the auxiliary function ℱ(η) which satisfies the condition expressed by Eq. (<ref>). Since, by hypothesis, the system is made of dust particles, their motion is geodesic: accordingly, the solution of the geodesic equations must give circular spatial trajectories at constant z coordinate.
A first point that needs to be stressed is that the very existence of this system rests on the presence of the rotation effects determined by the solution of Eq. (<ref>) in the low-energy limit, or of Eq. (<ref>) in the exact solution: in fact, if they were absent, the system would be cylindrically symmetric, i.e. with infinite extension along the symmetry axis. This is what happens in Newtonian gravity, where it is impossible to build a bounded system in stationary rotation with axial symmetry: so, the system that we are considering is peculiar since it has no Newtonian analogue.
A second important point is that an inspection of the geodesic equation (<ref>) reveals that, to have an equilibrium along the symmetry axis, the rotation effects determined by ψ, and deriving from the off-diagonal terms in the spacetime metric, cannot be negligible with respect to the Newtonian ones, represented by U.
The rotation effects stem from the solution of the vacuum Grad-Shafranov equation, which can be interpreted as a Laplace equation for the toroidal component of the gravitomagnetic potential. Consequently, what we have shown suggests that, if they exist, these exact solutions of Einstein's equations should have singularities. This is not surprising: in fact, a particular case of this class is represented by the Balasin and Grumiller solution <cit.>, which describes a rigidly rotating (i.e. Ω=const) dust <cit.>. A recent analysis by <cit.> shows that this solution contains singularities along the axis, namely a pair of NUT rods and a cosmic string; we recall that a Newman–Unti–Tamburino (NUT) spacetime is a solution of Einstein's equations that generalises the Schwarzschild solution since, in addition to the mass parameter, it contains a second parameter, the so-called NUT charge, that can be interpreted as a gravitomagnetic monopole (see e.g. <cit.> and references therein).
Seemingly, the solution of Einstein's equation describing a self-gravitating system, made of an axially symmetric dust fluid in stationary rotation, requires a vacuum solution for the rotation term ψ. Notice that, as discussed by <cit.>, in a low-energy limit, these vacuum solutions become sources of the Poisson equation for the Newtonian potential:
∇^2 U+ [(∂_zψ)^2+(∂_rψ-2 ψ/r)^2]/(2 r^2)=-4π Gρ_m
Actually, this is not surprising since the same happens for the exact axially symmetric solutions of Einstein equations in vacuum (see e.g. <cit.>). Differently speaking, our approach naturally suggests that spacetime curvature, through the rotation term ψ, modifies the interplay between the sources of the gravitational field and the Newtonian potential U: the key point is that these additional sources are not necessarily small. The ψ term contributes an effective matter density of the form
ρ_ψ=1/(4π G) [(∂_zψ)^2+(∂_rψ-2 ψ/r)^2]/(2 r^2).
In particular, for the dipole solution (<ref>) we get ρ_ψ=1/(8π G) ( 9m^2r^2/(r^2+z^2)^4 ), which is rapidly increasing approaching the origin. On the other hand, if we consider a solution in the form ψ=α_3zr^2, we get ρ_ψ=1/(8π G) (α_3^2 r^2), which is smooth at the origin and has cylindrical symmetry.
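Both effective densities follow directly from the definition of ρ_ψ given above; the following symbolic sketch (an illustration added here, not part of the original paper) reproduces them:

```python
import sympy as sp

r, z, m, a3, G = sp.symbols('r z m alpha_3 G', positive=True)

def rho_psi(psi):
    # rho_psi = [ (d_z psi)^2 + (d_r psi - 2 psi / r)^2 ] / (8 pi G r^2)
    num = sp.diff(psi, z)**2 + (sp.diff(psi, r) - 2 * psi / r)**2
    return sp.simplify(num / (2 * r**2) / (4 * sp.pi * G))

psi_dipole = m * r**2 / (r**2 + z**2)**sp.Rational(3, 2)
psi_linear = a3 * z * r**2

print(rho_psi(psi_dipole))  # equivalent to 9 m^2 r^2 / (8 pi G (r^2 + z^2)^4)
print(rho_psi(psi_linear))  # equivalent to alpha_3^2 r^2 / (8 pi G)
```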
Eq. (<ref>) can be interpreted in a Machian sense, since the state of the system, i.e. its rotation with respect to asymptotical inertial observers, determines the local effective mass distribution which is the source of the Poisson equation.
Our analysis is quite general and does not depend on the choice of a specific system, which can be defined only when a given mass distribution is taken into account. Conversely, it shows that the existence of such a system is determined by rotation effects which are of the same order of magnitude as the Newtonian ones: in other words, this is a purely relativistic system, which cannot be studied in analogy with the Newtonian case, but only using the framework of General Relativity.
The simple question: “are there any self-gravitating systems, made of dust, stationary rotating with axial symmetry?” seemingly leads us to the key role of rotation effects, that are naturally incorporated in GR, but absent in pre-relativistic gravity. We also emphasize that the system under consideration is by no means exotic, but it is made of the simplest kind of matter: dust. We argue that the existence of this type of systems can be intended as a test of GR and we suggest that astrophysics is a natural scenario to look for possible candidates.
In particular, in previous works <cit.>
we focused on the relevance of such systems in the study of galactic dynamics; in doing so, we assumed that a galaxy can be modelled as a rotating dust system: the present analysis suggests that for such a model to exist it would have to contain singularities. It is important to emphasize that the application of the solution considered in this and our previous papers to the galactic dynamic problem is different from the approach proposed by <cit.>, who considered the gravitomagnetic effects originating from mass currents into the solution of Einstein equations in weak-field and slow-motion approximation. In fact, as <cit.> pointed out, in this regime, mass currents produce post-Newtonian effects and their impact is negligibly small with respect to the dominant Newtonian ones.
A recent work by <cit.> focuses on an exact vacuum solution of GR equations, describing two rotating massive black holes of equal masses carrying opposite NUT charges along the symmetry axis; in this work, the possibility is suggested that the flattening of the galactic rotation curves <cit.> could be a consequence of these singular energy-momentum distributions, positioned along the rotation axis at a distance much larger than the visible spatial extent of the galaxy. In particular, the rotation effects are in the form of a dipole contribution like (<ref>). The author points out that his results are just preliminary but, in view of our analysis, they appear intriguing.
We are aware that there is no guarantee that a galaxy can be described as a rotating dust fluid: however, if this can be done, at least for a very simplified model, our analysis suggests that singularities can play a role in its dynamics. It is relevant to point out that there are suggestions that collapsed objects could be described by a Kerr-Taub-NUT <cit.> spacetime, instead of a Kerr spacetime (see <cit.> and references therein): in other words, the debate about the nature of the singularity hosted by a galaxy is indeed open. As a matter of fact, after one century of relativity, we have learned that exact solutions of Einstein's equations must be taken seriously, even when they exhibit strange behaviour: in fact, we now have concrete evidence that a black hole exists at the center of a galaxy <cit.> and, accordingly, the Schwarzschild or, more generally, the Kerr metric can be a faithful description of natural phenomena.
In conclusion, we have shown that the presence of ψ introduces a richer geometric structure, whose impact on the effective sources of the Newtonian potential U is not trivial; furthermore, the geodesic equations are greatly influenced by the presence of the rotation term ψ. As a result, we expect that rotational effects
can have a twofold impact on the system dynamics. In this regard, we emphasize once again that we do not want to claim that GR solutions can explain rotation curves without dark matter:
rather, we suggest that there are hints that if a galaxy (or at least a limited region of it) can be modeled in this way, the curvature of spacetime may play a role in its dynamics. This fact should be studied further, to better understand the geometric structure and, if applicable, the impact of dark matter.
In any case, we believe that these systems of self-gravitating dust in stationary rotation, due to their peculiar relativistic nature, deserve further attention to understand if they can be considered a model of real astrophysical objects.
§ ACKNOWLEDGMENTS
The author thanks Dr. Davide Astesiano for his collaboration on these topics and acknowledges the contribution of the local research project Modelli gravitazionali per lo studio dell'universo (2022) - Dipartimento di Matematica “G.Peano”, Università degli Studi di Torino; this work is done within the activity of the Gruppo Nazionale per la Fisica Matematica (GNFM).
|
http://arxiv.org/abs/2306.09610v1
|
20230616035842
|
CHORUS: Foundation Models for Unified Data Discovery and Exploration
|
[
"Moe Kayali",
"Anton Lykov",
"Ilias Fountalis",
"Nikolaos Vasiloglou",
"Dan Olteanu",
"Dan Suciu"
] |
cs.DB
|
[
"cs.DB",
"cs.LG"
] |
CHORUS: Foundation Models for Unified Data Discovery and Exploration
Moe Kayali, Anton Lykov, Ilias Fountalis, Nikolaos Vasiloglou, Dan Olteanu, Dan Suciu
July 31, 2023
=====================================================================================
§ INTRODUCTION
Data discovery and exploration are major components of the workflow of analysts and data scientists. A survey conducted by the Anaconda data-science platform in 2021 found that analysts spend 40% of their working hours on data loading and cleaning <cit.>. Even with this colossal effort, 60-70% of data within an enterprise still goes unused for analytics <cit.>, remaining as dark data <cit.>.
Recent developments in large language models (llms) have unlocked human-level performance on diverse domain tasks. The discovery that these models can generalize to diverse domain-specific tasks that they have not been trained on <cit.> has led to the emergence of the term foundation models <cit.>.
Despite their promise, serious risks have hampered the reception of foundation models. These include: spurious generation (including “hallucination”) <cit.>, factual recall limitations <cit.>, bias <cit.>, dataset contamination <cit.>, logical shortcuts <cit.> and fallacies <cit.>. Naïve deployment can lead to unanticipated problems: it has already led to legal action <cit.> and recalls by major corporations <cit.>. These risks are now acknowledged by even the creators of these models <cit.>.
The goal of this paper is to demonstrate the utility of foundation models to the data discovery and exploration domain while mitigating the aforementioned risks. We select three representative tasks to show the promise of foundation models: 1 table-class detection, 2 column-type annotation and 3 join-column prediction. An outline of our approach is shown in Figure <ref>. We call this approach chorus.
Contributions We summarize our contributions:
– The first work to use foundation models for the data discovery tasks of table-class detection, column-type annotation and join-column prediction;
– Propose a novel system, chorus, whose flexible architecture enables the synthesis of multiple data discovery tasks and the deployment of risk mitigations;
– Design task-specific approaches that exploit zero- and few-shot strategies and allow information flow between tasks;
– Introduce novel mitigations, including nearest-neighbor matching and anchoring, to reduce foundation-model risks specific to this domain;
– Empirically validate chorus, comparing its performance with the state-of-the-art baselines across three individual tasks.
Discussion Prior work has addressed these tasks individually. Landmark approaches like Sherlock <cit.> trained deep model architectures for a specific task, requiring 100K-1M labeled data points. More recent work such as DoDuo <cit.> and TaBERT <cit.> has focused on representation learning, learning embeddings for structured data by improving their performance on one or more downstream tasks.
Foundation models allow a substantially different approach: rather than the classical architecture where the outputs of the model are task-specific, the inputs and outputs of the model are natural language text. Training occurs not on tables or data management tasks specifically, but on general text. Performance on domain-specific tasks is solely by generalization.
This results in a high degree of flexibility. Novel tasks can be specified in natural text, without the need for expensive data collection—shown with the example prompts in Figure <ref>. Another advantage of this approach is a unified architecture: tasks can utilize the overall context and previous outputs. For example, in Figure <ref> the table class of can help with deducing the outputs of and in the next task.
Outline Section <ref> defines the three tasks investigated in this paper. Section <ref> describes the architecture of chorus and key approaches. We evaluate the performance of chorus in Section <ref>'s experiments and offer a discussion of those results in Section <ref>. This includes a discussion of promising directions in Section <ref>. Finally, we place this work within the literature in Section <ref>, discussing related works.
§ BACKGROUND
§.§ Tasks
We assume to be given a data collection consisting of a number of relational tables T_1, T_2, …. Each table T_i consists of a number of columns, or attributes, A_1, A_2, … and a number of rows, or tuples, r_1, r_2, … The name of a table T_i is, in general, non-informative, for example it may be simply a sequential id. The columns may optionally have a name H_1, H_2, … or consist only of values.
In addition to the data collection, we are also given a reference ontology of table classes C_1, C_2, …, and a reference ontology of column types τ_1, τ_2,…. For example, the DBPedia.org types for the table classes include <https://dbpedia.org/ontology/Lake>, <https://dbpedia.org/ontology/Actor> and <https://dbpedia.org/ontology/Continent> and column types include <https://dbpedia.org/ontology/areaTotal> and <https://dbpedia.org/ontology/birthDate>.
We consider three tasks of interest to perform over this data collection.
For each table T_i, determine its appropriate class C_j, such that every row r_1, r_2, … represents an instance of the C_j type. See <cit.> for more information.
For example, table-class detection on the table given in Figure <ref> could output , since each row of that table is an instance of that class. Alternatively stated, the table is about s.
For each table T_i, find a mapping from its attributes (columns)
A_1, A_2, … to the reference column types
τ_1, τ_2, …, such that each value in A_i is an instance of the τ_i type. See <cit.>.
For example, column-type annotation on the first column in Figure <ref> could output , since the values are the respective manufacturers of each .
Assume an execution log L, which maps many (T_i, T_j) → (A_k, A_l) where A_k ∈ T_i, A_l ∈ T_j. Given two tables T and T', with columns A_1, … and A'_1, … respectively, the join-column prediction task is to suggest a pair (A_k, A'_l) of columns such that the equality condition A_k=A'_l, which can be used to join the two tables, matches the choice in the execution log L. For more discussion, see <cit.>.
For example, given the table in Figure <ref> and another table , join-column prediction could output . The correctness of the prediction depends on the ground truth of which columns the user did in fact join on.
§.§ Foundation Models
We discuss some properties of foundation models relevant to this project. For an overview of foundation models and their capabilities, we recommend this comprehensive treatment <cit.>. For the following sections, we examine the capabilities of the GPT-3.5 model <cit.>.
Relational data We find that foundation models have the capability of parsing relational data. Consider for example the input table in Figure <ref>. We serialize the table into a comma-separated values (csv) format. Inputting this into a foundation model, in this case GPT-3.5, easily shows that the model can reason about the relational structure of the data, as seen in Figure <ref>.
From the English utterances "header row" and "second column of the third data row", the model is able to reason about the provided table and output the intended values. This requires understanding of schemas, relations, tuples, attributes and values. Thus all the basic blocks of the relational model are present in the model.
Ontologies Foundation models contain knowledge of ontologies such as DPBedia.org, Freebase and Wikidata. We focus on universal ontologies, that is, ontologies that aim to represent all entities in general. This is in-line with findings that foundation models encode highly technical knowledge, such as clinical reasoning <cit.> or electrical engineering principles <cit.>.
As earlier, we demonstrate with an example in Figure <ref>.
This shows that the model encodes information about popular ontology classes and properties. Note that this information is not necessarily complete nor correct. We emphasize the current generation of foundation models does not have access to data lookup abilities, despite them occasionally generating output claiming to have done precisely that <cit.>.
§ APPROACH
We outline the structure of chorus in this section. First, we explore the core idea of ingesting relational data with foundation models and performing data exploration tasks in Subsection <ref>. Next, we describe the necessary post-processing and mitigations we develop in Subsection <ref>.
Figure <ref> shows the architecture of the system. Chorus has a unified architecture which runs multiple tasks in the same context, allowing for information flow. Each task is run sequentially, with the output of one task fed as context into future tasks.
For each task instance, Chorus generates a prompt by concatenating six inputs: instructions, demonstrations, data samples, metadata, task-specific knowledge and prefixes. These are color-coded in Figure <ref>. This natural language input is then fed to the foundation model. The output is then subject to post-processing: checks of parsability and feasibility are conducted. If these pass, the output is extracted. Otherwise, we activate a mitigation we denote anchoring to repair the error and prevent its propagation.
§.§ Model Inputs
We discuss what inputs are provided to the foundation model and how they are pre-processed and synthesized. We discuss the six components of the Model Inputs module in Figure <ref>, individually. These correspond to the six color-coded prompt components in Figure <ref>. Once generated, all the above inputs are concatenated into a single prompt provided to the model.
Instructions A description of the specific task (table-class detection, column-type annotation or join-column prediction) is provided to the foundation model in natural language. These are shown in yellow in Figure <ref>. For example, we translate the formal Definition <ref> of the first task, table-class detection, into the English sentence “For the following CSV sample, select one DBpedia.org ontology that represents the dataset.” For the third task, join-column prediction, we utilize a code-completion approach. We frame the task as completing Pandas code that performs a join. We choose Pandas because it is a very popular framework, with millions of example lines of code on the web. This is the zero-shot prompt setting: the model can be provided with instructions for a novel task and perform them directly.
Demonstration For the first two tasks, we use the foundation models with additional inputs: this is the few-shot prompt setting. The model is given a few demonstrations of task completion, including inputs and outputs. This is shown in Figure <ref> as green text.
Data sample Foundation models can understand relational data. By serializing the input tables, we can input them into foundation models. For example, consider the example table from Figure <ref> in the introduction. Serializing the table allows the foundation model to ingest the data. In this paper, we use the comma-separated values (csv) format. This gives us the representation in Figure <ref>.
Because the models have a limited context window size—typically in the few thousands of tokens—tables cannot always be ingested as a whole. Instead, we always serialize a sample of the rows. Intuitively, this is fine because the tasks we consider can be completed without reviewing all the rows, i.e. it is sufficient to consider a few values when determining the type of a column.
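A minimal sketch of this serialization step (the five-row sample size and the toy table are illustrative assumptions, not the exact configuration used by the system):

```python
import pandas as pd

def serialize_sample(df: pd.DataFrame, n_rows: int = 5, seed: int = 0) -> str:
    """Serialize a random sample of rows to CSV so the table fits the context window."""
    sample = df.sample(min(n_rows, len(df)), random_state=seed)
    return sample.to_csv(index=False)

df = pd.DataFrame({"Make": ["Toyota", "Honda"], "Model": ["Camry", "Civic"]})
print(serialize_sample(df))  # e.g. "Make,Model\nToyota,Camry\nHonda,Civic\n"
```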
Metadata Schema information including column names (headers) and keys can be incorporated into the input, above the serialized data sample. We found that foundation models can adaptively infer whether the first row of the input is a header or a data row, with no modification of the input required. This is shown in orange in Figure <ref>. Due to the flexibility of the input format, we add supplemental information to the prompt where available. For example, contextual information about the data source is added, such as “this dataset is from a page titled Washington State Open Government.”
Task-specific knowledge For some tasks, additional information can be used to guide the model. For example, if only certain output classes are desired, these can be listed to the model. An example of such additional constraints for the table-class detection task is shown in Figure <ref>.
Prefixes We also provide the model with prefixes with which to complete. This includes the DBPedia format for the table-class detection task and a Pandas code fragment for the join-column prediction task. Both prefixes are highlighted in pink in Figure <ref>. Prefixes increase the likelihood the model will provide the output in a parsable format rather than deviating into a natural language description.
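As an illustration, the two prefixes might look like the following sketch (the exact strings, table names and column names are assumptions made for the example, not the production prompts):

```python
# Prefix for table-class detection: nudges the model to emit a DBPedia.org URI.
DBPEDIA_PREFIX = "https://dbpedia.org/ontology/"

# Prefix for join-column prediction: the model completes the Pandas merge call.
PANDAS_JOIN_PREFIX = """import pandas as pd

orders = pd.read_csv("orders.csv")        # columns: order_id, customer_id, total
customers = pd.read_csv("customers.csv")  # columns: id, name, country

# Join the two tables on the appropriate key columns:
joined = orders.merge(customers, left_on="""
```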
§.§ Post-processing
Once the foundation model has run and provided its natural language output, we perform post-processing to parse it into a symbolic representation and mitigate common errors.
Constraint checks Because the model is not constrained in
its outputs, it may not always output a feasible answer. In this
setting we impose three constraints: table types must belong to the
ontology classes, column types must belong to the ontology properties
and joins must be on existing columns. In particular, an output is infeasible if it is not parsable or if it violates any of the three constraints. To guard against this, we parse the output and check the constraint corresponding to the current task. If the check fails, chorus performs anchoring.
Anchoring If the constraint is violated, we do not simply
reissue the prompt. The reason is that an insidious risk in our
setting is hallucination snowballing <cit.>. This
recently formalized phenomena describes how once a foundation model
makes a spurious generation, subsequent generations are more likely to
be erroneous: after a misstep, the system tends to make mistakes it
would otherwise be able to avoid. We provide an example in
Figure <ref>: once nonexistent class
is suggested, another nonexistent class
follows. Because we maintain context across tasks,
we are particularly vulnerable to this.
Instead, we resolve this by a novel domain-specific mitigation we call
anchoring, shown in Figure <ref>. Chorus
maintains the list of embeddings of all feasible answers, e.g. all
table classes from DBPedia. The embeddings are computed by running
an open-source foundation model and extracting the final layer weights.
We use for this purpose an alternative foundation model
(GPT-3) that allows us access to its weights. Then, for each output
that violates the constraints, we conduct the following repair
process: (1) We extract the embedding for the incorrect output, using
the same process as above. (2) We conduct a nearest-neighbor search on
the pre-computed embeddings of the feasible answers. (3) We replace
the infeasible output by the nearest-neighbor, transformed into the
correct answer format.
To the best of our knowledge, our anchoring technique is novel, and
only applies to our specific problem, where the output is constrained.
In contrast, for general-purpose natural language processing the
domain is unconstrained.
In the example in Figure <ref>, we “erase” the model's history of outputting non-existent class , replacing it with the correct class. After this correction (anchoring), the model is able to generate the correct class for the next column ().
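A schematic sketch of the repair step (the embedding function is a stand-in for whichever model supplies embeddings; cosine similarity over those vectors is one reasonable choice for the nearest-neighbor search):

```python
import numpy as np

def anchor(output: str, feasible: list, embed) -> str:
    """Replace an infeasible model output with its nearest feasible answer in embedding space."""
    if output in feasible:
        return output
    # Embeddings of all feasible answers (e.g. every DBPedia class) and of the bad output.
    candidates = np.stack([embed(c) for c in feasible])
    query = embed(output)
    # Cosine similarity between the infeasible output and every feasible answer.
    sims = candidates @ query / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(query))
    return feasible[int(np.argmax(sims))]
```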
§ EXPERIMENTS
We empirically evaluate chorus on the three tasks defined in Section <ref>. For each task, we select a task-specific benchmark and compare with baselines representing the state of the art. Table-class detection 1 is evaluated in Section <ref>, 2 column-type annotation in Section <ref>, and 3 join-column prediction in Section <ref>.
We find that on the table-class detection and join-column prediction tasks, chorus exceeds all tested baselines by a clear margin of 0.169 and 0.072 F_1 points, respectively. On the column-type annotation task, chorus exceeds all models that are not pretrained on the benchmark dataset (by 0.05 F_1 points) and performs comparably to those which are. We also compare chorus with expert labels for the table-class detection task, finding exact matches for 81% of labels; when the two disagree, we rate the chorus labels as more often correct than the expert labels.
Baselines Relevant systems include Tabert <cit.>, DoDuo <cit.>, Sato <cit.>, TURL <cit.>, TaBBIE <cit.>, Auto-suggest <cit.>, Trifacta Wrangler <cit.>, Paxata, Tableau Prep, and Sherlock. DoDuo is reported to outperform TURL and Sherlock on column-type annotation <cit.>, so we select it for evaluation. Sato and Sherlock are similar, with Sato utilizing additional signals not found in our benchmarks, so we evaluate the better-established Sherlock. TaBBIE can embed tables but, unlike DoDuo and Tabert, is not trained on column-type annotation, so we avoid it for the column-type annotation task. Tabert is a work similar to DoDuo and TURL, but from the NLP community rather than the data management community, so we test it too. For join-column prediction, Trifacta Wrangler outperforms Paxata and Tableau Prep <cit.>. Auto-Suggest is reported to outperform Trifacta Wrangler, but is a proprietary research project not released publicly. Thus we select Trifacta Wrangler for testing.
For the evaluated prior works Tabert, DoDuo, Trifacta Wrangler and Sherlock <cit.>, we utilize each tool if applicable to the task. If the baseline is not designed for a particular task, but can be straightforwardly adapted, we do so. We describe all modifications in the task subsection and always use established adaptations if available. If the modifications required would be extensive enough to become their own research project, we consider that task unsupported. In all cases, we use the pretrained embeddings without modification, as provided by the authors. Table <ref> outlines the systems we tested and tasks they support.
DoDuo provides two embedding variants: one trained on the WikiTables dataset and another on VizNet. When using DoDuo as a baseline we test against both, labelling them DoDuo-Wiki and DoDuo-Viz respectively.
Datasets Table <ref> outlines the three experiment benchmarks we use. For the table-class detection task, we use the T2Dv2 dataset <cit.>, a “gold standard” corpus of approximately one thousand tables, manually annotated by experts. Of those, 237 are annotated with one of 33 DBPedia.org classes. These tables were in turn selected from the Common Crawl corpus of web tables <cit.>. For column-type annotation, we sample a subset of the VizNet dataset <cit.>, extracted by the Sherlock team <cit.>, comprising 1 000 columns from approximately 330 tables. For the join-column prediction task, we use a dataset we call GitNotebooks, extracted by the Auto-suggest team <cit.>. We select 300 tables from that dataset for which we have join data. For the first two tasks, which require defining a type system for classes and properties, we use the DBPedia ontology <cit.> for our experiments. This is a community-sourced ontology and is the standard in previous studies.
Setup We use the GPT-3.5 model <cit.> as it is the most widely-available large model with api access at the time of writing. All other code was run on a commodity laptop with 8 physical arm cores and 16GB of main memory. Running all experiments, including development, came to $2 in api costs.
We evaluate using the metrics precision, recall and F_1 score. Precision is the proportion of true positive results out of the total predicted positive results, while recall is the proportion of true positive results out of the total actual positive results in the dataset. The F_1 score is the harmonic mean of precision and recall. Since we deal with a multiclass setting, we calculate these metrics for each class separately then aggregate by taking the mean, weighted by the class size. Class-weighted precision, recall and F_1 are the standard metrics in prior work <cit.>.
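These class-weighted scores correspond to scikit-learn's "weighted" averaging; a small sketch with dummy labels (illustrative only) shows the computation:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = ["Lake", "Actor", "Actor", "Continent", "Lake"]
y_pred = ["Lake", "Actor", "Lake", "Continent", "Lake"]

# "weighted" averages per-class scores, weighted by the size (support) of each class.
p = precision_score(y_true, y_pred, average="weighted", zero_division=0)
r = recall_score(y_true, y_pred, average="weighted", zero_division=0)
f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
```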
§.§ Table-class detection
In this task, we tag each table with the DBPedia ontology entry that represents the row-type of the data. Of the 1 000 datasets that comprise the T2Dv2 dataset, 237 tables have table-class correspondences available while 763 do not. We denote this subset T2D-class v2. We note that only 40 classes are utilized in this “gold standard” mapping, while DBPedia ontology has 769 classes.
We compare against the baselines DoDuo and TaBert. No approach in the prior work provides out-of-the-box capabilities on this task, so we add a classification layer on top of the embedding layer. After computing the column embeddings, predictions are extracted by pooling the embeddings, feeding them to a multi-layer perceptron, and finally taking the soft-max. This is a straightforward method of adapting the embeddings to our multi-class setting, used in prior benchmarks for table-class detection <cit.>. We fix the embeddings to their pretrained values and learn the weights of the classification layer using five-fold cross-validation.
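A sketch of this kind of adaptation (embedding dimension, hidden size and mean pooling are illustrative assumptions, not the exact configuration used for the baselines):

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Pool frozen, pretrained column embeddings and classify the table with a small MLP."""
    def __init__(self, embed_dim: int = 768, hidden: int = 256, n_classes: int = 33):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, column_embeddings: torch.Tensor) -> torch.Tensor:
        # column_embeddings: (n_columns, embed_dim) for one table, from the frozen baseline.
        pooled = column_embeddings.mean(dim=0)
        return self.mlp(pooled).softmax(dim=-1)

head = ClassificationHead()
probs = head(torch.randn(4, 768))  # four columns -> a distribution over the classes
```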
Supervised variant To allow for comparisons with prior work, we initially restrict our system to picking out of the 33 classes. This is because all other approaches require training on labelled instances—the baselines cannot predict outside those classes. We test 33 classes rather than 40 because the classes that occur only once cannot be meaningfully predicted by the baselines that require training labels.
Table <ref> shows the results. Chorus improves on the three baselines—DoDuo-Viz, DoDuo-Wiki and TaBERT—on all metrics. F_1 score is improved by 0.169 points, precision by 17.5 percentage points and recall by 15.5 percentage points. Of the baselines, DoDuo-Wiki provides the best F_1 and precision, while TaBERT provides the comparable recall. The best performing models, TaBERT and DoDuo-Wiki are trained on CommonCrawl, a superset of the T2Dv2 benchmark. DoDuo-Viz which is trained on the VizNet, a dataset disjoint from T2Dv2, has the weakest performance. The numbers for TaBERT are in line with prior replications <cit.>, while to the best of our knowledge this is the first benchmarking of DoDuo on this task.
Unsupervised variant Next, we relax the classification domain, allowing the foundation model to choose any class in the ontology. We then compare the quality of the classes with that of the human-expert labels.
For 93% of tables, our system produces correct results. Of that portion, 83 percentage points are exact matches, while 10 percentage points are better-than-correct results, meaning the predicted labels are clearly and unambiguously better than those selected by the expert. Because this is a strong claim, we list all such datasets in Table <ref>.
For the final 6% the answer is incorrect: this can mean the answer is completely wrong or simply worse than the label provided by the expert. This means that on the relations where chorus and the expert-label disagree, our system is 1.6× more likely to be correct.
§.§ Column-type annotation
Next, we compare the ability of our system to assign classes to table columns.
VizNet is a collection of tables, extracted by the Sherlock <cit.> team from the VizNet repository <cit.> of data visualizations and open datasets. VizNet comprises 31 million datasets in total. We selected 10 challenging and mutually exclusive DBPedia.org classes to test: . We then used stratified sampling to select 1 000 columns of these types to predict.
Baselines We compare against TaBERT <cit.>, DoDuo <cit.> and Sherlock <cit.> on this task. Since Sherlock is designed for column annotation, we use the out-of-the-box model provided by the original team. We restrict Sherlock to the ten target classes. For TaBERT we train an additional classification layer on top of the pre-trained embeddings that these frameworks provide. DoDuo provides a classification layer, however unlike Sherlock the current api does not expose raw probabilities, so we cannot restrict it to the target classes. Instead, we chose to add a multilayer perceptron classification layer, as for TaBERT, to give DoDuo fair odds. We fix the embeddings to their pretrained values and learn the weights of the classification layer using five-fold cross-validation.
Results Table <ref> contains the results for the VizNet dataset. Our fm-based approach performs strongly on the measured metrics of F_1-score, precision and recall. The best performing method is Sherlock, narrowly beating DoDuo-VizNet, with a 0.930 F_1 score. If we consider only methods which are not specifically pretrained on VizNet (which, recall, is also the test set), chorus is the best performing on all three metrics. It has comparable F_1 and precision to Sherlock, but 6 percentage points lower recall.
Note in particular that DoDuo-Wiki, which does not have access to VizNet at pretraining time, shows a large regression in performance compared to DoDuo-VizNet, losing 0.085 F_1 points. We sanity-checked the low scores of TaBERT by replicating its previously reported column-type annotation scores from prior work and were successful.
§.§ Join-column Prediction
Finally, we evaluate our approach's ability to suggest which columns are the correct choice for a join, the join-column prediction task. We use the GitNotebooks dataset from <cit.>, a collection of 4 million Python notebooks (and their associated relational tables) including 24 thousand joins collected from Github. As one of the baselines, Trifacta Wrangler, requires manual execution and recording of each prediction, we restrict this benchmark to 300 randomly sampled tables.
Baselines For this task, we compare with three baselines. Jaccard similarity, J, is the first. The pair of columns argmax_c ∈ C^T, c' ∈ C^T' J(c, c') is selected, where J(X, Y) = |X ∩ Y| / |X ∪ Y|. This is a commonly used approach in the literature <cit.>. Another baseline is Levenshtein distance <cit.>, which selects the pair of column names with the smallest edit distance between them. The final baseline is Trifacta Wrangler <cit.>, a commercial product spun off from the Wrangler research line <cit.>. When joining two tables in this product, it suggests the keys on which to join them. As no api was available, we obtain all Trifacta predictions by running the joins manually.
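A minimal sketch of the Jaccard baseline follows, assuming tables are represented as dictionaries mapping column names to their values; the toy tables in the usage example are hypothetical.

```python
from itertools import product

def jaccard(x: set, y: set) -> float:
    """J(X, Y) = |X ∩ Y| / |X ∪ Y|; returns 0 for two empty columns."""
    union = x | y
    return len(x & y) / len(union) if union else 0.0

def predict_join_columns(table_a: dict, table_b: dict):
    """Pick the pair of columns (one per table) with maximal Jaccard similarity.
    Tables are represented as {column_name: iterable_of_values}."""
    best, best_score = None, -1.0
    for (name_a, col_a), (name_b, col_b) in product(table_a.items(), table_b.items()):
        score = jaccard(set(col_a), set(col_b))
        if score > best_score:
            best, best_score = (name_a, name_b), score
    return best, best_score

# Example usage with toy tables.
orders = {"customer_id": [1, 2, 3, 4], "total": [10.0, 5.5, 7.2, 3.1]}
customers = {"id": [1, 2, 3, 9], "name": ["a", "b", "c", "d"]}
print(predict_join_columns(orders, customers))  # (('customer_id', 'id'), 0.6)
```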
Results Table <ref> shows the quality of estimates for our approach and the baselines. We measure the quality of the predictions by the same criteria as the previous tasks. By these metrics, our approach improves the quality of predictions and beats the next-best approach by a clear margin: F_1 score is improved by 0.072, precision by 8.4 percentage points and recall by 6.0 percentage points.
§.§ Dataset contamination
Here we perform an experiment to validate whether any of the testing data occurred in the training corpus of the large-language model, an issue called dataset contamination or data leakage. Because these models are trained on terabytes of internet data <cit.> and we use public benchmarks, they may have seen the test data in training.
We test on seven guaranteed-unseen tables and their columns, all uploaded between April–June 2023 to the federal data repository Data.gov. Since the foundation model training was completed on or before March 2023, these datasets cannot have contaminated its corpus. Repeating the supervised column-type annotation task as in Section <ref>, we measure a 0.857 F_1 score, 90.0% precision and 81.8% recall. This is within 0.01 F_1 points, 0.1% precision and 5% recall of the benchmark results.
§ DISCUSSION
Our experiments show that chorus is a very promising approach for data-discovery tasks. The system is able to provide competitive performance on all three tested tasks. For task 1, table-class detection, and task 3, join-column prediction, it surpasses all the baselines by a clear margin. On task 2, column-type annotation, it performs best among the methods not pretrained specifically on the test dataset and comparably with those which are.
Importantly, the performance is robust: it consistently performs well, unlike other baseline methods. For example, TaBERT is the closest to the best performing baseline on T2Dv2 but the worst baseline on VizNet. Similarly DoDuo-Viz performs well on VizNet but loses 0.11 F_1 points when applied to T2Dv2 compared to DoDuo-Wiki. Conversely, DoDuo-Wiki loses 0.09 F_1 points compared to DoDuo-Viz when tested on the VizNet-based task.
Training data collection A major advantage of a foundation-model approach is that extensive domain-specific training data need not be collected. In particular, representation learning approaches require large amounts of data for learning the embeddings as well as data for learning the task (in some cases this can be the same dataset). For example, TaBERT requires 26 million tables for training its embeddings. This cost may be acceptable if the embeddings generalize; in the paragraph on out-of-domain performance below, we argue they do not. Even with the embeddings fixed, many task-specific labels are needed. In <cit.>, the use of 250 labels for one task is considered a “small dataset” by the authors and leads to subpar performance. In contrast, our prompts in Figure <ref> use one example each for table-class detection and column-type annotation and zero examples for the join-column prediction task.
Out-of-domain performance We note a troubling pattern of a lack of cross-domain generalization in representation-learning approaches. In particular, the tested baselines degrade when used to embed tables not from the dataset the embeddings were trained on. Despite being described as “pretrained”, this is in contrast to word embeddings such as word2vec, which perform consistently across a number of data distributions. This finding is in-line with prior work: regressions of up to 0.40 F_1 points when switching to new datasets have been observed by other researchers <cit.>.
This is made worse by the computational cost of training the embeddings: tabert was trained using a cluster of 128 of Nvidia's Tesla V100 gpus <cit.>, at the suggested retail price of $14 000 each <cit.>. The cost of training fms is even greater, of course, but need only be borne once by the fm developer—not the end user.
Flexibility Another advantage of chorus we observe in the experiments is task adaptability. In the 1 table-class detection task, we are able to switch the prediction domain easily. Restricting to the 33 classes used by the benchmark can be done by providing the permitted classes to the foundation model; allowing the model to generalize to other DBPedia classes (the unsupervised heading of Section <ref>) is as simple as omitting those instructions. Contextual information, such as table title or url, could be as easily added. In the representation learning setting, such modifications would require retraining the embeddings.
Limitations and risks We control the risk of dataset contamination by testing for it in Section <ref>. We find that the performance of chorus on guaranteed-unseen datasets is comparable to that on public benchmarks, so good performance on those benchmarks cannot be explained away as simple data contamination. Separately, formal linguistic fluency means that errors may fool human reviewers. It is well known that linguistic fluency strongly influences the perception of correctness and quality <cit.>. As such, foundation models may be able to obtain favorable evaluation from testers by producing answers which are plausible but incorrect. This has been called subtle misinformation in prior work <cit.>. This is particularly relevant to the evaluation of Table <ref>. Finally, we note that there can be a large variance in result quality caused by minor changes in the prompt <cit.>.
§.§ Future directions
Additional tasks The promising performance on our three tasks may extend to many more. Other tasks related to data discovery and exploration that could be explored include schema auto-completion <cit.>, where missing parts of a partial schema are suggested to the user; join-graph traversal, where successive tables to join on are suggested <cit.>; and outlier detection, where potential erroneous data are detected.
However, care must be taken when selecting tasks. Foundation models are not a panacea. For example, they virtually never recall facts that occur fewer than 10-100 times in the corpus <cit.>. This means that point lookups of data are unlikely to be supported by unassisted foundation models, at least for the foreseeable future. Further, we emphasize that many foundation models are trained for less than one epoch, meaning they see each token in the corpus once or not at all.
Private or domain-specific datasets As with all tested baselines, the foundation models are trained on public data. The distribution of data in the public sphere differs significantly from that in specialized domains or private data. Whether the observed capabilities hold on e.g. enterprise data lakes is worth investigating. Further application to domain-specific ontologies such as dron, a pharmaceutical ontology of drugs, could also be investigated.
Larger and smaller models Larger models are of interest because if scaling laws continue to hold, their performance should improve <cit.>. The larger GPT-4 model has been announced <cit.>. However, at the time of experiments the authors did not have API access to this model, so we leave exploring its capabilities to future work. On the other hand, the large cost of the current generation of foundation models has sparked interest in model distillation: reducing the size of the model without incurring a reduction in capabilities <cit.>. However, these downsized models have significantly degraded emergent abilities <cit.> and so would likely perform worse on precisely the tasks of interest to us.
End-to-end evaluation With foundation-model based data discovery systems becoming closer to reality, a holistic evaluation would be valuable to evaluate the utility of this approach to data analysts. This is important because many benchmarks are limited to sparsely-labelled portions of publicly available datasets.
§ RELATED WORK
The seminal early work in this area is WebTables <cit.>. WebTables aimed to extract relational tables from messy web data into a central repository, annotating them with predicted metadata to increase discoverability. This work introduced a constellation of related tasks: schema auto-completion, attribute synonym finding, and join-graph traversal. Additional work on wrapper induction <cit.> focused on developing shims for extracting tables from heterogeneous sources.
The promise of foundation models for data profiling was outlined in a recent position paper <cit.>. This paper was based on evidence of foundation models being able to predict correlations in data from the column names <cit.>. Another work considered foundation models for data wrangling <cit.>: comprising the tasks of entity matching, error detection and data imputation. Finally, most recently foundation models have been applied to the classic problem of wrapper induction in the system evaporate <cit.>.
The currently deployed generation of approaches has focused on representation learning. These include turl <cit.> and tabert <cit.>. Both explore the use of fine-tuned language models for similar tasks. Other systems include Doduo <cit.> and TABBIE <cit.>. Before these table-embedding approaches, the previous generation of data tools applied deep learning to specific tasks. The standard-bearer for this approach is Sherlock <cit.>, a tool specialized for column-type detection which utilizes a deep neural network with about 1 600 input features, trained over millions of examples. Sato <cit.> is another example of this approach.
Recent tutorials <cit.> outline the prevalence of the problem of unstructured document data management. A user study of scientists concludes that “current systems fail to sufficiently support scientists in their data-seeking process” <cit.>. A dataset search survey <cit.> in vldb highlights the main open problems in this field: more natural query languages, better data integration, incorporating external knowledge, and interactive result presentation. Foundation models hold the promise of helping address many of these tasks.
Industry interest in this field is also keen. A large number of commercial solutions for data warehousing and data lakes are available. Commercial products derived from the original WebTables vision are described in the authors' follow-up paper <cit.>. Amazon Redshift, Microsoft Azure Data Lake and Databricks Lakehouse are some of the commercial products in this space. Products with a more narrow focus, such as Trifacta Wrangler <cit.>, Tableau Prep and Paxata, incorporate substantial data discovery components. Industry-led prototype systems for data discovery include Sigma Computing's WarpGate <cit.> and Google Research's Dataset Search <cit.>. More recently, startups such as Numbers Station have raised substantial funding for fusing foundation models with enterprise data analytics. Gartner estimates the data warehouse market size at $22 billion usd.
§ CONCLUSION
In this work, we investigated foundation models for data discovery and exploration. We propose chorus, a system for integrating foundation models into data discovery tasks, and show it provides superior performance on three example tasks: table-class detection, column-type annotation and join-column prediction. We find that chorus is more robust than prior representation-learning approaches on a variety of datasets and that its performance advantage cannot be attributed to dataset contamination. We conclude that foundation models provide a promising future as a core component of the next generation of data discovery and exploration systems.
Our thanks to Yejin Choi for guidance on and productive discussions about language models. We are grateful to the authors of TaBERT, Sherlock and DoDuo for high-quality, replicable experiment code and documentation. We appreciate Cynthia Richey's and Kyle Deeds' assistance with the manuscript. Finally, thanks to Magdalena Balazinska for counsel during manuscript writing.
|
http://arxiv.org/abs/2306.07689v1
|
20230613110340
|
Stability Analysis of Cosmological models in $f(T,φ)$ Gravity
|
[
"Amit Samaddar",
"S. Surendra Singh"
] |
gr-qc
|
[
"gr-qc",
"hep-th"
] |
Stability Analysis of Cosmological models in f(T,ϕ) Gravity
Amit Samaddar, S. Surendra Singh
Department of Mathematics, National Institute of Technology Manipur,
Imphal-795004,India
Email: [email protected], [email protected]
Abstract We investigate the stability conditions of two models in f(T,ϕ) gravity theory using dynamical system analysis. We assume the forms of G(T) to be (i) G(T) = α T+β/T and (ii) G(T) = ζ T ln(ψ T), where α, β, ζ and ψ are free parameters. We evaluate the equilibrium points for these models and examine their stability behavior, finding five stable critical points for Model I and three for Model II. The phase plots of these systems are examined and their physical interpretation is discussed. We compute the cosmological parameters Ω_m, Ω_ϕ, q and ω_Tot at each fixed point and compare them with observational values. Further, we assume a hybrid scale factor, for which the relation between redshift and cosmic time is t(z)=δ/σW[σ/δ(1/a_1(1+z))^1/δ]. We rewrite all the parameters in terms of redshift using this relation and examine their behavior. Our models represent the accelerating stage of the Universe. The energy conditions are examined in terms of redshift, and the SEC is not satisfied for the models. We also obtain the statefinder parameters {r,s} in terms of z and discuss the nature of the r-s and r-q planes. For both the {r,s} and {r,q} pairs, our models approach the ΛCDM model. Hence, we conclude that our f(T,ϕ) models are stable and consistent with the observational values.
Keywords: f(T,ϕ) gravity theory, stability analysis, hybrid expansion law, energy conditions, statefinder parameters.
§ INTRODUCTION
Current observations of Type Ia Supernovae (SNeIa) <cit.>, the Cosmic Microwave Background (CMB) <cit.> and Baryon Acoustic Oscillations (BAO) <cit.> establish that the Universe is undergoing an accelerating stage of expansion. The matter existing in the Universe is affected by an exotic form of energy known as dark energy (DE) <cit.>. According to the latest CMBR data, our Universe contains about 76% dark energy. The cosmological constant, added on the right-hand side of Einstein's field equations, has a negative equation of state (ω_de), which accounts for the late-time acceleration of the Universe. However, the cosmological constant creates several issues, such as the cosmological constant problem: at present, the observed energy density associated with the cosmological constant, which is of the order of the critical density, is ρ_Λ∼10^-47 GeV^4, but in quantum theory the predicted energy density is ρ_Λ∼10^74 GeV^4, about 10^121 times larger than the observational value <cit.>. The equation of state (EoS) parameter takes the form ω_de=p_de/ρ_de <cit.>. If the EoS parameter ω_de tends to -1, it represents standard cosmology. The value ω_de=1 describes a stiff fluid, ω_de=0 the matter-dominated phase, and ω_de=1/3 the radiation-dominated phase; for -1<ω_de≤ -1/3 the Universe is in the quintessence phase, ω_de<-1 corresponds to the phantom dark energy model, and ω_de=-1 represents the cosmological constant, i.e. the ΛCDM model <cit.>. To discuss the expansion behavior of the Universe, cosmologists introduced the deceleration parameter (q). If q>0, the Universe is in a decelerated epoch, while q<0 corresponds to an accelerated epoch and q=0 to marginal expansion. To address the cosmological constant problem, an alternative is the dynamical dark energy sector with the insertion of a scalar field, for example quintessence <cit.>, k-essence <cit.>, Galileons <cit.>, etc.
To understand several perspectives of modern cosmology, modifications of General Relativity are needed, and several methods have been introduced to explain the observational evidence. The simplest way to modify Einstein's gravity theory is to substitute the Ricci scalar with a function f(R) in the Einstein-Hilbert action <cit.>. An equivalent formulation of General Relativity is teleparallel gravity, where torsion instead of curvature is responsible for the gravitational interaction <cit.>. Einstein first introduced this framework, based on the Weitzenböck connection of a non-Riemannian manifold, in an attempt to unify gravity with electromagnetism <cit.>. Teleparallel Gravity (TG) can be regarded as a gauge theory <cit.>, whereas General Relativity is a geometric theory. Although General Relativity and Teleparallel Gravity are not the same theory, their equations of motion coincide, which is why the term Teleparallel Equivalent of General Relativity (TEGR) was introduced. A modification of TEGR is f(T) gravity, where the torsion scalar T is replaced by an arbitrary function f(T), in analogy with f(R) gravity <cit.>. The field equations of modified teleparallel gravity are of second order, while f(R) gravity has fourth-order field equations; however, local Lorentz invariance does not hold in f(T) gravity. Since f(T) gravity successfully describes the accelerating stage of the Universe, it is currently a theory of much interest. Another interesting theory is developed by coupling a scalar field with torsion, which is called scalar-torsion or f(T,ϕ) gravity. Several authors have already analyzed various perspectives of f(T,ϕ) gravity: Noether symmetry of f(T,ϕ) gravity is discussed in <cit.>, cosmological dynamics of dark energy in f(T,ϕ) gravity is analyzed in <cit.>, and dynamical systems in f(T,ϕ) gravity are discussed in <cit.>, etc.
Dynamical system analysis is a very useful technique in cosmology for studying Einstein's field equations. In this manuscript, we discuss the stability analysis in f(T,ϕ) gravity <cit.>. One of the major issues in theories of gravity is finding analytical or numerical solutions, owing to the complicated field equations <cit.>. Einstein's field equations contain nonlinear terms that are not easy to solve, and hence comparison with observations is not straightforward. Other methods are therefore required, and dynamical system analysis is one such method capable of handling the nonlinear terms in Einstein's equations. It is used to obtain numerical solutions and to understand the stability behavior of a given system <cit.>. The key step is to find the critical points of the set of autonomous first-order ordinary differential equations obtained from Einstein's equations. The stability of a model is evaluated by calculating the Jacobian matrix at each equilibrium point and finding its characteristic values (eigenvalues) <cit.>. This is the procedure used to analyze the stability behavior of any model near a critical point <cit.>.
The outline of this manuscript is as follows: the field equations of f(T,ϕ) gravity are presented in Sec. (<ref>). In Sec. (<ref>), we discuss the stability behavior of two models by introducing new variables from the field equations; in this section we find the equilibrium points and explore the stability behavior through phase plots. In Sec. <ref>, we explore the nature of physical parameters such as energy density, pressure and the deceleration parameter with the help of the hybrid expansion law. In Secs. (<ref>,<ref>), we discuss the energy conditions and the nature of the statefinder parameters. Conclusions are given in Sec. (<ref>).
§ F(T,Φ) GRAVITY AND FIELD EQUATIONS
Teleparallel Gravity is an alternative representation of gravity in terms of the torsion scalar rather than curvature <cit.>. In modified teleparallel gravity, the geometric part of the action is an algebraic function of the torsion scalar (T), which is known as f(T) gravity. In the TEGR action, the torsion scalar is promoted to an arbitrary function with the addition of a scalar field ϕ, which generalizes f(T) gravity. The action of f(T,ϕ) gravity in the presence of matter and radiation is <cit.>
S= ∫ d^4xe[f(T,ϕ)+P(ϕ)X]+S_m+S_r,
where f(T,ϕ) is a function of the torsion scalar (T) and the scalar field (ϕ), e=det(e^C_μ)=√(-g), and X=-∂^μϕ∂_μϕ/2 is the kinetic term, which is multiplied by an arbitrary function P(ϕ) in the action. General Relativity can be rewritten in the framework of teleparallel gravity by using the tetrad (e^C_μ) and the spin connection (ω^C_Dμ) instead of the metric tensor. The observable component is the tetrad field e^C_μ, and the relation between the Minkowski tangent-space metric η_CD and the metric tensor g_μν is g_μν=η_CDe^C_μe^D_ν, where η_CD = diag(-1,1,1,1). The tetrad components fulfill the orthogonality condition e^μ_Ce^D_μ=δ^D_C. The torsion scalar is defined as,
T= S^μν_ψT^ψ_μν,
where T^ψ_μν and S^μν_ψ illustrate the tensor of the torsion and superpotential. The superpotential tensor is defined by,
S^μν_ψ= 1/2(K^μν_ψ+δ^μ_ψT^βν_β-δ^ν_ψT^βμ_β),
where the contortion tensor is K^μν_ψ=1/2(T^νμ_ψ+T_ψ^μν-T^μν_ψ). In equation <ref>, the torsion tensor T^ψ_μν is defined by,
T^ψ_μν=e^ψ_C∂_μe^C_ν-e^ψ_C∂_νe^C_μ+e^ψ_Cω^C_Dμe^D_ν-e^ψ_Cω^C_Dνe^D_μ.
It is attached with the Weitzenböck connection of the teleparallel gravity. The field equations are derived by varying the tetrad e^C_μ in the action or by using the relation of the torsion scalar (T) and curvature (R) with Levi-Civita connection and contortion tensor which satisfy
T=-R+e^-1∂_μ(eT^βμ_β).
and hence, the field equations of General Relativity and Teleparallel Gravity are same. We assume the flat, isotropic and homogeneous Friedmann-Lemaître-Robertson-Walker metric to derive the f(T,ϕ) gravity field equations as,
ds^2= -dt^2+a^2(t)(dx^2+dy^2+dz^2),
where a(t) is the scale factor and we assume the diagonal tetrad field e^C_μ=diag(-1,a,a,a). The Friedmann equations and the Klein-Gordon equation of f(T,ϕ) gravity can be derived by varying the action in equation (<ref>) with respect to the tetrad field and the scalar field, as follows,
f(T,ϕ)-P(ϕ)X-2Tf_,T=ρ_r+ρ_m,
f(T,ϕ)+P(ϕ)X-2Tf_,T-4Ḣf_,T-4Hḟ_,T=-p_r,
-P_,ϕX-3P(ϕ)Hϕ̇-P(ϕ)ϕ̈+f_,ϕ=0
where H=ȧ/a is the Hubble parameter, the overdot denotes a derivative with respect to time t, f_,T=∂ f/∂ T and f_,ϕ=∂ f/∂ϕ. ρ_m and ρ_r represent the matter and radiation energy densities and p_r is the radiation pressure. The torsion scalar expressed in terms of the Hubble parameter is T=6H^2. We assume the form of the function f(T,ϕ) as <cit.>,
f(T,ϕ)=-T/2k^2+G(T)-V(ϕ),
where G(T) is a function of T and V(ϕ) is the scalar potential. The equation of state parameter is ω_m=p_m/ρ_m=0 for the matter-dominated epoch and ω_r=p_r/ρ_r=1/3 for the radiation epoch. Now equations (<ref>-<ref>) can be written as,
3/k^2H^2= 2TG_,T-G(T)+V(ϕ)+P(ϕ)X+ρ_m+ρ_r,
-2/k^2Ḣ=-4Ḣ(G_T+2TG_,TT)+2P(ϕ)+ρ_m+4/3ρ_r,
P_,ϕ(ϕ)X+P(ϕ)ϕ̈+3P(ϕ)Hϕ̇+V_,ϕ(ϕ)=0.
By <cit.>, the Friedmann equations (<ref>) and (<ref>) can be written as,
3/k^2H^2=ρ_r+ρ_m+ρ_de,
-2/k^2Ḣ=p_de+ρ_de+ρ_m+4/3ρ_r.
To compare the equations (<ref>) and (<ref>) with the equations (<ref>-<ref>), the expressions of pressure and energy density of the dark energy can be obtained as,
ρ_de= 2TG_,T-G(T)+V(ϕ)+P(ϕ)X,
p_de= -2TG_,T+G(T)-4Ḣ(G_T+2TG_,TT)-V(ϕ)+P(ϕ)X,
We consider the potential energy V(ϕ)=V_0e^-λϕ and P(ϕ)=1. To perform the dynamical system analysis based on the two equations (<ref>) and (<ref>), we need to assume a specific form of G(T); in this work, we consider the two forms of G(T) given in <ref>. The energy conservation equations are as follows:
ρ̇_̇ṁ +3Hρ_m=0,
ρ̇_̇ṙ +4Hρ_r=0.
The continuity equation for the pressure and energy density of the dark energy is
ρ̇_̇ḋė+3H(ρ_de+p_de)=0.
§ STABILITY ANALYSIS OF F(T,Φ) GRAVITY MODELS
The main objective of studying a dynamical system, especially for non-linear equations, is to visualize the stability conditions of its fixed (equilibrium) points. Dynamical system analysis is an essential technique for studying cosmological behavior in the Universe when exact solutions cannot be found due to the complexity of the systems <cit.>. Dynamical systems are mostly used in cosmological models for non-linear systems of differential equations. The system takes the form v̇ = σ(v), where σ:V→ V, v̇ is the derivative with respect to time t ∈ℝ, v=(v_1,v_2,v_3,...,v_m) ∈ V and σ(v)=(σ_1(v),σ_2(v),...,σ_m(v)) <cit.>. Thus we analyze the stability conditions for m variables with m equations. The equation v̇ = σ(v) specifies the rate of change dv/dt of the function v(t): if the current value is v, then the rate of change is σ(v). The equation v̇ = σ(v) is an ordinary differential equation, and it is called autonomous if σ does not depend explicitly on time t but only on the current value of the variable v. A point v=v_0 is a fixed point of the system v̇ = σ(v) if and only if σ(v_0)=0 <cit.>. We analyze the stability behavior of the fixed points. The fixed point v_0 is stable if for every η>0 there exists a ξ>0 such that any solution ϕ(t) of v̇ = σ(v) satisfying ‖ϕ(t_0)-v_0‖<ξ exists for all t≥ t_0 and satisfies ‖ϕ(t)-v_0‖<η for all t≥ t_0. The critical point v_0 is asymptotically stable if, in addition, there exists a ξ>0 such that any solution ϕ(t) of v̇ = σ(v) satisfying ‖ϕ(t_0)-v_0‖<ξ obeys lim_t→∞ϕ(t) = v_0 <cit.>. The distinction between a stable and an asymptotically stable critical point is that for an asymptotically stable point all nearby trajectories approach the point, whereas for a merely stable point nearby trajectories only remain in its neighborhood (e.g. circle around it) <cit.>. In cosmology, stable critical points are usually treated as asymptotically stable fixed points. Critical points which are not stable are called unstable: trajectories starting near such points escape away from them. We now introduce an approach to assess the stability criteria at the fixed points. Linear stability theory is the most useful method to analyze the physical properties of cosmological models <cit.>. The idea is to linearize the equations at an equilibrium point in order to study the dynamical properties near this point. Assume that v_0 is an equilibrium point of the system v̇ = σ(v). At equilibrium points, the system v̇ =σ(v) is linearized by a Taylor expansion, where each component of the vector field σ(v)=(σ_1(v),σ_2(v),...,σ_m(v)) becomes
σ_i(v) = σ_i(v_0)+∑^n_j=1∂σ_i/∂ v_j(v_0)y_j+1/2!∑^n_j,k=1∂^2σ_i/∂ v_j∂ v_k(v_0)y_jy_k+.......
where y is defined by y=v-v_0. Now we neglect the second order or above derivative terms and define the Jacobian matrix as
J=∂σ_i/∂ v_j=
[ ∂σ_1/∂ v_1 ∂σ_1/∂ v_2 ⋯ ∂σ_1/∂ v_n; ⋮ ⋯ ⋯ ⋮; ∂σ_n/∂ v_1 ∂σ_n/∂ v_2 ⋯ ∂σ_n/∂ v_n ],
This matrix is known as the stability matrix. The eigenvalues of the Jacobian matrix J are evaluated at the equilibrium point v_0. The equilibrium point v_0 is hyperbolic if all characteristic roots of J have non-zero real part; otherwise the point is non-hyperbolic. If all characteristic roots of J are positive, trajectories move away from the point and the fixed point v_0 is an unstable point or repeller. If all characteristic roots are negative, the point attracts all nearby trajectories and the equilibrium point v_0 is stable, also called an attractor. If the real parts of the eigenvalues have opposite signs, trajectories are attracted along some directions and repelled along others, and the equilibrium point v_0 is a saddle point.
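As an illustration of this linearization procedure, the following sketch computes the Jacobian of a small autonomous system symbolically and classifies each fixed point by the signs of the eigenvalues; the toy system used here is only an illustrative assumption, not one of the cosmological models studied below.

```python
import sympy as sp

# Toy autonomous system v' = sigma(v); purely illustrative.
x, y = sp.symbols("x y", real=True)
sigma = sp.Matrix([x * (1 - x), -y])

# Jacobian (stability) matrix of the vector field.
J = sigma.jacobian(sp.Matrix([x, y]))

# Fixed points solve sigma(v0) = 0.
fixed_points = sp.solve(list(sigma), [x, y], dict=True)

for v0 in fixed_points:
    eigs = list(J.subs(v0).eigenvals().keys())
    if all(sp.re(ev) < 0 for ev in eigs):
        kind = "stable (attractor)"
    elif all(sp.re(ev) > 0 for ev in eigs):
        kind = "unstable (repeller)"
    else:
        kind = "saddle or non-hyperbolic"
    print(v0, eigs, kind)
# Expected: (0, 0) is a saddle (eigenvalues 1, -1); (1, 0) is stable (-1, -1).
```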
§.§ Model 1: G(T) = α T+β/T
We take the form of G(T) as <cit.>
G(T) = α T+β/T,
where α and β are the free parameters. By using equation (<ref>), the expressions of pressure and energy density for dark energy are as follows:
ρ_de=ϕ̇^2/2+V(ϕ)+6α H^2-β/2H^2 ,
p_de=ϕ̇^2/2-V(ϕ)-6α H^2+β/2H^2-4Ḣ(α+β/12H^4) ,
and Klein-Gordon equation (<ref>) can be expressed as,
ϕ̈+3Hϕ̇+V_,ϕ=0 ,
By using equations (<ref>-<ref>), the EoS parameter is as follows:
ω_de=p_de/ρ_de=ϕ̇^2/2-V(ϕ)-6α H^2+β/2H^2-4Ḣ(α+β/12H^4)/ϕ̇^2/2+V(ϕ)+6α H^2-β/2H^2.
To discuss the stability behavior of the model, we assume new dimensionless variables from equation (<ref>) to find the set of differential equations as follows:
x=kϕ̇/√(6)H, y=k√(V)/√(3)H, z=2α k^2, r=-k^2β/6H^4
ρ=k√(ρ_r)/√(3)H, λ=-V'(ϕ)/kV(ϕ), σ=V”(ϕ)V(ϕ)/V'(ϕ)^2.
where k^2=1. The density parameters with regard to the dimensionless variables are obtained as,
Ω_r=k^2ρ_r/3H^2=ρ^2,
Ω_m=k^2ρ_m/3H^2=1-x^2-y^2-z-r-ρ^2,
Ω_de=k^2ρ_de/3H^2=x^2+y^2+z+r.
From the field equations of f(T,ϕ) gravity (<ref>-<ref>) using the above variables of equation (<ref>), we will get
Ḣ/H^2=ρ^2-3(r-x^2+y^2+z-1)/2z-2r-2,
By using equation (<ref>), the expressions of deceleration parameter and the EoS parameter are obtained as,
q=-1-Ḣ/H^2=ρ^2-5r+3x^2-3y^2-z+1/2r-2z+2,
ω_de=-2ρ^2(z-r)-6x^2+6y^2+12r/3(x^2+y^2+r+z)(2z-2r-2),
By using the relation of the deceleration parameter and the EoS parameter (ω_tot), the expression of ω_tot in terms of the variables is
ω_tot =2 q-1/3=ρ^2-6r+3x^2-3y^2/3r-3z+3.
From the variables in equation (<ref>), the set of differential equations can be derived as,
x'=√(3/2)y^2λ-3x-x[ρ^2-3(r-x^2+y^2+z-1)]/2z-2r-2,
y'=-√(3/2)xyλ-y[ρ^2-3(r-x^2+y^2+z-1)]/2z-2r-2,
z'=0,
r'=-4r[ρ^2-3(r-x^2+y^2+z-1)]/2z-2r-2,
ρ'=-ρ(ρ^2-7r+3x^2-3y^2+z-1)/2z-2r-2,
λ'=-√(6)x(σ-1)λ^2,
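A numerical sketch of integrating this autonomous system is given below. For the exponential potential V(ϕ)=V_0e^-λϕ the parameter σ=1, so λ'=0 and λ stays constant; the value λ=0.005 is taken from the parameter choices quoted later in the text, while the initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

SQ32 = np.sqrt(1.5)
LAM = 0.005   # lambda of the exponential potential (constant, since sigma = 1)

def model1_rhs(N, u):
    # u = (x, y, z, r, rho); N = ln(a). z' = 0 by construction.
    x, y, z, r, rho = u
    Q = (rho**2 - 3 * (r - x**2 + y**2 + z - 1)) / (2 * z - 2 * r - 2)  # Hdot/H^2
    dx = SQ32 * y**2 * LAM - 3 * x - x * Q
    dy = -SQ32 * x * y * LAM - y * Q
    dr = -4 * r * Q
    drho = -rho * (rho**2 - 7 * r + 3 * x**2 - 3 * y**2 + z - 1) / (2 * z - 2 * r - 2)
    return [dx, dy, 0.0, dr, drho]

# Illustrative start near matter domination with small dark-energy and radiation parts.
u0 = [0.01, 0.01, 0.0, 0.01, 0.1]
sol = solve_ivp(model1_rhs, (0.0, 15.0), u0, rtol=1e-8, atol=1e-10)

x, y, z, r, rho = sol.y[:, -1]
print("late-time state (x, y, z, r, rho):", np.round(sol.y[:, -1], 4))
print("Omega_de =", round(x**2 + y**2 + z + r, 4))   # approaches 1 at the attractor
```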
We discuss the stability behavior of the above system of equations (<ref>-<ref>) at each equilibrium points with the scalar potential function V(ϕ)=V_0e^-λϕ. To evaluate the equilibrium points for the system of equations (<ref>-<ref>), we solve the equations x'=y'=z'=r'=ρ'=λ'=0. We found eight equilibrium points which are as follows
(1) Equilibrium Point (A): x=0, y=0, z=0, r=0, ρ=0,
(2) Equilibrium Point (B): x=0, y=0, z=0, r=1, ρ=0,
(3) Equilibrium Point (C): x=0, y=0, z=ν, r=μ, ρ=0 where μ=1-ν, ν≠0,
(4) Equilibrium Point (D): x=δ, y=0, z=0, r=γ, ρ=0 where γ=1-δ^2, δ≠0,
(5) Equilibrium Point (E): x=√(3/2)/λ, y=1/λ√(3/2), z=0, r=0, ρ=0 where λ≠0,
(6) Equilibrium Point (F): x=0, y=ζ, z=0, r=ϵ, ρ=0 where ϵ=1-ζ^2, ζ≠0,
(7) Equilibrium Point (G): x=0, y=τ, z=η, r=χ, ρ=0 where χ=1-τ^2-η^2,
(8) Equilibrium Point (H): x=λ/√(6), y=√(1-λ^2/6), z=0, r=0, ρ=0.
The characteristic values, evaluated from the Jacobian matrix at each equilibrium point, are presented in Table 1. The density parameters, the deceleration parameter and the EoS parameters at the equilibrium points are presented in Table 2.
From Table 1, for point A, two characteristic values λ_1 and λ_4 are negative and one characteristic value λ_2 is positive. Since the characteristic values have mixed signs, the equilibrium point A is a saddle point, and from Figure I (right plot) the trajectories eventually diverge away from this point. From Table 2, at point A the density parameters Ω_m=1 and Ω_r=Ω_de=0 represent the matter-dominated stage of the Universe, and q=1/2 with EoS parameters ω_de=ω_tot=0 shows the decelerated stage of the Universe. For point B, the characteristic values are (-3,0,-3,-2). Since all the characteristic values are negative, point B is a stable point, and from Figure I (left plot) all trajectories are directed towards this equilibrium point. At this point the density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the EoS parameters ω_de=ω_tot=-1 together with the deceleration parameter q=-1 indicate the accelerated phase of the Universe. For point C, the characteristic values are (-3,0,-3,-2). Since all the characteristic values are negative, point C is a stable point, and from Figure I (left plot) all trajectories are directed towards it. At this point the density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and ω_de=ω_tot=-1 with q=-1 indicates the accelerated phase of the Universe. Further, the characteristic values for point D are ((6-9δ^2)/(δ^2-2), (6+√(6)δλ)/2, 6(1-3δ^2)/(δ^2-2), (4-5δ^2)/(δ^2-2)). At δ=1, these values become (3,(6+√(6)λ)/2,12,1). Since all the values are positive, point D is an unstable point, and from Figure I (right plot) all trajectories move away from this point. The density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe at δ=1, while the deceleration parameter q=2 and EoS parameters ω_de=ω_tot=1 indicate the decelerated phase of the Universe. The characteristic values for point E are (-3/2, -9/(2λ^2), 0, -1/2). For λ^2=3, these become (-3/2,-3/2,0,-1/2). Since all the characteristic values are negative, point E is a stable node, and from Figure I (right plot) all trajectories are directed towards it. The density parameters are Ω_m=1-3/λ^2, Ω_de=3/λ^2, Ω_r=0. At λ^2=3, these become Ω_m=0, Ω_r=0 and Ω_de=1, representing a dark energy dominated Universe, while the deceleration parameter q=1/2 and EoS parameters ω_de=ω_tot=0 represent the decelerated stage of the Universe. The characteristic values for point F are (-3, 3ζ^2/(ζ^2-2), -6(1-ζ^2)/(ζ^2+2), -2). At ζ^2=1, these become (-3,-3,0,-2). All the characteristic values are negative, so point F is a stable node. Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the deceleration parameter q=-1 with EoS parameters ω_de=ω_tot=-1 indicates the accelerated phase of the Universe. The characteristic values for point G are (-3, 3τ^2/(τ^2+2η-2), -6(τ^2+η-1)/(τ^2+2η-2), -2). At τ^2=2, η=-1, these become (-3,-3,0,-2). All the characteristic values are negative, so point G is a stable node, and from Figure I (right plot) all trajectories are directed towards it. Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and q=-1 with ω_de=ω_tot=-1 indicates the accelerated phase of the Universe.
The characteristic values for the point H are (λ^2-3,λ^2-6/2,2λ^2,λ^2-4/2) and for λ^2<0, all the characteristic values are negative and for λ^2>0, all the characteristic values are positive. Thus, the point H is stable for λ^2<0 and unstable saddle for λ^2>0. The density parameters Ω_m=Ω_r=0, Ω_de=1 represents the dark energy dominant Universe, the deceleration parameter q=-1, EoS parameters ω_de=ω_tot=-1 assure the accelerated phase of the Universe.
§.§ Model 2: G(T) = ζ T ln(ψ T)
We assume the form of G(T) as <cit.>,
G(T) = ζ T ln (ψ T),
where ζ and ψ are the free parameters. By using equation (<ref>), the expressions of pressure and energy density for dark energy are as follows:
ρ_de=ϕ̇^2/2+V(ϕ)+2ζ T+ζ T ln (ψ T) ,
p_de=ϕ̇^2/2-V(ϕ)-12ζ H^2-6ζ H^2ln (6ψ H^2)-4Ḣ(ζ ln (6ψ H^2)+3ζ) ,
By using equations (<ref>-<ref>), the EoS parameter is as follows:
ω_de=p_de/ρ_de=ϕ̇^2/2-V(ϕ)-12ζ H^2-6ζ H^2ln (6ψ H^2)-4Ḣ(ζ ln (6ψ H^2)+3ζ)/ϕ̇^2/2+V(ϕ)+2ζ T+ζ T ln (ψ T).
To discuss the stability behavior of the model, we assume new dimensionless variables from equation (<ref>) to find the set of differential equations are as follows:
x=kϕ̇/√(6)H, y=k√(V)/√(3)H, z=4ζ k^2, r=2ζ ln (6ψ H^2)k^2
ρ=k√(ρ_r)/√(3)H, λ=-V'(ϕ)/kV(ϕ), σ=V”(ϕ)V(ϕ)/V'(ϕ)^2.
where k^2=1. The density parameters with regard to the dimensionless variables are obtained as,
Ω_r=k^2ρ_r/3H^2=ρ^2,
Ω_m=k^2ρ_m/3H^2=1-x^2-y^2-z-r-ρ^2,
Ω_de=k^2ρ_de/3H^2=x^2+y^2+z+r.
From the field equations of f(T,ϕ) gravity (<ref>-<ref>) along with the above variables of equation (<ref>), we will obtain
Ḣ/H^2=-ρ^2+3(r-x^2+y^2+z-1)/2-3z-2r,
By using equation (<ref>), the expression of deceleration parameter and the EoS parameter are obtained as,
q=-1-Ḣ/H^2=ρ^2-r+3x^2-3y^2+1/2-3z-2r,
ω_de=6y^2-6x^2-3z-ρ^2(3z-2r)/3(x^2+y^2+r+z)(3z+2r-2),
By using the relation of the deceleration parameter and the EoS parameter (ω_tot), the expression of ω_tot in terms of the variables is
ω_tot =2 q-1/3=6x^2-6y^2+3z+2ρ^2/6-9z-6r.
From the variables in equation (<ref>), the set of differential equations can be derived as,
x'=√(3/2)y^2λ-3x-x[ρ^2-3(r-x^2+y^2+z-1)]/3z+2r-2,
y'=-√(3/2)xyλ-y[ρ^2-3(r-x^2+y^2+z-1)]/3z+2r-2,
z'=0,
r'=ρ^2z-3z(r-x^2+y^2+z-1)]/3z+2r-2,
ρ'=ρ(1-ρ^2-r-3x^2+3y^2-3z)/3z+2r-2,
λ'=-√(6)x(σ-1)λ^2,
We discuss the stability behavior of the above system of equations (<ref>-<ref>) at each equilibrium point with the scalar potential function V(ϕ)=V_0e^-λϕ. To evaluate the equilibrium points for the system of equations (<ref>-<ref>), we solve the equations x'=y'=z'=r'=ρ'=λ'=0. We find seven equilibrium points, which are as follows
(1) Equilibrium Point (A_1): x=0, y=0, z=0, r=η_1, ρ=0,
(2) Equilibrium Point (B_1): x=0, y=0, z=η_2, r=1-η_2, ρ=0 where η_2≠0,
(3) Equilibrium Point (C_1): x=η_3, y=0, z=0, r=1-η_3^2, ρ=0 where η_3≠0,
(4) Equilibrium Point (D_1): x=0, y=0, z=0, r=1-η_4^2, ρ=η_4 where η_4≠0,
(5) Equilibrium Point (E_1): x=√(3/2)/λ, y=±1/λ√(3/2), z=0, r=η_5, ρ=0 where λ≠0, η_5≠1,
(6) Equilibrium Point (F_1): x=0, y=η_6, z=0, r=1-η_6^2, ρ=0 where η_6≠0,
(7) Equilibrium Point (G_1): x=0, y=0, z=η_7, r=η_8, ρ=η_9 where η_9=±√(3η_7+3η_8-3),
The characteristic values evaluated by solving the Jacobian matrix at each equilibrium points are presented in Table 3 whereas the density parameters, deceleration and EoS parameters for the equilibrium points are presented in Table 4.
From Table 3, for point A_1, two characteristic values σ_1 and σ_3 are negative and one characteristic value σ_2 is positive. Since the characteristic values have mixed signs, the equilibrium point A_1 is a saddle point, and from Figure II (middle plot) the trajectories eventually diverge away from this point. From Table 4, at point A_1 the density parameters Ω_m=1 and Ω_r=Ω_de=0 represent the matter-dominated stage of the Universe, and q=1/2 with EoS parameters ω_de=ω_tot=0 shows the decelerated stage of the Universe. For point B_1, the characteristic values are (-3,0,-3,-2). Since all the characteristic values are negative, point B_1 is a stable point, and from Figure II (left plot) all trajectories are directed towards this equilibrium point. At this point the density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the EoS parameters ω_de=ω_tot=-1 together with the deceleration parameter q=-1 indicate the accelerated phase of the Universe. For point C_1, the characteristic values are (1,3-√(3/2)λη_3,3,0). Since all the characteristic values are positive, point C_1 is an unstable point, and from Figure II (right plot) the trajectories move away from this equilibrium point. At this point the density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, while the EoS parameters ω_de=ω_tot=1 and deceleration parameter q=2 indicate the decelerated phase of the Universe. Further, the characteristic values for point D_1 are (-3,1,2,0). Since one characteristic value σ_1 is negative and the other two values σ_2 and σ_3 are positive, point D_1 is an unstable saddle point, and from Figure II (middle plot) the trajectories eventually diverge away from this point. The density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, whereas the deceleration parameter q=1 and EoS parameters ω_de=ω_tot=1/3 indicate the decelerated phase of the Universe. The characteristic values for point E_1 are (-5/4,3/2,0,-1/2). Since one characteristic value σ_2 is positive and the other two values σ_1 and σ_3 are negative, point E_1 is an unstable saddle point, and from Figure II (middle plot) the trajectories eventually diverge away from this point. The density parameters Ω_m=1, Ω_r=0 and Ω_de=0 represent the matter-dominated stage of the Universe, while the deceleration parameter q=1/2 and EoS parameters ω_de=ω_tot=0 represent the decelerated stage of the Universe. The characteristic values for point F_1 are (-3,-2,0,-3). All the characteristic values are negative, so point F_1 is a stable node. Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the deceleration parameter q=-1 together with the EoS parameters ω_de=ω_tot=-1 indicates the accelerated phase of the Universe. The characteristic values for point G_1 are (-3,0,-3,-1). All the characteristic values are negative, so point G_1 is a stable node, and from Figure II (left plot) all trajectories are directed towards this equilibrium point. Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and q=-1 with ω_de=ω_tot=-1 indicates the accelerated phase of the Universe.
§ HYBRID EXPANSION LAW IN F(T,Φ) GRAVITY MODEL
Several authors have used power-law and exponential-law forms of the scale factor a(t) to construct cosmological models that describe the evolution history of the Universe in detail. However, these forms do not capture the transition from the decelerated phase of the early Universe to the accelerated phase of the present Universe, since they imply that the deceleration parameter q=-1-Ḣ/H^2 remains constant throughout the cosmic evolution. The hybrid expansion law is one alternative that resolves this problem. We assume the relation between the scale factor and the Brans-Dicke scalar field as follows <cit.>,
ϕ=ϕ_0a^γ,
where ϕ_0 and γ are constants. Current observations indicate that the present Universe is in an accelerated stage, whereas in the past it was in a decelerated stage. Thus, to model the transition properly, we consider the scale factor a(t) given by the following hybrid expansion law <cit.>,
a=a_1t^δe^σ t,
where a_1, δ and σ are real positive constants. By substituting the value of a in equation (<ref>) we get,
ϕ=ϕ_0(a_1t^δe^σ t)^γ,
Figure III represents the nature of the scalar field (ϕ) vs time t by the choice of the parameters a_1=0.4, δ=0.7, σ=1.5 and γ=0.05. The figure shows that ϕ increases over time. From equation (<ref>), the Hubble parameter can be evaluated as,
H= ȧ/a=σ+δ/t.
The deceleration parameter is derived by using the equation (<ref>) as,
q=-1-Ḣ/H^2=-1+δ/(δ+σ t)^2.
An important step in cosmology is to express all the parameters in terms of redshift. The relation between the scale factor a(t) and the redshift z is a(t)=a_0/(1+z), where a_0=1 is the current value of the scale factor. The relation between time and redshift is then obtained as,
t(z)=δ/σW[σ/δ(1/a_1(1+z))^1/δ],
where W represents the Lambert W function, which is also called the "product logarithm".
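As a quick numerical illustration of this change of variables (using the illustrative parameter values a_1=0.4, δ=0.7, σ=1.5 quoted for the figures), the sketch below evaluates t(z) via the Lambert W function and then the deceleration parameter q(z)=-1+δ/(δ+σ t)^2.

```python
import numpy as np
from scipy.special import lambertw

a1, delta, sigma = 0.4, 0.7, 1.5   # parameter values used for the figures

def t_of_z(z):
    """Cosmic time from redshift for a(t) = a1 * t**delta * exp(sigma*t), a0 = 1."""
    arg = (sigma / delta) * (1.0 / (a1 * (1.0 + z)))**(1.0 / delta)
    return (delta / sigma) * np.real(lambertw(arg))

def q_of_z(z):
    """Deceleration parameter q = -1 + delta / (delta + sigma*t)^2."""
    t = t_of_z(z)
    return -1.0 + delta / (delta + sigma * t)**2

for z in (5.0, 2.0, 1.0, 0.0, -0.9):
    print(f"z = {z:5.2f}   t = {t_of_z(z):7.3f}   q = {q_of_z(z):7.3f}")
# q is positive at early times (large z) and approaches -1 as z -> -1.
```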
The plot of the deceleration parameter vs redshift is shown in Figure IV for a suitable choice of parameters. In cosmology, the deceleration parameter describes the evolution history of the Universe from the decelerated phase of the early Universe to the accelerated phase of the current Universe. The Universe is in a decelerated stage for q>0, the expansion is accelerated for q<0, and q=0 corresponds to marginal expansion. Current observations show that the expansion of the present Universe is accelerated and the present value of the deceleration parameter lies in the range -1≤ q≤ 0. We express the deceleration parameter q in terms of the redshift z by using equations (<ref>) and (<ref>). From Figure IV, we observe that for the parameters a_1=0.4, δ=0.7 and σ=1.5, the deceleration parameter takes positive values at the initial stage of the Universe, becomes negative at present, and finally goes to -1 at z=-1.
For Model I, we assume the G(T) function as G(T) = α T+β/T. By using this function in Friedmann equations (<ref>-<ref>), we evaluated the expressions of pressure and energy density as,
ρ_de=1/2[ϕ_0^2γ^2a_1^2γ(t^δe^σ t)^2γ(δ/t+σ)^2]+V_0e^-λϕ_0[a_1t^δe^σ t]^γ+6α(δ/t+σ)^2-β/2(δ/t+σ)^2,
p_de=1/2[ϕ_0^2γ^2a_1^2γ(t^δe^σ t)^2γ(δ/t+σ)^2]-V_0e^-λϕ_0[a_1t^δe^σ t]^γ-6α(δ/t+σ)^2+β/2(δ/t+σ)^2+4δ/t^2(α+β/12(δ/t+σ)^4).
To derive the above expressions, we use ϕ̇=ϕ_0γ a_1^γ(t^δe^σ t)^γ(δ/t+σ) and the scalar potential V=V_0e^-λϕ_0[a_1t^δe^σ t]^γ. The behavior of the energy density vs redshift in f(T,ϕ) gravity Model I is given in Figure V. From Fig. V, the energy density takes positive values for all z and increases with redshift: at the initial stage its value is a large positive number, and it goes to zero as z→-1. The behavior of the pressure vs redshift is also given in Figure V. From the figure, the pressure (p) starts from a large positive value at the initial stage and goes to zero at the end, i.e. p→ 0 as z→-1 in the future. From current observations, our present Universe is in an accelerated phase with positive energy density and negative pressure. We obtain a positive energy density for all z and a negative pressure in the f(T,ϕ) gravity model, which describes the acceleration of the present Universe.
For Model II, we assume the G(T) function as G(T) = ζ T ln(ψ T). By using this function in Friedmann equations (<ref>-<ref>), we evaluated the expressions of pressure and energy density as,
ρ_de=ϕ̇^2/2+V(ϕ)+12ζ(δ/t+σ)^2 +6ζ(δ/t+σ)^2ln[6ψ(σ+δ/t)^2],
p_de=
ϕ̇^2/2-V(ϕ)
-12ζ(δ/t+σ)^2-6ζ(δ/t+σ)^2ln[6ψ(σ+δ/t)^2]+4δ/t^2[ζ ln[6ψ(σ+δ/t)^2]+3ζ],
To derive the above expressions, we use ϕ̇=ϕ_0γ a_1^γ(t^δe^σ t)^γ(δ/t+σ) and the scalar potential V=V_0e^-λϕ_0[a_1t^δe^σ t]^γ. The nature of energy density vs redshift in f(T,ϕ) gravity for Model II is given in Figure VI. From Fig. VI, we can say that energy density takes the positive values for all z and the behavior of energy density function is increasing with redshift. At initial stage, the value of energy density is large positive number and finally it goes to zero for z=-1. The behavior of pressure vs redshift is given in Figure VI. From the Figure, we can say that for our model the pressure (p) starts with large positive number at initial stage and at the end it goes to zero this means p→ 0 at z→-1 in future. From current observations, our present Universe is in accelerated phase with positive energy density and negative pressure. We get the value of energy density is positive for all z and pressure is negative in f(T,ϕ) gravity model which describes the acceleration of the present Universe.
For Model I, from equations (<ref>-<ref>), we evaluate the EoS parameter as,
ω_de=p_de/ρ_de=ϕ̇^2/2-V(ϕ)-6α(δ/t+σ)^2+β/2(δ/t+σ)^2+4δ/t^2(α+β/12(δ/t+σ)^4)/ϕ̇^2/2+V(ϕ)+6α(δ/t+σ)^2-β/2(δ/t+σ)^2,
For Model II, from equations (<ref>-<ref>), we evaluate the EoS parameter as,
ω_de=ϕ̇^2/2-V(ϕ)
-12ζ(δ/t+σ)^2-6ζ(δ/t+σ)^2ln[6ψ(σ+δ/t)^2]+4δ/t^2[ζ ln[6ψ(σ+δ/t)^2]+3ζ]/ϕ̇^2/2+V(ϕ)+12ζ(δ/t+σ)^2 +6ζ(δ/t+σ)^2ln[6ψ(σ+δ/t)^2],
The behavior of the EoS parameter vs redshift is given in Figure VII. From current observations, the EoS parameter lies in the range -1≤ω≤ 0. The value ω=1 describes a stiff fluid, -1<ω≤ -1/3 corresponds to the quintessence phase, ω=-1 represents the ΛCDM model, and ω<-1 describes the phantom model. From Fig. VII, we see that at z=0 the parameter ω lies in the quintessence phase and approaches -1 at z=-1, which represents the ΛCDM model. Thus, from Fig. VII, both models describe the accelerating stage of the Universe. Also, for Model I the present value of the EoS parameter is ω_0=-0.992 and for Model II it is ω_0=-0.883, which are consistent with the current Planck observational data.
§ ENERGY CONDITIONS
In cosmology, the energy conditions play a significant role in determining the properties of the Universe. They are used to verify the accelerated epoch of the Universe and can be derived from the Raychaudhuri equations. The energy conditions in f(T,ϕ) gravity are <cit.>,
(1) Weak Energy Conditions (WEC): ρ_de+p_de≥ 0, ρ_de≥ 0,
(2) Null Energy Conditions (NEC): ρ_de+p_de≥ 0,
(3) Dominant Energy Conditions (DEC): ρ_de-p_de≥ 0,
(4) Strong Energy Conditions (SEC): ρ_de+3p_de≥ 0.
For Model I, from equations (<ref>-<ref>), we evaluated the expressions of WEC, NEC, DEC, SEC are as follows
WEC ⇒
ρ_de+p_de=ϕ_0^2γ^2a_1^2γ(t^δe^σ t)^2γ(δ/t+σ)^2+4δ/t^2(α+β/12(δ/t+σ)^4) ≥ 0,
ρ_de≥ 0,
NEC ⇒
ρ_de+p_de=ϕ_0^2γ^2a_1^2γ(t^δe^σ t)^2γ(δ/t+σ)^2+4δ/t^2(α+β/12(δ/t+σ)^4) ≥ 0,
DEC ⇒
ρ_de-p_de=2V_0e^-λϕ_0[a_1t^δe^σ t]^γ+12α(δ/t+σ)^2-β/2(δ/t+σ)^2-4δ/t^2(α+β/12(δ/t+σ)^4) ≥ 0,
SEC ⇒
ρ_de+3p_de = 2[ϕ_0^2γ^2a_1^2γ(t^δe^σ t)^2γ(δ/t+σ)^2]-2V_0e^-λϕ_0[a_1t^δe^σ t]^γ-12α(δ/t+σ)^2
+β/2(δ/t+σ)^2+12δ/t^2(α+β/12(δ/t+σ)^4) ≥ 0.
The plot of the energy conditions vs redshift is presented in Figure VIII. From Fig. VIII, we see that the WEC, NEC and DEC remain in the positive region for all z, so these conditions are satisfied, whereas the SEC lies in the negative region, showing that it is violated. The violation of the SEC indicates the accelerated expansion of the Universe.
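The sign pattern of these conditions for Model I can be cross-checked numerically, as sketched below. The parameter values a_1, δ, σ, γ, α, β and λ follow the choices quoted in the text, while ϕ_0 and V_0 are assumed constants, so the output is purely illustrative.

```python
import numpy as np
from scipy.special import lambertw

a1, delta, sigma = 0.4, 0.7, 1.5          # hybrid expansion law parameters
gamma, lam = 0.05, 0.005                   # scalar-field parameters from the text
alpha, beta = 1.2, 1.5                     # Model I parameters from the text
phi0, V0 = 1.0, 1.0                        # assumed constants (not given in the text)

def t_of_z(z):
    arg = (sigma / delta) * (1.0 / (a1 * (1.0 + z)))**(1.0 / delta)
    return (delta / sigma) * np.real(lambertw(arg))

def model1_rho_p(z):
    t = t_of_z(z)
    H, Hdot = sigma + delta / t, -delta / t**2
    a = a1 * t**delta * np.exp(sigma * t)
    phi = phi0 * a**gamma
    phidot = phi0 * gamma * a**gamma * H
    V = V0 * np.exp(-lam * phi)
    rho = 0.5 * phidot**2 + V + 6 * alpha * H**2 - beta / (2 * H**2)
    p = (0.5 * phidot**2 - V - 6 * alpha * H**2 + beta / (2 * H**2)
         - 4 * Hdot * (alpha + beta / (12 * H**4)))
    return rho, p

for z in (2.0, 1.0, 0.0):
    rho, p = model1_rho_p(z)
    print(f"z={z:3.1f}  NEC: rho+p={rho + p:8.3f}   SEC: rho+3p={rho + 3 * p:9.3f}")
# NEC stays positive while SEC turns negative, matching the behavior in Figure VIII.
```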
§ STATEFINDER PARAMETERS
Several dark energy models have been proposed to describe the behavior of dark energy and the accelerating stage of the Universe. To discriminate between these models, Sahni et al. introduced a set of parameters called the statefinder parameters {r,s} <cit.>. The statefinder parameters r and s are obtained as follows
r=⃛ a/aH^3,
s=(r-1)/3(q-1/2).
The parameter r in equation (<ref>) can be expressed as,
r=2q^2+q-q̇/H.
From equations (<ref>) and (<ref>), we evaluated the form of r & s are as follows,
r=1-3δ/(δ+σ t)^2+2δ/(δ+σ t)^3,
s=2δ/(δ+σ t)^3-3δ/(δ+σ t)^2/3δ/(δ+σ t)^2-9/2.
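These closed-form expressions can be checked numerically; the short sketch below (with the illustrative values δ=0.7 and σ=1.5 used for the figures) confirms that (r,s) approaches the ΛCDM point {1,0} at late times.

```python
import numpy as np

delta, sigma = 0.7, 1.5   # illustrative parameters from the figure captions

def statefinder(t):
    """r and s for the hybrid expansion law a = a1 * t**delta * exp(sigma*t)."""
    u = delta + sigma * t
    r = 1 - 3 * delta / u**2 + 2 * delta / u**3
    s = (2 * delta / u**3 - 3 * delta / u**2) / (3 * delta / u**2 - 9 / 2)
    return r, s

for t in (0.5, 1.0, 5.0, 50.0):
    r, s = statefinder(t)
    print(f"t={t:6.1f}  r={r:7.4f}  s={s:7.4f}")
# As t grows, (r, s) -> (1, 0), the LambdaCDM point noted in the text.
```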
The different relations of the statefinder parameters r & s illustrate the several dark energy models which are as follows <cit.>
(1) ΛCDM model for r=1, s=0,
(2) Quintessence model for r<1, s>0,
(3) Chaplygin gas model for r>1, s<0.
The r-s plot for the parameters a_1=0.4, δ=0.7, σ=1.5 and γ=0.05 is presented in Figure IX. From Figure IX, we observe that at the initial stage our model starts in the range r>1, s<0, which corresponds to the Chaplygin gas model; it then passes through the quintessence region r<1, s>0 and finally reaches the point {r,s}={1,0}. The values r=1, s=0 of the statefinder parameters correspond to the ΛCDM model. In Figure X, we plot the r-q plane to discuss the behavior of our model for suitable parameters. In this figure, the middle line shows the ΛCDM model and divides the plane into two parts, where the region above the line belongs to the Chaplygin gas model and the region below it represents the quintessence model. The trajectory in the r-q plane starts at a point with q>0 and r>0, which represents the SCDM model, then passes through the quintessence region with q<0 and r<1, and finally reaches the de Sitter stage at q=-1, r=1.
§ CONCLUSIONS
We investigated the dynamical system analysis of f(T,ϕ) gravity theory. We assume two forms of G(T), namely (i) G(T) = α T+β/T and (ii) G(T) = ζ T ln(ψ T), where α, β, ζ and ψ are free parameters, and analyze the stability behavior of these models through phase portraits. We derive the system of differential equations from the Friedmann equations by introducing the dimensionless variables (x,y,z,r,ρ,λ,σ). To discuss the stability, we find the equilibrium points of the set of autonomous differential equations. We find eight equilibrium points for Model I, which are A (0,0,0,0,0), B (0,0,0,1,0), C (0,0,ν,1-ν,0), D (δ,0,0,1-δ^2,0), E (√(3/2)/λ,1/λ√(3/2),0,0,0), F (0,ζ,0,1-ζ^2,0), G (0,τ,η,1-τ^2-η^2,0), H (λ/√(6),√(1-λ^2/6),0,0,0), and seven critical points for Model II, which are A_1 (0,0,0,η_1,0), B_1 (0,0,η_2,1-η_2,0), C_1 (η_3,0,0,1-η_3^2,0), D_1 (0,0,0,1-η_4^2,η_4), E_1 (√(3/2)/λ,±1/λ√(3/2),0,η_5,0), F_1 (0,η_6,0,1-η_6^2,0) and G_1 (0,0,η_7,η_8,±√(3η_7+3η_8-3)). The equilibrium points B and C are stable, while E is stable at λ^2=3, F is stable at ζ^2=1 and G is stable at τ^2=2, η=-1. For the points B, C, F and G, Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the EoS parameters ω_de=ω_tot=-1 with deceleration parameter q=-1 indicate the accelerated phase of the Universe; for point E at λ^2=3, Ω_m=0, Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, while the deceleration parameter q=1/2 and EoS parameters ω_de=ω_tot=0 represent the decelerated stage of the Universe. The equilibrium points B_1, F_1 and G_1 are stable points of Model II. For these three points, the density parameters Ω_m=Ω_r=0 and Ω_de=1 represent a dark energy dominated Universe, and the deceleration parameter q=-1 with EoS parameters ω_de=ω_tot=-1 indicates the accelerated phase of the Universe. The current observational values are ω_de= -1.035^+0.055_-0.059 (Supernova Cosmology Project), ω_de= -1.073^+0.090_-0.089 (WMAP+CMB), ω_de= -1.03±0.03 (Planck 2018) <cit.> for the EoS parameter and q=-1.08±0.29 <cit.> for the deceleration parameter. From Table 2 and Table 4, the obtained values of the EoS parameter (ω_de) and deceleration parameter (q) of the above two f(T,ϕ) models are consistent with the observational data.
We assume the scale factor to be given by the hybrid expansion law. We rewrite all the physical parameters in terms of redshift by using the relation t(z)=δ/σW[σ/δ(1/a_1(1+z))^1/δ]. According to several observational data sets, the Universe is at present in an accelerated stage, so the deceleration parameter lies in the range -1≤ q≤0. For our model, we obtain the deceleration parameter by using the scale factor and the time-redshift relation. We plot the deceleration parameter vs redshift in Figure IV for a_1=0.4, δ=0.7 and σ=1.5. From Figure IV, the deceleration parameter takes positive values at the initial stage of the Universe, becomes negative at present, and finally goes to -1 at z=-1; that is, the Universe starts from a decelerated phase, passes through a transition, and finally reaches the accelerated phase at z=-1, which satisfies present observations. We plot the energy density vs redshift in Figure V for both models for a suitable choice of parameters. The energy density takes positive values for all z and goes to zero at z=-1. Figure VI represents the behavior of the pressure vs redshift for both models: at the initial stage the pressure takes large values and it goes to zero as z→ -1. The positive energy density for all z together with the negative pressure describes the acceleration of the present Universe. The behavior of the EoS parameter vs redshift is given in Figure VII. The EoS parameter ω lies in the quintessence phase at z=0 and goes to -1 at z=-1. For Model I we find the numerical value of the EoS parameter to be ω_0=-0.992 for the parameters a_1=0.4, δ=0.7, σ=1.5, γ=0.05, α=1.2, β=1.5 and λ=0.005, and for Model II we find ω_0=-0.883, which satisfy the current Planck observational data. We examine the behavior of the energy conditions vs redshift in Figure VIII: the WEC, NEC and DEC are satisfied, but the SEC is not satisfied. In Sec. 6, Figures IX and X represent the behavior of the statefinder parameters. In the r-s plane, our model initially lies in the quintessence phase and finally goes to the ΛCDM model; in the r-q plane, our model starts from the SCDM model and finally reaches the ΛCDM model. Thus, we conclude that the present models satisfy all the observational data.
Riess09 A. G. Riess et al., Astron. J. 116 (1998) 1009.
Perlmutter99 S. Perlmutter et al., Astrophys. J. 517 (1999) 565.
Spergel03 D. N. Spergel et al., Astrophys. J. Suppl. 148 (2003) 175.
Cole05 S. Cole et al., Mon. Not. R. Astron. Soc. 362 (2005) 505.
Sami06 E.J. Copeland, M. Sami, S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006).
Kowalski08 M. Kowalski et al., Astrophys. J. 686 (2008) 749.
Spergel07 D. N. Spergel et al., Astrophys. J. Suppl. 170 (2007) 377.
Komatsu11 E. Komatsu et al., Astrophys. J. Suppl. 192 (2011) 18.
Wetterich88 C. Wetterich, Nucl. Phys. B 302, 668 (1988).
Ratra88 B. Ratra, P. Peebles, Phys. Rev. D 37, 3406 (1988).
Chiba00 T. Chiba, T. Okabe, M. Yamaguchi, Phys. Rev. D 62, 023511 (2000).
Mukhanov00 C. Armendariz-Picon, V.F. Mukhanov, P.J. Steinhardt, Phys. Rev. Lett. 85, 4438 (2000).
Nicolis09 A. Nicolis, R. Rattazzi, E. Trincherini, Phys. Rev. D 79, 064036 (2009).
Deffayet09 C. Deffayet, G. Esposito-Farese, A. Vikman, Phys. Rev. D 79, 084003 (2009).
Capozziello03 S. Capozziello, S. Carloni and A. Troisi, Rec. Res. Develop. Astron. Astrophys. 1 (2003) 625.
Mirza17 Behrouz Mirza, Fatemeh Oboudiat, "Constraining f(T) gravity by dynamical system analysis", JCAP 11 (2017) 011.
Einstein28 A. Einstein, Riemannian Geometry with Maintaining the notion of distant parallelism, Sitz. Preuss. Akad. Wiss. (1928) 217.
Andrade00 V.C. de Andrade, L.C.T. Guillen and J.G. Pereira, "Gravitational energy momentum density in teleparallel gravity", Phys. Rev. Lett., 84 (2000) 4533 [gr-qc/0003100] [INSPIRE].
Bengochea09 G. R. Bengochea, R. Gabriel, R. Ferraro, Phys. Rev. D 79 2009, 124019.
Levi22 L.K. Duchaniya, B. Mishra, Jackson Levi Said, arXiv:2210.11944v1 [gr-qc].
Manuel21 Manuel Gonzalez-Espinozaa, Giovanni Otalora, Eur. Phys. J. C, 05 2021, DOI: 10.1140/epjc/s10052-021-09270-x.
Kadam23 L. K. Duchaniya, S. A. Kadam, Jackson Levi Said, B. Mishra, Eur. Phys. J. C, 83, 27 (2023), https://doi.org/10.1140/epjc/s10052-022-11155-6.
Amit23 Amit Samaddar, S. Surendra Singh, Eur. Phys. J. C, 83, 283 (2023), https://doi.org/10.1140/epjc/s10052-023-11458-2.
Singh19 S. Surendra Singh, C. H. Sonia, Advances in High Energy Physics 2020:1-18.
Singh23 Amit Samaddar, S. Surendra Singh, Shivangi Rathore, "Stability analysis of cosmological models coupled minimally with scalar field in f(Q) gravity", arXiv:2302.02999v1.
Shah21 Parth Shah, Gauranga C. Samanta, Eur. Phys. J. C, 79 (2019), 414.
Sonia22 C.H. Sonia, S. Surendra Singh, Eur. Phys. J. C. C82, 10 (2022).
Arcos04 H.I. Arcos, J.G. Pereira, Int. J. Mod. Phys. D 13, 2193 (2004).
Pereira12 R. Aldrovandi, J.G. Pereira, Teleparallel gravity: an introduction, vol. 173 (Springer Science & Business Media, 2012).
Otalora20 M. Gonzalez-Espinoza, G. Otalora, arXiv:2011.08377 (2020).
Hohmann18 M. Hohmann, L. Järv, U. Ualikhanova, Phys. Rev. D, 97, 104011 (2018).
M06 E. J. Copeland, M. Sami, S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006).
Rafael17 Rafael Luís, Elias Rodrigues, Discrete Dynamics in Nature and Society, 6186354(2017).
Wright29 Böhmer, C. G., Tamanini, N., Wright, M., Phys. Rev. D, 91, 12 (2015a), 123002. arXiv:1501.06540.
Wright30 Böhmer, C. G., Tamanini, N., Wright, M., Phys. Rev. D, 91, 12 (2015b), 123003. arXiv:1502.04030.
Biswas15 Sujay Kr. Biswas, Subenoy Chakraborty, International Journal of Modern Physics D, 7, 1550046 (2015).
Chaubey32 R. Chaubey, Rakesh Raushan, Astrophys Space Sci, 361:215 (2017).
Campo33 Sergio del Campo, Ramón Herrera, Diego Pavón, arXiv:0812.2210.
Myrzakulov11 Ratbay Myrzakulov, Eur. Phys. J. C, 71, 1752 (2011), DOI 10.1140/epjc/s10052-011-1752-9.
Setare12 M.R. Setare, N. Mohammadipour, JCAP, 11, 030 (2012), http://iopscience.iop.org/1475-7516/2012/11/030.
Yadav21 A. K. Yadav, A.M. Alshehri, Nafis Ahmad, G.K. Goswami, Mukesh Kumar, Physics of the Dark Universe, 31, 100738 (2021).
Bhardwaj19 A. K. Yadav, P. K. Sahoo, and V. Bhardwaj, Modern Physics Letters A, 34.19 (2019): 1950145.
Tripathy20 S. K. Tripathy et al., y Phys. Scr. 95 (2020) 115001.
Bennai22 M. Koussour, S.H. Shekh, M. Bennai, T. Ouali, Chinese Journal of Physics, DOI: 10.1016/j.cjph.2022.11.013.
Mishra22 B. Mishra et. al., International Journal of Geometric Methods, DOI: 10.1142/S0219887823500834.
Sahni03 Sahni, Varun, et al., Journal of Experimental and Theoretical Physics Letters 77.5 (2003): 201.
Alam03 U. Alam, V. Sahni, T. D. Saini, A. A. Starobinsky, Mon. Not. R. Astron. Soc., 344, 1057 (2003).
Aghanim20 N. Aghanim et al., Astron. Astrophys. 641, A6 (2020).
Amanullah10 R. Amanullah et al., Astrophys. J., 716, 712 (2010).
Hinshaw13 G. Hinshaw et al., Astrophys. J Suppl. Ser., 208, 19 (2013).
Camarena20 D. Camarena, V. Marra, Phys. Rev. Res., 2, 013028 (2020).
|
http://arxiv.org/abs/2306.04688v1
|
20230607180006
|
The Prevalence of the α-bimodality: First JWST α-abundance Results in M31
|
[
"David L. Nidever",
"Karoline Gilbert",
"Erik Tollerud",
"Charles Siders",
"Ivanna Escala",
"Carlos Allende Prieto",
"Verne Smith",
"Katia Cunha",
"Victor P. Debattista",
"Yuan-Sen Ting",
"Evan N. Kirby"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Nidever et al.
The Prevalence of the α-bimodality
17
2021
10.1017/xxxxx
Proceedings IAU Symposium
F. Tabatabaei, B. Barbuy & Y. Ting, eds.
^1Department of Physics, Montana State University, P.O. Box 173840, Bozeman, MT 59717-3840, USA.
email: [email protected]
^2Space Telescope Science Institute, Baltimore, MD, USA
^3Princeton University, 4 Ivy Lane, Princeton, NJ 08544, USA
^4The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA
^5Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
^6Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
^7NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA
^8 Institut d'Astrophysique de Paris, UMR 7095 CNRS, Sorbonne Université, 98bis Bd. Arago, 75014 Paris, France
^9Observatório Nacional, Rua General José Cristino, 77, 20921-400 São Cristóvão, Rio de Janeiro, RJ, Brazil
^10 Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ, 85721, USA
^11Jeremiah Horrocks Institute, University of Central Lancashire, Preston, PR1 2HE, UK
^12Research School of Astronomy & Astrophysics, Australian National University, Cotter Rd., Weston, ACT 2611, Australia
^13Research School of Computer Science, Australian National University, Acton, ACT 2601, Australia
^14Department of Physics and Astronomy, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556, USA
We present initial results from our JWST NIRSpec program to study the α-abundances in the M31 disk.
The Milky Way has two chemically-defined disks, the low-α and high-α disks, which are closely
related to the thin and thick disks, respectively.
The origin of the two populations and the α-bimodality between them is not entirely clear, although there are now several models that can reproduce the observed features. To help constrain the models and discern the origin, we have undertaken a study of the chemical abundances of the M31 disk using JWST NIRSpec, in order to determine whether stars in M31's disk also show an α-abundance bimodality. Approximately 100 stars were observed in our single NIRSpec field at a projected distance of 18 kpc from the M31 center. The 1-D extracted spectra have an average signal-to-noise ratio of 85, leading to a statistical metallicity precision of 0.016 dex, an α-abundance precision of 0.012 dex, and a radial velocity precision of 8 km s^-1 (mostly from systematics).
The initial results indicate that, in contrast to the Milky Way, there is no α-bimodality in the M31 disk, and no low-α sequence. The entire stellar population falls along a single chemical sequence very similar to the MW's high-α component which had a high star formation rate.
While this is somewhat unexpected, the result is not that surprising based on other studies that found the M31 disk
has a larger velocity dispersion than the MW and is dominated by a thick component. M31 has had a more active accretion and merger history than the MW which might explain the chemical differences.
galaxies: abundances – galaxies: stellar content – galaxies: structure – galaxies: evolution – Andromeda galaxy
The Prevalence of the α-bimodality:
First JWST α-abundance Results in M31
David L. Nidever^1,
Karoline Gilbert^2,
Erik Tollerud^2,
Charles Siders^1,
Ivanna Escala^3,4,
Carlos Allende Prieto^5,6,
Verne Smith^7,8,
Katia Cunha^8,9,10,
Victor P. Debattista^11,
Yuan-Sen Ting^12,13 and
Evan N. Kirby^14
July 31, 2023
=======================================================================================================================================================================================================================================
§ INTRODUCTION
It has long been known that the Milky Way's disk is composed of a thin and thick component <cit.>. More recently, it has been determined that the disk is also composed of two chemical populations in the [α/Fe]–[Fe/H] plane: the old (>8 Gyr), high α-abundance population and a younger (<8 Gyr), low α-abundance population <cit.>. There exists a valley or “gap” between these two populations at intermediate metallicities ([Fe/H]≈-1) giving rise to an α-bimodality which is clearly seen in the <cit.> volume-complete sample of solar neighborhood stars.
The SDSS/APOGEE survey <cit.> found that this chemical feature is widespread across the MW disk with only the relative strengths of the two components changing with position in the galaxy <cit.>.
The origin of the α-bimodality has been debated for decades.
The two-infall model of <cit.> proposed that there were two episodes of gas infall that produced the two chemical populations.
In contrast, <cit.> stated that the chemically bimodal appearance comes about naturally due to the quick transition of stellar populations at early times from high-α to low-α and from radial mixing of stars at various radii in a disk containing a radial metallicity gradient.
<cit.> took this further by building an analytical chemodynamical model that includes radial mixing and was able to reproduce the observed Milky Way abundances with some fine-tuning of the parameters.
Alternatively, <cit.> showed that a disk with clumpy star formation could reproduce the α-bimodality. Early in the galaxy's evolution an instability forms clumps of gas with high star formation rate (SFR) that quickly self-enrich in α-abundances and create the galaxy's high-α sequence.
At the same time, low-SFR star formation takes place throughout the galaxy and produces the low-α sequence.
Another model put forward by <cit.> suggests that gas from early high-SFR outflows into the halo were re-accreted at later times reducing the metallicity of the gas and creating the low-α sequence.
Finally, there are some galaxy simulations that can reproduce the Milky Way's α-bimodality by a gas-rich merger that brings in metal-poor gas around 8 Gyr ago and starts the formation of the low-α sequence <cit.>.
Since we are now in a situation where multiple models can explain the Milky Way abundance data,
we need a larger statistical sample than one to make further progress and rule-out models.
This can be achieved by measuring the α-abundances for a Milky Way-like galaxy, such as M31.
Existing [α/Fe] measurements of stars in M31 were primarily obtained with medium-resolution spectra (R∼ 3000 to 6000) using the DEIMOS multi-object spectrograph on the Keck II telescope, and required either long exposure times (∼ 6 hours) to achieve S/N sufficient to measure a bulk [α/Fe] abundance for individual stars with precision to ∼ 0.2 to 0.4 dex <cit.>, or co-addition of ∼ 5 to 7 lower S/N stellar spectra to obtain mean abundance measurements of small groups of stars to a similar precision <cit.>.
Most of the existing M31 abundances are in M31's stellar halo
<cit.>, tidal debris structures
<cit.>,
and satellites <cit.>.
Measurements of [α/Fe] have been made in only one field in M31's stellar disk, at 26 kpc in projected radius; the 10 stars belonging to the disk component with [α/Fe] abundances have an average [α/Fe] of 0.6 dex with a standard deviation in [α/Fe] abundances of 0.28 dex <cit.>.
Even with many hours of integration time on the 10-m Keck telescopes, the M31 stellar α-abundances are not precise enough to detect an α-bimodality like the one seen in the Milky Way, for which a precision of ∼0.05 dex or better is needed. However, such a high abundance precision can be achieved with JWST NIRSpec using the micro-shutter assembly (MSA) to obtain spectra of ∼100 stars in one pointing. Although the spectral resolution is lower (R ∼2700) than traditionally desired for chemical abundance work (R ≥ 20,000), <cit.> showed that precise abundances can be obtained with such resolutions as long as the signal-to-noise ratio (S/N) is high enough. We describe here the initial results of our Cycle 1 JWST NIRspec program to measure chemical abundances in the M31 disk where we find no α-bimodality.
§ THE JWST NIRSPEC M31 DISK PROGRAM
Our JWST Cycle 1 program (2609; PI: Nidever) observed one NIRSpec+MSA field in the southeastern M31 disk at a projected radius of 18 kpc. We used the highest resolution grating G140H with the F100LP blocking filter, which gave us a wavelength coverage of 9,000–18,000 Å. In addition, we observed Milky Way star clusters (M71 and IC166; with NGC 6791 to be observed in summer 2023) to help calibrate and validate the NIRSpec radial velocities (RVs) and chemical abundances. These clusters have ground-based RV and chemical abundance results against which ours can be compared.
We determined that ∼100 stars and an abundance precision of ∼0.05 dex are needed to detect the α-bimodality seen in the Milky Way. Preliminary analysis with synthetic spectra indicated that this abundance precision could be achieved with the JWST NIRSpec resolution and wavelength coverage if a S/N of 70 was obtained.
§ DATA REDUCTION AND ANALYSIS
The JWST science calibration pipeline[<https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview>] automatically processes the 2-D NIRSpec images and performs spectral extraction as well as wavelength calibration. It became immediately clear that there were several problems with the data processing, especially the spectral extraction, which made the 1-D stellar spectra look like a sawtooth pattern. Attempts to rerun the pipeline locally with different parameter settings proved ineffective. Therefore, we developed a new Python software suite called spyderwebb[<https://github.com/dnidever/spyderwebb>] that builds on the JWST calibration pipeline but greatly improves the data reduction. In addition, it adds the capability to determine radial velocities with Doppler <cit.> as well as stellar parameters and chemical abundances with FERRE <cit.>. The main components and improvements in spyderwebb are:
* Better background subtraction.
* Optimal extraction routines with empirical, non-parametric profiles.
* Rejection of outlier pixels.
* Slit-correction of wavelengths.
* Radial velocity determination with Doppler.
* Abundance determination using FERRE.
One revelation while working with the NIRSpec data was that due to the improved performance of the JWST telescope <cit.> the point spread function (PSF) on the NIRSpec detector is smaller than expected – FWHM=0.9 pixels. This means that the PSF is significantly undersampled compared to the nominally required Nyquist sampling of ∼2 pixels per PSF. While this undersampling is not problematic for spectral extraction in the spatial dimension, it does mean that the 1-D spectra cannot be resampled onto a new wavelength scale because there is not enough information to do so. We obtained six dithered exposures in our M31 field and were planning to combine these six “visit” spectra into one combined spectrum for each star. Due to the undersampling, we instead had to “forward model” each visit spectrum using a model spectrum convolved with the correct PSF and sampled onto the observed wavelength scale. Fortunately, this was straightforward to accomplish with Doppler and FERRE.
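To make the forward-modeling step concrete, the sketch below (not the actual spyderwebb implementation) convolves a model spectrum with a Gaussian LSF of the stated FWHM ≈ 0.9 pixels and then samples it at the observed wavelengths; a Gaussian LSF and the array names are our own simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def forward_model(model_wave, model_flux, obs_wave, fwhm_pix=0.9):
    """Convolve a model spectrum with a Gaussian LSF of the given FWHM (in
    detector pixels) and sample it at the observed wavelengths, rather than
    resampling the undersampled observed spectrum."""
    # Put the model on a uniform grid that matches the observed pixel scale
    dpix = np.median(np.diff(obs_wave))
    fine_wave = np.arange(obs_wave[0], obs_wave[-1] + dpix, dpix)
    fine_flux = np.interp(fine_wave, model_wave, model_flux)
    # Gaussian LSF: sigma (pixels) = FWHM / (2 sqrt(2 ln 2)) ~= FWHM / 2.355
    smoothed = gaussian_filter1d(fine_flux, sigma=fwhm_pix / 2.355)
    # Sample (do not rebin) the convolved model at the observed wavelengths
    return np.interp(obs_wave, fine_wave, smoothed)
```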
An unanticipated benefit of the PSF undersampling is that the spectral resolution is higher than expected. For the G140H/F100LP setup, the predicted top resolution is R ∼ 2,700. However, our analyses show that it is actually closer to R ∼ 4,000–5,000. This means that with proper analysis, higher precision is achievable with the RVs and chemical abundances. In the future, it would be advisable for programs to spectrally dither by ∼0.5 pixels to recover full sampling, as is done by APOGEE due to a slight undersampling in the blue part of the spectrum <cit.>.
Radial Velocities:
We used Doppler to determine precise radial velocities and estimates of the stellar parameters. Doppler forward-models a spectrum by convolving a model spectrum with the observed spectrum's line-spread function (LSF). This allows us to handle the undersampled NIRSpec spectra, since the observed spectra do not need to be resampled. Doppler uses a machine-learning <cit.> model trained on a 3-D grid (T_eff, log g, and [Fe/H]) of high-resolution synthetic spectra covering 3,000–18,000 Å. Doppler also has the ability to simultaneously fit multiple spectra of the same star with a single stellar model (“jointfit”), which we used on the multiple visit NIRSpec spectra.
Figure <ref> shows results for the NIRSpec M71 member radial velocities. The left panel shows a histogram with a dispersion of 30 km s^-1, which is substantially higher than the literature value <cit.>. The middle panel shows the M71 stellar RVs versus their predicted NIRSpec shutter position in the spectral dimension. The strong correlation indicates that there is an RV offset due to the position of the stars in the “slit”.
The right panel shows the distribution of the slit-corrected RVs with a significantly smaller scatter of 8 km s^-1.
The statistical uncertainties from Doppler are ≲1 km s^-1, indicating that the scatter is still dominated by systematics. We plan to investigate further ways to improve the RV and wavelength calibration (Tollerud et al., in prep.).
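A minimal sketch of such a slit-position correction is shown below (illustrative only; the correction actually adopted, to appear in Tollerud et al., in prep., may differ): fit the linear RV trend with predicted shutter position and remove it while preserving the sample mean.

```python
import numpy as np

def slit_correct_rv(rv, slit_pos):
    """Fit and remove the linear trend of RV with predicted shutter position
    (the spectral dimension), preserving the sample's mean velocity."""
    coeffs = np.polyfit(slit_pos, rv, 1)     # slope, intercept
    trend = np.polyval(coeffs, slit_pos)
    return rv - trend + np.mean(rv)
```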
Abundances:
We used FERRE to determine the stellar parameters, metallicity and α-abundance. The 4-D grid in T_eff, log g, [M/H] and [α/M] was generated for wavelengths 9,000–18,000 Å using the <cit.> spectral synthesis package with Kurucz model atmospheres <cit.>. FERRE was run in a mode where the synthetic spectrum is convolved with the correct LSF and resampled onto the wavelength scale of the observed spectrum.
Figure <ref> shows example JWST NIRSpec spectra (black) and their best-fit synthetic spectrum illustrating that reliable stellar parameters can be obtained.
§ M31 ABUNDANCES
Figure <ref> shows a comparison of the APOGEE red clump Milky Way α-abundances <cit.> on the left and our NIRSpec M31 α-abundances on the right. While the MW has a prominent low-α population that extends from [Fe/H]=-0.6 to +0.5, that population does not exist in our M31 sample. In fact, almost the entire stellar sample can be described by a single track similar to that of the MW's high-α population which is associated with old stars in the thick disk and a high SFR. Therefore, there is no α-abundance bimodality in the M31 disk.
While this finding is somewhat surprising, it is consistent with other recent results. <cit.> used the PHAT photometric survey to find that the M31 southeastern disk is dominated by a thick component with a scale-height of 0.77 kpc, which is similar to the Milky Way's thick disk <cit.>.
In addition, <cit.> found a higher velocity dispersion in the M31 disk of ∼60 km s^-1, which is several times higher than the velocity dispersion in the MW's older components.
This difference between the MW and M31's disks is likely due to M31's higher merger and accretion rate. Evidence of this includes the
Giant Southern Stream <cit.> and the metal-rich inner halo that was likely produced by a recent merger only ∼2 Gyr ago <cit.>.
Therefore, it is understandable that the M31 disk is dominated by a thick component.
What does this mean for the α-bimodality models previously mentioned? While mergers and interactions will “puff-up” or dynamically heat a stellar disk, this will not change the chemistry of the already-existing stars or change internal processes such as basic chemical evolution. However, the mergers might have changed certain conditions required by the models. For example, the <cit.> and <cit.> models require a radial metallicity gradient. While the MW has a fairly strong gradient <cit.>, M31's is a factor of ∼3.4 lower <cit.>, which has been attributed to mergers and interactions.
These models also require radial migration to create the low-α sequence that is very extended in metallicity, but radial migration is generally less efficient in a dynamically hot disk <cit.>.
On the other hand, the clumpy star formation model requires conditions that allow for the clump instability. It's quite possible that these conditions were not met in the M31 disk due to the active merger history. Therefore, all of the models will need to be tested in the conditions of M31 to ascertain if they can explain the existence of a bimodality in the MW but the lack of one in M31.
§ CONCLUSIONS
We present our initial results from our JWST NIRSpec M31 disk project. We observed one field and obtained high-S/N spectra of 103 RGB stars. For these stars, we were able to determine stellar parameters and precise α-abundances. While the Milky Way has two α-abundance sequences (low-α and high-α) with an α-bimodality at intermediate metallicities, our M31 results show nothing like the MW's low-α sequence or the α-bimodality. In fact, the M31 abundances can be explained by a single high-α population formed with a high star formation rate. These results are consistent with other recent findings in the literature that conclude that the M31 disk is dominated by a thick, high velocity dispersion stellar population. The difference between the MW and M31 disks is likely driven by the higher merger and accretion rate of M31. These contrasts suggest that the dominant processes at work in forming the chemistry and structure of M31's disk were somewhat different from those in the Milky Way.
While calibration work is still needed to realize its full potential, we believe that the capabilities of JWST with NIRSpec/MSA for stellar spectroscopy work will produce precise radial velocities and chemical abundances for many galaxies in the Local Group and beyond and will produce important advances in our understanding of galaxy formation and evolution.
§ ACKNOWLEDGEMENTS
We thank the conference organizers for putting together such an interesting conference. Our greatest thanks goes to the JWST, NIRSpec and STScI teams for constructing such an amazing telescope and instrument.
[Allende-Prieto & Team(2023)]FERRE Allende-Prieto, C. & Team, A. 2023, Astrophysics Source Code Library. ascl:2301.016
[Barth et al.(2020)]Barth2020 Barth, N. A., Gerber, J. M., Boberg, O. M., et al. 2020, , 494, 4548. doi:10.1093/mnras/staa1019
[Bland-Hawthorn & Gerhard(2016)]BlandHawthorn2016 Bland-Hawthorn, J. & Gerhard, O. 2016, , 54, 529. doi:10.1146/annurev-astro-081915-023441
[Bovy et al.(2014)]Bovy2014 Bovy, J., Nidever, D. L., Rix, H.-W., et al. 2014, , 790, 127. doi:10.1088/0004-637X/790/2/127
[Buck(2020)]Buck2020 Buck, T. 2020, , 491, 5435. doi:10.1093/mnras/stz3289
[Casey et al.(2016)]Casey2016 Casey, A. R., Hogg, D. W., Ness, M., et al. 2016, arXiv:1603.03040. doi:10.48550/arXiv.1603.03040
[Chiappini, Matteucci, & Gratton(1997)]Chiappini1997 Chiappini, C., Matteucci, F., & Gratton, R. 1997, , 477, 765.
[Clarke et al.(2019)]Clarke2019 Clarke, A. J., Debattista, V. P., Nidever, D. L., et al. 2019, , 484, 3476.
[Dalcanton et al.(2023)]Dalcanton2023 Dalcanton, J. J., Bell, E. F., Choi, Y., et al. 2023, arXiv:2304.08613. doi:10.48550/arXiv.2304.08613
[D'Souza & Bell(2018)]DSouzaBell2018 D'Souza, R. & Bell, E. F. 2018, Nature Astronomy, 2, 737. doi:10.1038/s41550-018-0533-x
[Donor et al.(2020)]Donor2020 Donor, J., Frinchaboy, P. M., Cunha, K., et al. 2020, , 159, 199. doi:10.3847/1538-3881/ab77bc
[Dorman et al.(2015)]Dorman2015 Dorman, C. E., Guhathakurta, P., Seth, A. C., et al. 2015, , 803, 24. doi:10.1088/0004-637X/803/1/24
[Escala et al.(2019)]escala2019 Escala, I., Kirby, E. N., Gilbert, K. M., et al. 2019, , 878, 42. doi:10.3847/1538-4357/ab1eac
[Escala et al.(2020)]escala2020dhs Escala, I., Gilbert, K. M., Kirby, E. N., et al. 2020, , 889, 177. doi:10.3847/1538-4357/ab6659
[Escala et al.(2020)]escala2020 Escala, I., Kirby, E. N., Gilbert, K. M., et al. 2020, , 902, 51. doi:10.3847/1538-4357/abb474
[Escala et al.(2021)]escala2021 Escala, I., Gilbert, K. M., Wojno, J., et al. 2021, , 162, 45. doi:10.3847/1538-3881/abfec4
[Escala et al.(2023)]Escala2023 Escala, I., Quirk, A. C. N., Guhathakurta, P., et al. 2023, , 165, 75. doi:10.3847/1538-3881/aca9cd
[Fuhrmann(1998)]Fuhrmann1998 Fuhrmann, K. 1998, , 338, 161
[Fuhrmann(2011)]Fuhrmann2011 Fuhrmann, K. 2011, , 414, 2893. doi:10.1111/j.1365-2966.2011.18476.x
[Gaia Collaboration et al.(2021)]gaia Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, , 649, A1.
[Gilbert et al.(2019)]gilbert2019 Gilbert, K. M., Kirby, E. N., Escala, I., et al. 2019, , 883, 128. doi:10.3847/1538-4357/ab3807
[Gilbert et al.(2020)]gilbert2020 Gilbert, K. M., Wojno, J., Kirby, E. N., et al. 2020, , 160, 41. doi:10.3847/1538-3881/ab9602
[Gilmore & Reid(1983)]Gilmore1983 Gilmore, G. & Reid, N. 1983, , 202, 1025. doi:10.1093/mnras/202.4.1025
[Gregersen et al.(2015)]Gregersen2015 Gregersen, D., Seth, A. C., Williams, B. F., et al. 2015, , 150, 189. doi:10.1088/0004-6256/150/6/189
[Hammer et al.(2018)]Hammer2018 Hammer, F., Yang, Y. B., Wang, J. L., et al. 2018, , 475, 2754. doi:10.1093/mnras/stx3343
[Hayden et al.(2015)]Hayden2015 Hayden, M. R., Bovy, J., Holtzman, J. A., et al. 2015, , 808, 132.
[Haywood et al.(2016)]Haywood2016 Haywood, M., Lehnert, M. D., Di Matteo, P., et al. 2016, , 589, A66.
[Hubeny & Lanz(2017)]Hubeny2017 Hubeny, I. & Lanz, T. 2017, arXiv:1706.01859. doi:10.48550/arXiv.1706.01859
[Ibata et al.(2001)]Ibata2001 Ibata, R., Irwin, M., Lewis, G., et al. 2001, , 412, 49. doi:10.1038/35083506
[Khoperskov et al.(2021)]Khoperskov2021 Khoperskov, S., Haywood, M., Snaith, O., et al. 2021, , 501, 5176. doi:10.1093/mnras/staa3996
[Kirby et al.(2020)]kirby2020 Kirby, E. N., Gilbert, K. M., Escala, I., et al. 2020, , 159, 46. doi:10.3847/1538-3881/ab5f0f
[Kurucz(2005)]Kurucz2005 Kurucz R. L., 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 14
[Mackereth et al.(2018)]Mackereth2018 Mackereth, J. T., Crain, R. A., Schiavon, R. P., et al. 2018, , 477, 5072.
[Majewski et al.(2017)]Majewski2017 Majewski, S. R., Schiavon, R. P., Frinchaboy, P. M., et al. 2017, , 154, 94. doi:10.3847/1538-3881/aa784d
[Nidever et al.(2014)]Nidever2014 Nidever, D. L., Bovy, J., Bird, J. C., et al. 2014, ApJ, 796, 38. doi:10.1088/0004-637X/796/1/38
[Nidever et al.(2015)]Nidever2015 Nidever, D. L., Holtzman, J. A., Allende Prieto, C., et al. 2015, , 150, 173. doi:10.1088/0004-6256/150/6/173
[Nidever(2021)]Doppler Nidever, D. 2021, 10.5281/zenodo.4906681
[Peña & Flores-Durán(2019)]Pena2019 Peña, M. & Flores-Durán, S. N. 2019, RMxAA, 55, 255. doi:10.22201/ia.01851101p.2019.55.02.13
[Reddy et al.(2006)]Reddy2006 Reddy, B. E., Lambert, D. L., & Allende Prieto, C. 2006, , 367, 1329. doi:10.1111/j.1365-2966.2006.10148.x
[Rigby et al.(2023)]Rigby2023 Rigby, J., Perrin, M., McElwain, M., et al. 2023, , 135, 048001. doi:10.1088/1538-3873/acb293
[Schönrich & Binney(2009)]Schoenrich2009 Schönrich, R., & Binney, J. 2009, , 396, 203.
[Sellwood & Binney(2002)]Sellwood2002 Sellwood, J. A. & Binney, J. J. 2002, , 336, 785. doi:10.1046/j.1365-8711.2002.05806.x
[Sharma et al.(2021)]Sharma2021 Sharma, S., Hayden, M. R., & Bland-Hawthorn, J. 2021, , 507, 5882. doi:10.1093/mnras/stab2015
[Solway et al.(2012)]Solway2012 Solway, M., Sellwood, J. A., & Schönrich, R. 2012, , 422, 1363. doi:10.1111/j.1365-2966.2012.20712.x
[Ting et al.(2017)]Ting2017 Ting, Y.-S., Conroy, C., Rix, H.-W., et al. 2017, , 843, 32. doi:10.3847/1538-4357/aa7688
[Wilson et al.(2019)]Wilson2019 Wilson, J. C., Hearty, F. R., Skrutskie, M. F., et al. 2019, , 131, 055001. doi:10.1088/1538-3873/ab0075
[Wojno et al.(2020)]wojno2020 Wojno, J., Gilbert, K. M., Kirby, E. N., et al. 2020, , 895, 78. doi:10.3847/1538-4357/ab8ccb
[Wojno et al.(2022)]wojno2022 Wojno, J. L., Gilbert, K. M., Kirby, E. N., et al. 2022, arXiv:2211.15288. doi:10.48550/arXiv.2211.15288
[Yoshii(1982)]Yoshii1982 Yoshii, Y. 1982, PASJ, 34, 365
|
http://arxiv.org/abs/2306.08041v1
|
20230613180118
|
On Faking a Nash Equilibrium
|
[
"Young Wu",
"Jeremy McMahan",
"Xiaojin Zhu",
"Qiaomin Xie"
] |
cs.MA
|
[
"cs.MA",
"cs.AI",
"cs.CR",
"cs.GT",
"cs.LG"
] |
On Faking a Nash Equilibrium
Young Wu, Jeremy McMahan, Xiaojin Zhu, Qiaomin Xie
July 31, 2023
============================================================================================================
We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may change a data set in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium. We propose the unique Nash set, namely the set of games, specified by their Q functions, with a specific joint policy being the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because the attack is successful if and only if data poisoning pushes all plausible games inside it. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the inverse Nash set and the set of plausible games induced by data are polytopes in the Q function space. We exhibit a linear program to efficiently compute the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms.
§ INTRODUCTION
Data poisoning attacks are well-known in supervised learning (intentionally forcing the learner to train a wrong classifier) and reinforcement learning (wrong policy) <cit.>.
Can data poisoning attacks be a threat to Markov Games, too?
This paper answers this question in the affirmative:
Under mild conditions, an attacker can force two game-playing agents to adopt any fictitious Nash Equilibrium (NE), which does not need to be a true NE of the original Markov Game.
Furthermore, the attacker can achieve this goal while minimizing its attack cost, which we define below.
Obviously, such power poses a threat to the security of Multi-Agent Reinforcement Learning (MARL).
Formally, we study two-player zero-sum offline MARL.
Let D be a dataset {(s^(k), a^(k), r^(k))}_k=1^K with K tuples of state s, joint action a = (a_1, a_2), and rewards (r, -r).
The attacker's target NE is an arbitrary pure strategy pair π^† := (π^†_1, π^†_2).
The attacker can poison D into another dataset D^† by paying cost C(D,D^†).
Two MARL agents then receive D^† instead of D.
The attacker wants to ensure that the agents learn the target NE π^† while minimizing C.
This problem is not well studied in the literature. Naive approaches – such as modifying all the actions in the dataset to those specified by the target policy (π^†_1, π^†_2) – might not achieve the goal for MARL learners who assign penalties due to the lack of data coverage.
Modifying all the rewards in the dataset that coincides with the target policy to the reward upper bound might be feasible, but would not be optimal in terms of attack cost C.
Results on data poisoning against single-agent reinforcement learning also cannot be directly applied to the multi-agent case. In particular, there are no optimal policies in MARL, and equilibrium policies are computed instead. There could be multiple equilibria that are significantly different, and as a result, installing a target policy as the unique equilibrium is difficult.
Adversarial attacks on MARL have been studied in <cit.>, but we are only aware of one previous work <cit.> on offline reward poisoning against MARL.
Nonetheless, they made the strong assumption that the learners compute the Dominant Strategy Markov Perfect Equilibrium (DSMPE).
In contrast, we assume a weaker solution concept, Markov Perfect Equilibrium (MPE).
Our general attack framework also accommodates other forms of data poisoning.
Our framework can be summarized by the mnemonic “ToM moves to the UN”.
(i) UN stands for the Unique Nash set, which is the set of Q functions that make the target π^† the unique NE.
Uniqueness is crucial for the attacker to ensure that MARL agents choose the target NE with certainty, and not breaking ties arbitrarily among multiple NEs.
(ii) ToM stands for the attacker's Theory of Mind of the MARL agents, namely
the plausible set of Q functions
that the attacker believes
the agents will entertain upon receiving the poisoned dataset D^†.
(iii) The attack is successful if, by controlling D^†, the attacker can move the ToM set inside the UN set.
A successful attack with the smallest cost C(D,D^†) is optimal.
Summary of Contributions:
* We show that the set of zero-sum Markov games for which a deterministic policy is the unique MPE is equivalent to the set of games for which the policy is a strict MPE, and can be characterized by a polytope in the Q function space.
* We describe a class of MARL learners that compute equilibrium policies based on games within confidence regions around a point estimate of the Q function of the Markov game. With appropriate parameters, an attack on these learners would work on most of the model-based and model-free offline MARL learners proposed in the literature.
* We convert a version of the reward poisoning problem to a linear program that can be solved efficiently, and we provide an attack that is always feasible as long as the sizes of the attacker's confidence regions are sufficiently small.
* We provide a unified framework for offline data poisoning attacks on MARL agents. Our results highlight a security threat to multi-agent reinforcement learning agents, a necessary step before one can design novel MARL algorithms robust to adversarial attacks.
§ FAKING A NASH EQUILIBRIUM
§.§ The Unique Nash Set (UN) of a Normal-form Game
We present the main components of our approach with a normal-form game, in particular, a two-player zero-sum game is a tuple (, R), where = × is the joint action space and R : →[-b, b] is the mean reward function. We use b = ∞ in the case of unbounded rewards. Given , we denote the set of reward functions by = { R : →ℝ}.
A pure strategy profile π = (, ) is a pair of actions, where ∈ specifies the action for agent i ∈{1, 2}. We focus on pure strategies, but we allow mixed strategies in which case we use the notation () to represent the probability of i using the action ∈, and R computes the expected reward R(π) ∑_a_1∈, a_2∈(a_1) (a_2) R((a_1, a_2)).
[Nash Equilibrium]
A Nash equilibrium (NE) of a normal-form game (, R) is a mixed strategy profile π that satisfies,
R((, a_2)) = R(π) = R((a_1, )), ∀ a_1 : (a_1) > 0, a_2 : (a_2) > 0,
R((, a_2)) ≤ R(π) ≤ R((a_1, )), ∀ a_1 : (a_1) = 0, a_2 : (a_2) = 0,
in particular, for a pure strategy profile π, it is a Nash equilibrium if,
R((, a_2)) ≤ R(π) ≤ R((a_1, )), ∀ a_1≠, a_2≠.
We define (R) {π : π is an NE of (, R) } to be the set of all Nash equilibria of a normal-form game (, R).
Now, we define the inverse image of from a single pure strategy profile π back to the space of reward functions to be the unique Nash set.
[Unique Nash]
The unique Nash set of a pure strategy profile π is the set of reward functions R such that (, R) has a unique Nash equilibrium π,
(π) ^-1({π}) = { R ∈ : (R) = {π}}.
To characterize (π), we note that for normal-form games, a pure strategy profile π is the unique Nash equilibrium of a game if and only if it is a strict Nash equilibrium, which is defined as a policy π that satisfies (<ref>) with strict inequalities.
For any pure strategy profile π,
(π) = { R ∈ : π is a strict NE of (, R) }
= { R ∈ : R((, a_2)) < R(π) < R((a_1, )), ∀ a_1≠, a_2≠}.
Here, the uniqueness is among all Nash equilibria including mixed-strategy Nash equilibria. The proof of the equivalence between (<ref>) and (<ref>) is in the appendix. We restrict our attention to pure-strategy equilibria and defer the discussion of mixed strategy profiles to the last section.
To avoid working with strict inequalities, we define a closed subset of (π) of reward functions that lead to strict Nash equilibria with an ι reward gap, which means all strict inequalities in (<ref>) are satisfied with a gap of at least ι, for some ι > 0.
[Iota Strict Unique Nash]
For ι > 0, the ι strict unique Nash set of a pure strategy profile π is,
(π; ι) { R ∈ : R((, a_2)) + ι≤ R(π) ≤ R((a_1, )) - ι, ∀ a_1≠, a_2≠}.
For every pure strategy profile π and ι > 0, we have (π; ι) ⊂(π), and the set is a polytope in .
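As a quick illustration of this characterization, the snippet below (helper and variable names are ours) tests membership of a reward matrix in the ι-strict unique Nash set of a pure profile. One convention has to be fixed: here player 1 maximizes R and player 2 minimizes it, consistent with the Q-function definition and the feasibility construction used later.

```python
import numpy as np

def in_unique_nash_set(R, a1_t, a2_t, iota=0.0):
    """Check that (a1_t, a2_t) is an iota-strict saddle point of the zero-sum
    reward matrix R (player 1 maximizes R, player 2 minimizes R):
      R[a1, a2_t] <= R[a1_t, a2_t] - iota  for all a1 != a1_t,
      R[a1_t, a2] >= R[a1_t, a2_t] + iota  for all a2 != a2_t.
    With iota > 0 this certifies a strict, hence unique, pure Nash equilibrium."""
    v = R[a1_t, a2_t]
    p1_ok = all(R[a1, a2_t] <= v - iota for a1 in range(R.shape[0]) if a1 != a1_t)
    p2_ok = all(R[a1_t, a2] >= v + iota for a2 in range(R.shape[1]) if a2 != a2_t)
    return p1_ok and p2_ok

# Example mirroring the feasibility construction below: value 0 at the target,
# -b for player-1 deviations, +b for player-2 deviations
b = 1.0
R = np.array([[ 0.0,  b,    b  ],
              [-b,    0.2, -0.3],
              [-b,   -0.1,  0.4]])
print(in_unique_nash_set(R, 0, 0, iota=0.5))   # True
```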
§.§ The Attacker's Theory of Mind (ToM) for Offline Normal-form Game Learners
We provide a model of the attacker's theory of mind of the victim. We assume that the victims compute the Nash equilibria based on the reward functions estimated from a dataset D ∈, where is the set of possible datasets with K episodes in the form {(, )}_k=1^K , with ∈ and ∈[-b, b] for every k ∈[K].
[Theory of Mind]
Given a dataset D ∈, the theory-of-mind set (D) ⊆ is the set of plausible reward functions that the victims estimate based on D to compute their equilibria. In particular, if the victims learn an action profile π, then π∈⋃_R ∈(D)(R).
The theory-of-mind sets can be arbitrary and could be difficult to work with. We define an outer approximation the set that is a hypercube in .
[Outer Approximation of Theory of Mind]
An outer approximation of (D) is a set denoted by (D) that satisfies (D) ⊆(D) for every D ∈, and can be written in the form,
(D) = { R ∈ : | R(a) - R̂(a) | ≤(a), ∀ a∈},
for some point estimate R̂ and radius .
We call (D) a linear outer approximation if R̂ is linear in {}_k=1^K.
We present a few examples of the theory-of-mind sets as follows.
[Theory of Mind for Maximum Likelihood Victims]
Given a dataset D ∈, if the attacker believes the victims are maximum likelihood learners, then (D) is a singleton R^ MLE, where, for every a∈,
R^ MLE (a) = 1/N(a)∑_k=1^K1_{a^(k) = a} r^(k) if N(a) > 0
0 if N(a) = 0
; N(a) = ∑_k=1^K1_{a^(k) = a}.
The smallest outer approximation (D) can be specified using R̂ = R^ MLE and = 0, and is linear since (<ref>) is linear in {}_k=1^K .
[Theory of Mind for Pessimistic Optimistic Victims]
Given a dataset D ∈, if the attacker believes the victims are learners that use pessimism and optimism by adding and subtracting bonus terms and estimating one or two games, as in <cit.>, then (D) may contain two reward functions R and R, where for every a∈,
R(a) = R^ MLE (a) - β(a); R(a) = R^ MLE (a) + β(a),
with β(a) = c/√(N(a)) being the bonus term, for some constant c.
The smallest outer approximation (D) can be specified using R̂ = R^ MLE and (a) = β(a) for every a∈, and is linear since (<ref>) and (<ref>) are both linear in {}_k=1^K .
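The two examples above can be combined into a single hypercube construction. The sketch below (function and variable names are ours) computes the point estimate R̂ = R^MLE and the radius β(a) = c/√N(a) from a dataset of (joint action, reward) pairs; the fallback used for unobserved actions (radius equal to the reward bound b) is our own assumption.

```python
import numpy as np
from collections import defaultdict

def tom_hypercube(data, actions, c=1.0, b=1.0):
    """Outer approximation of the theory-of-mind set as a hypercube:
    R_hat = per-action mean reward (MLE), radius = c / sqrt(N(a)),
    with radius b when the action is never observed (assumed fallback).
    `data` is a list of (joint_action, reward) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for a, r in data:
        totals[a] += r
        counts[a] += 1
    R_hat, radius = {}, {}
    for a in actions:
        n = counts[a]
        R_hat[a] = totals[a] / n if n > 0 else 0.0
        radius[a] = c / np.sqrt(n) if n > 0 else b
    return R_hat, radius
```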
[Theory of Mind for Data Splitting Victims]
Given a dataset D ∈, if the attacker believes the victims use maximum likelihood estimates on a subsample of the D, similar to the data-splitting procedure in <cit.>, then (D) could be viewed as a high-probability set of rewards that the victims are estimating and would be half of the confidence interval width for the mean of the subsample around the mean of the complete dataset R^ MLE.
§.§ The Cheapest Way to Move ToM into UN for Normal-form Games
The goal of the attacker is to install a specific action profile as the unique Nash equilibrium of the game learned by the victim while minimally modifying the training data. We consider a general attacker's cost as a function C : ×→ℝ^+ where C(D, ) is the cost of modifying the dataset from D to . Given the original data set D ∈, the attacker's attack modality (D) is the set of datasets the attacker is allowed to modify the original dataset to. For the reward poisoning problem, where ^(R)(D) is all possible datasets in which only rewards are modified from to , we consider the following cost function.
[L_1 Cost Function]
For reward poisoning problems, we define the L_1 cost of modifying the dataset from D = {(a^(k), r^(k))}_k=1^K to D̃ = {(a^(k), r̃^(k))}_k=1^K by C^(1)(D, D̃) = ∑_k=1^K|r̃^(k) - r^(k)|.
Now, given the original dataset D and the attacker's target action profile , we formally state the attacker's problem as finding the cheapest way to move (D) into ().
[Attacker's Problem]
The attacker's problem with the target action profile is,
inf_∈(D) C(D, )
s.t. () ⊆().
In general, (<ref>) cannot be solved efficiently, but for reward poisoning problems with L_1 cost objective, we can relax the attacker's problem using ι strict unique Nash sets, which is a polytope described by (<ref>), and a linear outer approximation of the theory-of-mind set, a hypercube described by (<ref>), which can be converted into a linear program and solved efficiently. We state this observation as the following proposition and depict the relationship between the sets in Figure <ref>.
Given ι > 0 and a linear , the following problem is a relaxation of the attacker's reward poisoning problem and can be converted into a linear program,
min_∈^(R)(D) C^(1)(D, )
s.t. () ⊆(; ι).
In Figure <ref>, given a dataset D, the general attacker's problem (<ref>) of moving (D) (light green) to () (light red) such that it is inside () (light blue) while minimizing the distance from D to is often intractable. We construct a relaxed problem (<ref>) of moving (D) (green) to () (red) such that it is inside () (blue), in which all sets are polytopes and thus can be converted to a linear program for linear costs and linear theory-of-mind mappings.
In the appendix, we provide the complete linear program and show that the solution of (<ref>) is feasible for (<ref>). The optimality of the linear program solution depends on how close the outer approximation of the theory-of-mind set is, and in the case when the theory-of-mind set is already a hypercube, the infimum in (<ref>) can be achieved by taking the limit as ι→ 0. The following is an example illustrating the conversion of (<ref>) into a linear program.
[Maximum Likelihood Centered Linear Program]
In the case R̂ = R^ MLE in the theory-of-mind set, (<ref>) is given by,
min_∈[-b, b]^K ∑_k=1^K| - |
s.t. R^ MLE is a linear function of satisfying (<ref>)
R and R are upper and lower bounds of (; R^ MLE ) satisfying (<ref>)
(R, R) is in () satisfying (<ref>)
Since (; R^ MLE ) is a hypercube and () is a polytope, the fact that the corners of the hypercube are inside the unique Nash set if and only if every element in the hypercube is in the unique Nash set implies that the constraint in (<ref>) is satisfied. Technically, we only require one corner of the hypercube to be inside the unique Nash polytope, as shown in Figure <ref>, and we leave the details to the proof of Proposition <ref> in the appendix. Then, because the objective and all of the constraints in (<ref>) are linear in , R, R and R^ MLE, this problem is a linear program.
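A minimal end-to-end sketch of this linear program for a small normal-form game is given below, using scipy.optimize.linprog. It follows the corner argument above: the MLE reward of every cell in the target row and column, shifted by its ToM radius, must respect the ι-strict saddle-point inequalities (again in the convention where player 1 maximizes R). All function and variable names are ours, and the tiny dataset at the end is purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def poison_rewards(actions_data, rewards_data, all_actions, target, eps, iota=0.1, b=1.0):
    """Solve the MLE-centred reward-poisoning LP (a sketch of the relaxation above):
    find poisoned rewards minimizing sum_k |r_tilde_k - r_k| such that every reward
    function in the hypercube [R_MLE - eps, R_MLE + eps] keeps `target` as an
    iota-strict (hence unique) NE.  Requires every target-row/column cell to be
    observed at least once."""
    K = len(rewards_data)
    counts = {a: sum(1 for ak in actions_data if ak == a) for a in all_actions}
    # w[a][k] = 1{a_k = a}/N(a), so R_MLE(a) = sum_k w[a][k] * r_tilde_k (linear in r_tilde)
    w = {a: np.array([1.0 / counts[a] if ak == a else 0.0 for ak in actions_data])
         for a in all_actions}
    a1_t, a2_t = target
    A_ub, b_ub = [], []
    # |r_tilde_k - r_k| <= t_k  (auxiliary variables t occupy indices K..2K-1)
    for k in range(K):
        row = np.zeros(2 * K); row[k] = 1.0;  row[K + k] = -1.0
        A_ub.append(row); b_ub.append(rewards_data[k])
        row = np.zeros(2 * K); row[k] = -1.0; row[K + k] = -1.0
        A_ub.append(row); b_ub.append(-rewards_data[k])
    # deviations must lose for every corner of the ToM hypercube
    for a1, a2 in all_actions:
        if a2 == a2_t and a1 != a1_t:   # player-1 deviation
            row = np.zeros(2 * K); row[:K] = w[(a1, a2_t)] - w[target]
            A_ub.append(row); b_ub.append(-iota - eps[(a1, a2_t)] - eps[target])
        if a1 == a1_t and a2 != a2_t:   # player-2 deviation
            row = np.zeros(2 * K); row[:K] = w[target] - w[(a1_t, a2)]
            A_ub.append(row); b_ub.append(-iota - eps[(a1_t, a2)] - eps[target])
    c = np.concatenate([np.zeros(K), np.ones(K)])
    bounds = [(-b, b)] * K + [(0, None)] * K
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:K] if res.success else None

# Tiny example: 2x2 game, every joint action observed twice, target profile (0, 0)
acts = [(0, 0), (0, 0), (0, 1), (0, 1), (1, 0), (1, 0), (1, 1), (1, 1)]
rews = [0.0, 0.1, 0.2, 0.0, 0.1, -0.1, 0.0, 0.0]
eps = {a: 0.05 for a in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(poison_rewards(acts, rews, list(eps), (0, 0), eps, iota=0.1, b=1.0))
```

Only rewards in the target row and column are constrained, so the optimal solution leaves the remaining entries of the dataset untouched.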
§ FAKING A MARKOV PERFECT EQUILIBRIUM
§.§ The Unique Nash Set (UN) of a Markov Game
We now consider the attacker's problem for Markov games. A finite-horizon two-player zero-sum Markov game G is a tuple (, , P, R, H), where is the finite state space; = × is the joint action space; P = { : ×→Δ}_h=1^H is the transition function with the initial state distribution P_0∈Δ; and R = {R_h : ×→[-b ,b]}_h=1^H is the mean reward function; and H is the finite time horizon.
A deterministic Markovian policy π = (, ) is a pair of policies, where = { : →}_h=1^H for i ∈{1, 2}, and (s) specifies the action used in period h and state s. Again, we focus on deterministic policies, but we allow stochastic policies in which case we use the notation = { : →Δ}_h=1^H for i ∈{1, 2}, and (s)() represent the probability of i using the action ∈ in period h state s.
The Q function is defined as, for every h ∈[H], s ∈, a∈,
Q_h(s, a) = R_h(s, a) + ∑_s' ∈ S P_h(s' | s, a) max_π_1∈Δ min_π_2∈Δ Q_h+1(s', π),
with the convention Q_H+1(s, a) = 0, and in the case π is stochastic, we write,
Q_h(s, π_h(s)) = ∑_a_1∈ A_1∑_a_2∈ A_2 π_1,h(s)(a_1) π_2,h(s)(a_2) Q_h(s, (a_1, a_2)).
Given , , H, we denote the set of Q functions by = {{ : ×→ℝ}_h=1^H}. Technically, is not the set of proper Q functions of Markov games since both the reward functions and the transition functions do not have to be proper, and given Q ∈, we may not be able to construct a Markov game that induces Q. This choice is made to accommodate both model-based and model-free victims who may or may not estimate the rewards and transitions explicitly from the dataset.
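For concreteness, a backward-induction sketch that produces proper Q functions from explicit rewards and transitions is shown below; the value of each next-stage matrix game is computed with the standard zero-sum LP (row player maximizes). The data layout (nested lists indexed by period and state) is our own choice and is not prescribed here.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(M):
    """Value of the zero-sum matrix game M (row player maximizes), via the
    standard LP: max v s.t. p^T M[:, j] >= v for all j, p a distribution."""
    n1, n2 = M.shape
    c = np.concatenate([np.zeros(n1), [-1.0]])        # variables (p, v); minimize -v
    A_ub = np.hstack([-M.T, np.ones((n2, 1))])        # v - p^T M[:, j] <= 0
    b_ub = np.zeros(n2)
    A_eq = np.concatenate([np.ones(n1), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n1 + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def q_functions(R, P, H):
    """Backward induction for the Q functions of a finite-horizon zero-sum Markov
    game: R[h][s] is an (|A1| x |A2|) reward matrix, P[h][s][a1][a2] is an array
    over next states; the terminal Q (period H, 0-indexed) is identically zero."""
    S = len(R[0])
    n1, n2 = R[0][0].shape
    Q = [np.zeros((S, n1, n2)) for _ in range(H + 1)]
    for h in range(H - 1, -1, -1):
        vals = np.array([zero_sum_value(Q[h + 1][s]) for s in range(S)])
        for s in range(S):
            for a1 in range(n1):
                for a2 in range(n2):
                    Q[h][s, a1, a2] = R[h][s][a1, a2] + P[h][s][a1][a2] @ vals
    return Q[:H]
```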
A stage game of a Markov game G in period h ∈[H], state s ∈ under policy π is a normal form game (, (s)), where is the joint action space of G; and (s) is the mean reward function, meaning the reward from action profile a∈ is (s, a). We define Markov perfect equilibria as policies in which the action profile used in every stage game is a Nash equilibrium.
[Markov Perfect Equilibrium]
A Markov perfect equilibrium (MPE) policy π is a policy such that (s) is a Nash equilibrium in the stage game (, (s)).
We define the set of all Markov perfect equilibria policies of a Markov game that induces Q ∈ by (Q) = {π : π is an MPE of a Markov game with Q function Q }.
We note that Nash equilibria for Markov games can also be defined by converting the Markov game into a single normal-form game, but we only consider Markov perfect equilibria since Nash equilibria that are not Markov perfect require coordination and commitment to policies in stage games that are not visited along equilibrium paths, which is not realistic in the multi-agent reinforcement learning setting.
We define the unique Nash set for Markov games as follows.
[Unique Nash]
The unique Nash set of a deterministic Markovian policy π for a Markov game G is the set of Q functions such that π is the unique Markov perfect equilibrium under policy π,
(π) ^-1({π}) = {Q ∈ : (Q) = {π}}.
Next, we extend the characterization of the unique Nash set for normal-form games to the Markov game setting.
For any deterministic policy π,
(π) = { Q ∈ : (s) is a strict NE of (, (s)), ∀ h ∈[H], s ∈}
= { Q ∈ : (s, ((s), a_2)) < (s, π(s)) < (s, (a_1, (s))),
∀ a_1≠(s), a_2≠(s), h ∈[H], s ∈
},
We show the equivalence between (<ref>) and (<ref>) in the proof of Theorem <ref> in the appendix. To avoid working with strict inequalities in (<ref>), we again define the ι strict version of the unique Nash polytope.
[Iota Strict Unique Nash]
For ι > 0, the ι strict unique Nash set of a deterministic policy π is,
(π; ι) { Q ∈ : (s, ((s), a_2)) + ι≤(s, π(s)),
(s, π(s)) ≤(s, (a_1, (s))) - ι,
∀ a_1≠(s), a_2≠(s), h ∈[H], s ∈
}.
For every deterministic policy π and ι > 0, we have (π; ι) ⊂(π), and the set is a polytope in .
§.§ The Attacker's Theory of Mind (ToM) for Offline Multi-Agent Reinforcement Learners
Similar to the theory-of-mind set for normal-form game learners, we define the set for Markov game learners in the space. Here, is the set of datasets with K episodes in the form {{(s_h^(k), a_h^(k), r_h^(k))}_h=1^H}_k=1^K with s_h^(k) ∈ S, a_h^(k) ∈ A and r_h^(k) ∈[-b, b] for every k ∈[K], and the victims compute the Markov perfect equilibria based on the Q functions estimated from such datasets.
[Theory of Mind]
Given a dataset D ∈, the theory-of-mind set (D) ⊆ is the set of Q functions that the victims estimate based on D to compute their equilibria. In particular, if the victims learn a policy π, then π∈⋃_Q ∈(D)(Q).
[Theory of Mind for Maximum Likelihood Victims]
To extend Example <ref> in the Markov game setting, we define R^ MLE the same way and P^ MLE as follows,
R_h^ MLE (s, a) = 1/N_h(s, a)∑_k=1^K1_{s_h^(k) = s, a_h^(k) = a} r_h^(k) if N_h(s, a) > 0
0 if N_h(s, a) = 0
,
N_h(s, a) = ∑_k=1^K1_{s_h^(k) = s, a_h^(k) = a} ,
P_h^ MLE (s' | s, a) = ∑_k=1^K1_{s_h+1^(k) = s', s_h^(k) = s, a_h^(k) = a}/N_h(s, a) if N_h(s, a) > 0
1/| S | if N_h(s, a) = 0
,
P_0^ MLE (s) = 1/K∑_k=1^K1_{s^(k)_1 = s} .
We can construct Q^ MLE based on R^ MLE and P^ MLE according to (<ref>), and since all Nash equilibria have the same value for zero-sum games, Q^ MLE is unique for every Markov perfect equilibrium of the Markov game with rewards R^ MLE and transitions P^ MLE. Then we have that (D) is a singleton Q^ MLE.
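A count-based sketch of these estimators is given below (our own helper, with the stated defaults of zero reward and uniform transitions left implicit for unvisited pairs); episodes are assumed to be lists of (state, joint action, reward) tuples of length H, with integer state indices.

```python
import numpy as np
from collections import defaultdict

def mle_estimates(episodes, n_states):
    """Count-based MLE estimates mirroring the formulas above: per-(h, s, a)
    mean rewards, transition frequencies, and the empirical initial-state
    distribution.  Keys absent from the returned dicts correspond to unvisited
    (h, s, a) pairs, for which the stated defaults apply."""
    H = len(episodes[0])
    rew_sum = defaultdict(float)
    count = defaultdict(int)
    trans = defaultdict(lambda: np.zeros(n_states))
    P0 = np.zeros(n_states)
    for ep in episodes:
        P0[ep[0][0]] += 1.0
        for h, (s, a, r) in enumerate(ep):
            rew_sum[(h, s, a)] += r
            count[(h, s, a)] += 1
            if h + 1 < H:
                trans[(h, s, a)][ep[h + 1][0]] += 1.0
    P0 /= len(episodes)
    R_mle = {k: rew_sum[k] / count[k] for k in count}
    P_mle = {k: trans[k] / count[k] for k in trans if count[k] > 0}
    return R_mle, P_mle, P0
```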
[Theory of Mind for Confidence Bound Victims]
Given a dataset D ∈, if the attacker believes the victims estimate the Markov game by estimating the rewards and transitions within some confidence region around some point estimates such as the maximum likelihood estimates, as described in <cit.>, then (D) would be a polytope with Q functions induced by the Markov games (, , P, R, H) with P and R satisfying, for every h ∈[H], s ∈, a∈,
(s, a) ∈(s, a) {R ∈ℝ: | R - (s, a) | ≤(s, a) },
(s, a) ∈(s, a) {P ∈Δ : P - (s, a)_1≤(s, a)} ,
for some point estimates P̂, R̂, and radii and . We note that (D) is a polytope in , but it has an exponential number of vertices. We can construct a tight hypercube around this polytope and call it the outer approximation of (D). It contains all the Q functions in the following set, for every h ∈[H], s ∈, a∈,
(s, a) ∈[(s, a), (s, a)],
(s, a) min_R ∈(s, a) R + min_P ∈(s, a)∑_s' ∈ P(s') max_∈Δmin_∈Δ(s', π),
(s, a) max_R ∈(s, a) R + max_P ∈(s, a)∑_s' ∈ P(s') max_∈Δmin_∈Δ(s', π).
We omit Example <ref> and Example <ref> for Markov games since the constructions are identical, except it is done for every stage game. As described in Example <ref>, we formally define the outer approximation of the theory-of-mind set for Markov games as follows.
[Outer Approximation of Theory of Mind]
An outer approximation of (D) is a set denoted by (D) that satisfies (D) ⊆(D) for every D ∈, and can be written in the form,
(D) = { Q ∈ : | (s, a) - (s, a) | ≤(s, a), ∀ a∈, h ∈[H], s ∈},
for some point estimate Q̂ and radius .
We call (D) a linear outer approximation if Q̂ is linear in {{}_h=1^H}_k=1^K .
§.§ The Cheapest Way to Move ToM into UN for Markov Games
In this subsection, we restate the attacker's problem for multi-agent reinforcement learners.
[Attacker's Problem]
The attacker's problem with target policy is,
inf_∈(D) C(D, )
s.t. () ⊆().
For reward poisoning problems, we consider the following L_1 cost.
[L_1 Cost Function]
For the reward poisoning problem, where ^(R)(D) is all possible datasets in the form D̃ = {{(s_h^(k), a_h^(k), r̃_h^(k))}_h=1^H}_k=1^K that are modified from D = {{(s_h^(k), a_h^(k), r_h^(k))}_h=1^H}_k=1^K , we define the L_1 cost by C^(1)(D, D̃) = ∑_k=1^K∑_h=1^H|r̃_h^(k) - r_h^(k)|.
We use the same ι strictness relaxation of the unique Nash set and the linear outer approximation of the theory-of-mind set to convert (<ref>) into a linear program, which can be solved efficiently. We state this observation as the following theorem.
Given ι > 0 and a linear , the following problem is a relaxation of the attacker's reward poisoning problem and can be converted into a linear program,
min_∈^(R)(D) C^(1)(D, )
s.t. () ⊆(; ι).
[Maximum Likelihood Centered Linear Program]
In the case R̂ = R^ MLE and P̂ = P^ MLE, and we construct (D) as described in Example <ref>, (<ref>) can be converted into a linear program even without explicitly constructing the (D) set. We provide an intuition here and the formal construction in the proof of Theorem <ref>,
min_∈[-b, b]^K ∑_k=1^K∑_h=1^H| - |
s.t. R^ MLE is a linear function of satisfying (<ref>)
P^ MLE is independent of satisfying (<ref>)
Q^ MLE is a linear function of R^ MLE thus satisfying (<ref>)
Q and Q are upper and lower bounds of (; Q^ MLE ) satisfying (<ref>)
(Q, Q) is in () satisfying (<ref>)
Similar to Example <ref>, we move the hypercube (; Q^ MLE ) into the polytope () by moving one of the corners into the polytope. Note that if Q and Q are not constructed directly as linear functions of , and are computed by (<ref>), then these constraints are not linear in . We avoid this problem by using the dual linear program of (<ref>). We present the details in the appendix in the proof of Theorem <ref>. All other constraints are linear in , and as a result, (<ref>) is a linear program.
In the end, we present a sufficient but not necessary condition for the feasibility of (<ref>) and (<ref>). This condition applies directly to normal-form games with H = 1.
For ι > 0, (D) with Q̂ = Q^ MLE, and N_h(s, a) > 0 for every h ∈[H], s ∈, a∈ where either a_1 = π_1^†(s) or a_2 = π_2^†(s), the attacker's reward poisoning problem is feasible if for every h ∈[H], s ∈, a∈,
ε_h(s, a) ≤ (b - ι)/(4 H).
r̃_h^(k) = 0 if a_h^(k) = π^†(s_h^(k))
-b if a_1,h^(k) ≠ π_1^†(s_h^(k)), a_2,h^(k) = π_2^†(s_h^(k))
b if a_1,h^(k) = π_1^†(s_h^(k)), a_2,h^(k) ≠ π_2^†(s_h^(k))
r_h^(k) otherwise
To construct a feasible attack under (<ref>), we use the poisoned rewards in (<ref>). An example where each agent has three actions and the target action profile is action (1, 1) is shown in Table <ref>. With these poisoned rewards, the maximum likelihood estimate of the game has a unique Nash equilibrium π^†(s) with a value of 0 in every stage (h, s). Furthermore, if either the radius of rewards or the radius of Q functions for the theory-of-mind set is less than (b - ι)/(4 H), we can show inductively that π^†(s) remains the unique Nash equilibrium in every stage (h, s), thus showing that every Q function in the theory-of-mind set is also in the unique Nash set, which means the attack is feasible. The complete proof is in the appendix.
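The construction can be written down directly. The helper below (names are ours) rewrites the rewards of a dataset according to the rule above, in the convention where player 1 maximizes the reward; it does not attempt to verify feasibility, which additionally requires the coverage and radius conditions of the theorem.

```python
def feasibility_attack(episodes, target_policy, b=1.0):
    """Construct the always-feasible poisoned rewards sketched above: reward 0
    when both players follow the target, -b when only player 1 deviates, +b
    when only player 2 deviates, and the original reward otherwise."""
    poisoned = []
    for ep in episodes:
        new_ep = []
        for h, (s, (a1, a2), r) in enumerate(ep):
            t1, t2 = target_policy[h][s]           # target actions at stage (h, s)
            if a1 == t1 and a2 == t2:
                r_new = 0.0
            elif a1 != t1 and a2 == t2:
                r_new = -b
            elif a1 == t1 and a2 != t2:
                r_new = b
            else:
                r_new = r
            new_ep.append((s, (a1, a2), r_new))
        poisoned.append(new_ep)
    return poisoned
```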
§ DISCUSSIONS
We discuss a few extensions.
* Faking a Unique Mixed Strategy Nash Equilibrium: due to the sensitivity of the mixing probabilities to small perturbations of the reward function, as long as the theory-of-mind set has non-zero volume, it is impossible to install a mixed strategy profile (or stochastic policy for Markov games) as the unique equilibrium in general. However, this could be possible when the theory-of-mind set is a singleton. To characterize the unique Nash set for a mixed strategy profile, we need to extend Proposition <ref> to include an additional invertibility condition on the reward function, but it is difficult to convert this condition into a linear constraint. We leave the technical details for future work.
* Faking an Optimal Policy for Single-Agent Reinforcement Learners: to attack a single-agent Markov decision process, we observe that a policy π is the unique optimal policy if and only if π is deterministic and is the strict optimal policy. As a result, the unique optimal policy set is also a polytope and can be viewed as a special case of the unique Nash set for a one-player game. In the case of reward poisoning, the attacker's problem can be formulated as a linear program similar to (<ref>).
* Faking a Unique Coarse Correlated Equilibrium in Every Stage Game: for two-player zero-sum Markov games, π is the unique Markov Perfect Coarse Correlated Equilibrium if and only if π is the unique Markov Perfect Equilibrium. Therefore, the results in the previous section apply directly.
* Faking a Unique Markov Perfect Dominant Strategy Equilibria for General-Sum Games: for n-player general-sum Markov games, if π is a deterministic policy and it is a Markov Perfect Strict Dominant Strategy Equilibrium, then π is the unique Markov Perfect Equilibrium. The attacker's formulation in <cit.> can be viewed as a special case of our results when Nash equilibria are replaced by dominant strategy equilibria.
plain
§ SUPPLEMENTARY MATERIAL
§.§ Proof of Proposition <ref> and Theorem <ref>
We show that for zero-sum games, strict MPEs are MPEs and they are unique. We use the following definition of MPE and strict MPE for zero-sum games rewritten in terms of Q functions. Proposition <ref> is a special case of Theorem <ref> with H = | | = 1.
(Markov Perfect Equilibrium for Zero-sum Games) is a MPE if for each h ∈[H], s ∈,
^(s, (s)) ≥^(s, (a_1, (s))), ∀ a_1≠(s),
^(s, (s)) ≤^(s, ((s), a_2)), ∀ a_2≠(s).
(Strict Markov Perfect Equilibrium for Zero-sum Games) is a strict MPE if for each h ∈[H], s ∈,
^(s, (s)) > ^(s, (a_1, (s))), ∀ a_1≠(s),
^(s, (s)) < ^(s, ((s), a_2)), ∀ a_2≠(s).
Fix a period h ∈[H], and assume in periods h + 1, h + 2, ..., H, is the unique NE in every state s ∈. This is vacuously true in period H.
First, (s) is a NE since (<ref>) implies (<ref>) and (<ref>) implies (<ref>).
Now, for a contradiction, assume (a^'_1, a^'_2) ≠(s) is another NE in the stage game in period h in some state s ∈, then,
^(s, (a^'_1, a^'_2)) ≥^(s, ((s), a^'_2)),
^(s, (a^'_1, a^'_2)) ≤^(s, (a^'_1, (s))).
From the strict MPE conditions,
^(s, (s)) (<ref>)>^(s, ((s), a^'_2)),
^(s, (s)) (<ref>)<^(s, (a^'_1, (s))).
Combine the above inequalities, we get,
^(s, (s)) (<ref>), (<ref>)>^(s, (a^'_1, a^'_2)),
^(s, (s)) (<ref>), (<ref>)<^(s, (a^'_1, a^'_2)),
which is a contradiction.
Therefore, is the unique NE in period h, state s. Since h and s are arbitrary, is the unique MPE.
§.§ Proof of Proposition <ref> and Theorem <ref>
We first write out the complete optimization problem for (<ref>) in Example <ref>, then we show that the optimization is a relaxation by showing for any Q^∈[^, ^] elememtwise, is a strict MPE, and as a result Theorem <ref> implies its uniqueness. The proof that the problem can be converted into a linear program is similar to LP conversions in <cit.>. We do not write out the complete LP, and instead we show that each constraint can be converted into a linear constraint. Theorem <ref> is a special case of (<ref>) with given ^ and ^ that are not derived from the rewards and transitions, and Proposition <ref> is a special case of Theorem <ref> when H = | | = 1.
min_∈[0, 1]^H K ∑_k=1^K∑_h=1^H| - |
subject to (s, a) = ∑_k=1^K∑_h=1^H1_{ = s, = a}max{(s, a), 1},
∀ h ∈[H], s ∈, a∈,
^(s, a) = min_R ∈(s, a) R + min_P ∈(s, a)∑_s' ∈ P(s') ^(s', (s')),
∀ h ∈[H], s ∈, a∈,
^(s, a) = max_R ∈(s, a) R + max_P ∈(s, a)∑_s' ∈ P(s') ^(s', (s')),
∀ h ∈[H], s ∈, a∈,
^(s, a) = ^(s, a) = 0, ∀ s ∈, a∈,
^(s, (s)) ≥^(s, (a_1, (s))) + ι,
∀ h ∈[H], s ∈, a_1≠(s),
^(s, (s)) ≤^(s, ((s), a_2)) - ι,
∀ h ∈[H], s ∈, a_2≠(s).
Since we evaluate the Q̲ and Q̄ functions on the policy π, we add the superscript π on Q̲ and Q̄ inside the optimization for clarity.
Take any R ∈ and P ∈, due to the definition of ^ and ^, which are replicated in (<ref>) and (<ref>), we know that, for each h ∈[H], s ∈, a∈,
^(s, a) ≤^(s, a) ≤^(s, a).
Fix period h ∈[H], and assume in periods h + 1, h + 2, ..., H, is the Nash equilibrium in every state s ∈. This is vacuously true in period H.
For a fixed s ∈, for any a_1≠(s),
^(s, (s)) (<ref>)≥^(s, (s))
(<ref>)≥^(s, (a_1, (s))) + ι
(<ref>)≥^(s, (a_1, (s))) + ι,
and for any a_2≠(s),
^(s, (s)) (<ref>)≤^(s, (s))
(<ref>)≤^(s, ((s), a_2)) - ι
(<ref>)≥^(s, ((s), a_2)) - ι,
(<ref>) and (<ref>) imply that (s) is the Nash equilibrium in period h state s.
Therefore, Q^∈(; ι), and by Theorem <ref>, is the unique MPE.
Now, to show that the problem can be converted into an LP, we note that (<ref>), (<ref>) and (<ref>) are linear constraints. We only have to convert (<ref>) and (<ref>) into linear constraints, in particular, we convert the following linear program, for some h ∈[H], s ∈, a∈,
min_P∑_s' ∈ P(s') ^(s', (s'))
subject to P(s') ≤(s' | s, a) + (s, a), ∀ s' ∈,
P(s') ≥(s' | s, a) - (s, a), ∀ s' ∈,
∑_s' ∈ P(s') = 1,
P(s') ≥ 0, ∀ s' ∈,
into its dual problem,
max_u∈ℝ^, v∈ℝ^, w∈ℝ ∑_s' ∈(s' | s, a)(u_s' - v_s') + (s, a) (u_s' + v_s') + w
subject to u_s' - v_s' + w≥ -^(s', (s')), ∀ s' ∈,
u_s'≥ 0, v_s'≥ 0, ∀ s' ∈.
Therefore, (<ref>) can be rewritten as the following linear constraints,
^(s, a) = (s, a) - (s, a)
+ ∑_s' ∈(s' | s, a)(u_s' - v_s') + (s, a) (u_s' + v_s') + w,
u_s' - v_s' + w ≥ -^(s', (s')), ∀ s' ∈,
u_s' ≥ 0, v_s'≥ 0, ∀ s' ∈.
The similar dual problem can be written out for the to replace (<ref>),
^(s, a) = (s, a) + (s, a)
+ ∑_s' ∈(s' | s, a)(u_s' - v_s') + (s, a) (u_s' + v_s') + w,
u_s' - v_s' + w ≥^(s', (s')), ∀ s' ∈,
u_s' ≥ 0, v_s'≥ 0, ∀ s' ∈.
The linearization of the other Q̲ and Q̄ constraints is similar.
§.§ Proof of Theorem <ref>
Again, we write the proof for (<ref>) in Example <ref>, and Theorem <ref> is a special case with given Q̲ and Q̄ that are not derived from the rewards and transitions. In particular, setting = and = 0 would lead to the result stated in Theorem <ref>. We first provide the intuition behind the proof; the formal proof is at the end of this subsection.
Suppose the target action profile is (1, 1) in some state s in period h, we show that the target action profile (1, 1) is the unique NE for any (s, ·) ∈[(s, ·), (s, ·)] under the following attack,
= -b if ≠(), = ()
0 if = (), = ()
b if = (), ≠()
otherwise
.
To simplify the notations, we define the bounds on the cumulative Q value in period h + 1, h + 2, ..., H as,
= ∑_h' = h + 1^Hmin_s' ∈(s', (s'))
= ∑_h' = h + 1^Hmax_s' ∈(s', (s'))
(s) is lower bounded by,
∖ 1 2 ... | |
1 0 - (s, (1, 1)) + b - (s, (1, 2)) + ... b - (s, (1, | |)) +
2 -b - (s, (2, 1)) + ? ... ?
... ... ... ... ...
| | - b - (s, (| |, 1)) + ? ... ?
(s) is upper bounded by,
∖ 1 2 ... | |
1 0 + (s, (1, 1)) + b + (s, (1, 2)) + ... b + (s, (1, | |)) +
2 -b + (s, (2, 1)) + ? ... ?
... ... ... ... ...
| | -b + (s, (| |, 1)) + ? ... ?
For (1, 1) to be the ι strict, thus unique, Nash equilibrium for all Q ∈[, ], sufficient conditions are, for a_1≠ 1 and a_2≠ 1,
- (s, (1, 1)) + - ι2 ≥ - b2 H(H - h + 1) ≥ -b + (s, (a_1, 1)) + + ι2 ,
(s, (1, 1)) + + ι2 ≤b2 H(H - h + 1) ≤ b - (s, (1, a_2)) + - ι2 ,
which would be true in period 1 if the following is satisfied for a such that either a_1 = (s) or a_2 = (s),
(s, a) ≤b - ι4 H≤b2 H - ι2,
which in turn implies,
≥ - b2 H(H - h + 1) + b4 H,
≤b2 H(H - h + 1) - b4 H.
We provide the formal proof below.
We assume is satisfied, meaning, for each h ∈[H], s ∈, a∈,
(s, a) ≤b - ι4 H≤b2 H - ι2 .
In addition, take R ∈, based on (<ref>), we can compute using (<ref>), and for each h ∈[H], s ∈,
- (s, (s)) ≤(s, (s)) ≤(s, (s)),
-b - (s, (a_1, (s))) ≤(s, (a_1, (s)))
≤ -b + (s, (a_1, (s))),
b - (s, ((s), a_2)) ≤(s, ((s), a_2))
≤ b + (s, ((s), a_2)).
We proceed by induction. In period H, for a_1≠(s),
^(s, (s)) - ι2 = (s, (s)) - ι2
(<ref>)≥ - (s, (s)) - ι2
(<ref>)≥ - b2 H
≥ -b + b2 H
(<ref>)≥ - b + (s, (a_1, (s))) + ι2
(<ref>)≥(s, (a_1, (s))) + ι2
= ^(s, (a_1, (s))) + ι2 ,
and for a_2≠(s),
^(s, (s)) + ι2 = (s, (s)) + ι2
(<ref>)≤(s, (s)) + ι2
(<ref>)≤b2 H
≤ b - b2 H
(<ref>)≤ b - (s, (a_1, (s))) - ι2
(<ref>)≤(s, ((s), a_2)) - ι2
= ^(s, ((s), a_2)) - ι2 .
Now, fix a period h < H, we assume in periods h' ∈{h + 1, h + 2, ..., H}, in every state s ∈, is the Nash equilibrium, and,
- b2(H - h' + 1) ≤^(s, (s)) ≤b2(H - h' + 1).
This is true in period H due to (<ref>) and (<ref>).
Now in period h, for a fixed s ∈, for any a_1≠(s),
^ (s, (s)) - ι2
= (s, (s)) + ∑_s' ∈(s' | s, (s)) ^(s', (s')) - ι2
≥(s, (s)) + min_s' ∈^(s', (s')) - ι2
(<ref>)≥(s, (s)) - b2(H - h) - ι2
(<ref>)≥ - (s, (s)) - b2 H(H - h) - ι2
(<ref>)≥ - b2 H - b2 H(H - h)
≥ - b2 H(H - h + 1)
≥ -b + b2 H + b2 H(H - h)
(<ref>)≥ -b + (s, (a_1, (s))) + b2 H(H - h) + ι2
(<ref>)≥(s, (a_1, (s))) + b2 H(H - h) + ι2
(<ref>)≥(s, (a_1, (s))) + max_s' ∈^(s', (a_1, (s'))) + ι2
≥(s, (a_1, (s))) + ∑_s' ∈(s' | s, (a_1, (s))) ^(s', (a_1, (s'))) + ι2
= ^(s, (a_1, (s))) + ι2 ,
and for a_2≠(s),
^ (s, (s)) + ι2
= (s, (s)) + ∑_s' ∈(s' | s, (s)) ^(s', (s')) + ι2
≤(s, (s)) + max_s' ∈^(s', (s')) + ι2
(<ref>)≤(s, (s)) + b2 H(H - h) + ι2
(<ref>)≤(s, (s)) + b2 H(H - H) + ι2
(<ref>)≤b2 H + b2 H(H - h)
= b2 H(H - h + 1)
≤ b - b2 H - b2 H(H - h)
(<ref>)≤ b + (s, (a_1, (s))) - b2 H(H - h) - ι2
(<ref>)≤(s, ((s), a_2)) - b2 H(H - h) - ι2
(<ref>)≤(s, ((s), a_2)) + min_s' ∈^(s', ((s), a_2)) - ι2
≤(s, ((s), a_2)) + ∑_s' ∈(s' | s, ((s), a_2)) ^(s', ((s), a_2)) - ι2
= ^(s, ((s), a_2)) - ι2 .
Therefore, π is the Nash equilibrium in period h, state s, and (<ref>) and (<ref>) are consistent with (<ref>). By induction, π is a strict, thus unique, Nash equilibrium in every stage game, making π the unique MPE.
|
http://arxiv.org/abs/2306.01841v1
|
20230602180102
|
Binary and Ternary Natural Language Generation
|
[
"Zechun Liu",
"Barlas Oguz",
"Aasish Pappu",
"Yangyang Shi",
"Raghuraman Krishnamoorthi"
] |
cs.CL
|
[
"cs.CL"
] |
Ternary and binary neural networks enable multiplication-free computation and promise multiple orders of magnitude efficiency gains over full-precision networks if implemented on specialized hardware. However, since both the parameter and the output space are highly discretized, such networks have proven very difficult to optimize. The difficulties are compounded for the class of transformer text generation models due to the sensitivity of the attention operation to quantization and the noise-compounding effects of autoregressive decoding in the high-cardinality output space. We approach the problem with a mix of statistics-based quantization for the weights and elastic quantization of the activations and demonstrate the first ternary and binary transformer models on the downstream tasks of summarization and machine translation. Our ternary BART base achieves an R1 score of 41 on the CNN/DailyMail benchmark, which is merely 3.9 points behind the full model while being 16x more efficient. Our binary model, while less accurate, achieves a highly non-trivial score of 35.6. For machine translation, we achieved BLEU scores of 21.7 and 17.6 on the WMT16 En-Ro benchmark, compared with a full precision mBART model score of 26.8. We also compare our approach in the 8-bit activation setting, where our ternary and even binary weight models can match or outperform the best existing 8-bit weight models in the literature. Our code and models are available at: <https://github.com/facebookresearch/Ternary_Binary_Transformer>.
§ INTRODUCTION
Generative pre-trained transformers <cit.> have emerged as powerful and generic tools, driving breakthroughs not only in language understanding but in the field of AI in general. These models owe their success mainly to their seemingly infinite ability to scale to ever-larger data and model sizes. Unfortunately, such scaling comes at the cost of large computational requirements, putting extremely large generative transformers out of reach of all but the most resource-rich institutions. Even moderately sized pre-trained transformers have limited applications due to their size and computational cost. Making generative transformers more efficient is imperative for widening their use to more devices and practical applications.
In this work, we explore making generative pre-trained transformers more efficient via the quantization of their weights and activations. Quantizing the weights of a neural network is useful for compression and allows the model to be stored more efficiently. However, compression alone does not reduce computation costs since the network's activations need to be computed in full precision. Quantizing both weights and activations allows computation to be performed with lower precision, potentially leading to significant efficiency gains depending on the quantization level and hardware implementation. Quantizing neural networks have a long history, and multiple works have attempted to quantize pre-trained transformers at various quantization levels <cit.>. Most of this work focuses on encoder-only models (mainly BERT) for sentence and token classification tasks. Quantizing text generation models has generally been regarded as a more difficult task <cit.> due to the large output vocabulary and sequential decoding. Recent work has tackled this problem, though only for mild quantization levels (down to 8-bit activations) and with mixed success.
In contrast, we are interested in very low-bit quantization, down to ternary and even binary weights and activations. In order to achieve this, we combine and unify best practices for weight and activation quantization and present a framework that uses gradient-matching quantization for weights and elastic quantization for activations. We apply our method to natural language generation tasks and, for the first time, demonstrate low-bit generative transformers of competitive accuracy. Our ternary (weight and activation) model lags a full-precision BART <cit.> model by only 4 points in ROUGE on the XSUM summarization dataset. In contrast, our model with ternary weights and 8-bit activations comes within 1 point and even outperforms comparable state-of-the-art models with 8-bit weights. We also demonstrate a fully binary (weights and activations) model. While not as competitive, it is able to achieve a highly non-trivial ROUGE-1 score of 31.7.
Our results also extend to machine translation models. On the WMT16 En-Ro benchmark, we quantize an mBART model to extend the ternary-weight 8-bit activation SoTA by 1.2 points while demonstrating fully ternary and fully binary translation models for the first time.
We summarize our contributions as follows:
∙ We propose a novel combination of statistics-based weight quantization with learning-based activation quantization, which enables stably training transformer encoder-decoder models to converge in the fully ternary/binary settings, which was not previously possible.
∙ We significantly improve the state-of-the-art text generation models in the 8-bit activation and ternary/binary weight settings while setting the first non-trivial baselines for the fully ternary and fully binary settings.
§ METHOD
In this section, we first introduce the previous practices in binarization and ternarization. Then, we introduce a unified statistics-based weight binarization / ternarization method that can alleviate the gradient mismatch issue and enhance the entropy of the quantized weights. Lastly, we analyze the difference between weight quantization and activation quantization and propose an elastic ternarization method for activations. We abbreviate our method as TBT, short for “Ternary / Binary Transformer”.
§.§ Preliminary
§.§.§ Ternarization
Ternary neural networks, where real values are quantized to three levels, are first introduced in <cit.>. Thus, these values can be represented in 2 bits, leading to a 16× reduction in size and computation. Moreover, the computations can be calculated multiplication-free, leading to even further computation gains on suitable hardware. The recent work integrates the ternarization algorithm in natural language models for quantizing the weights and activations in classification tasks <cit.> and ternarizing the weight (8-bit activations are used) in generative models <cit.>. The general formula <cit.> for ternarization is as follows:
𝐗_𝐓^i ={[ - α__𝐓, if 𝐗_𝐑^i < -Δ; 0, if -Δ⩽𝐗_𝐑^i ⩽Δ; + α__𝐓, if 𝐗_𝐑^i > Δ; ].
Δ = 0.7 · ||𝐗_𝐑||_l1/n__𝐗_𝐑
α__𝐓 = ∑_i |𝐗_𝐑^i| ·1_{|𝐗_𝐑^i|>Δ} / ∑_i 1_{|𝐗_𝐑^i|>Δ}
Here 𝐗_𝐓 denotes the ternary weights/activations, and 𝐗_𝐑 represents their real-valued counterparts. n__𝐗_𝐑 denotes the total number of elements in the tensor. Δ is the ternary threshold, and α__𝐓 is the scaling factor that minimizes l2-loss between 𝐗_𝐓 and 𝐗_𝐑.
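As a rough illustration of the ternarization above, the following PyTorch-style sketch (our own illustrative code, not taken from the cited works; names are placeholders) computes the threshold, the scaling factor, and the ternary tensor:

import torch

def twn_ternarize(x_r: torch.Tensor):
    # threshold: Delta = 0.7 * mean(|X_R|)
    delta = 0.7 * x_r.abs().mean()
    # entries with magnitude above the threshold are mapped to +/- alpha, the rest to 0
    mask = (x_r.abs() > delta).float()
    # alpha: mean magnitude of the above-threshold entries
    alpha = (x_r.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    x_t = alpha * torch.sign(x_r) * mask
    return x_t, alpha

In practice such statistics are typically computed per row of a weight matrix rather than per tensor.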
§.§.§ Binarization
Neural network binarization refers to representing the weights and/or activations with bi-level values. It was first proposed in BNN <cit.> and has evolved in follow-up works <cit.>. <cit.> formulates binarization as:
𝐗_𝐁^i = α__𝐁· Sign(𝐗_𝐑^i) ={[ - α__𝐁, if 𝐗_𝐑^i < 0; + α__𝐁, if 𝐗_𝐑^i ⩾ 0 ].
α__𝐁 = ||𝐗_𝐑||_l1/n__𝐗_𝐑
Here 𝐗_𝐁 can represent binary weights or binary activations. α__𝐁 denotes the scaling-factor that minimize the l2 loss between 𝐗_𝐑 and α__𝐁· Sign(𝐗_𝐑).
The acceleration and compression effect of ternary/binary neural networks is significant. By representing the weights and activations with {-1, 0, 1}, the network enjoys ∼16× memory saving compared to its 32-bit floating-point counterpart. When the weights and activations are further binarized to only 1 bit (i.e., {-1, 1}), up to 32× model-size reduction and 58× speedup on CPUs have been achieved <cit.>, where the matrix multiplication operations are replaced with light-weight bitwise XNOR operations.
Despite these appealing characteristics, naively binarizing or ternarizing a transformer model for natural language generation results in severe accuracy drops or even a total failure of training. It has been observed that the attention layers of the transformer network are difficult to quantize to low bits. Also, the auto-regressive decoding tends to accumulate errors due to quantization. Given that generative language models require high-precision outputs, quantizing both the activations and weights of these models to extremely low bit-widths is non-trivial and has not been explored before.
§.§ Stats-based max-entropy isometric weight quantization
We propose a statistics-based method for weight binarization/ternarization. Particularly, this novel quantization method considers maximizing the entropy of the quantized weights and reducing the gradient mismatch in the backward pass. Previous works <cit.> are mainly focused on minimizing the l2 loss between the quantized weights and the real-valued weights to find the optimal quantization scheme,
α^*, 𝐖̂_𝐐^* = arg min_{α, 𝐖̂_𝐐} || α𝐖̂_𝐐 - 𝐖_𝐑 ||_l2
where 𝐖̂_𝐐 denotes binary/ternary weights and α^* denotes the optimal scaling factor calculated.
Despite the broad application and great success of this classic quantization scheme, we find that merely minimizing the l2 loss neglects several critical issues in ultra-low-bit weight quantization: (1) The information entropy of the quantized weights is not considered. Eq. <ref> and Eq. <ref> calculate the quantized weights to minimize the distance to the real-valued weights, which can lead to an imbalanced quantized weight distribution and harm the representation capacity of the quantized weights. (2) The quantization functions Eq. <ref> and Eq. <ref> are not isometric, meaning that they do not enforce magnitude consistency between the quantized and real-valued weights, while we find that magnitude consistency contributes significantly to accurate gradient estimation.
Considering the above two limitations in previous solutions, we are motivated to design a novel quantization function that enhances information entropy and reduces gradient mismatch. To boost the weights representation capability, in information theory, more information is preserved when the quantized weights contain higher entropy:
max_{p_i} ℋ = -∑_{i=1}^N p_i log(p_i), s.t. ∑_{i=1}^N p_i = 1
with p_i denoting the proportion of real-valued weights being quantized to i^th quantization level in total N levels. Eq. <ref> can be easily solved with a Lagrange multiplier, and the optimal p_i^* = 1/N, i ∈{1,2,…, N}, suggesting the best quantization scheme to preserve maximum information entropy is to distribute the real-valued weights in all quantization levels as evenly as possible.
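The Lagrange-multiplier step can be spelled out in one line (our own brief derivation): with ℒ(p, λ) = -∑_{i=1}^N p_i log(p_i) + λ (∑_{i=1}^N p_i - 1), stationarity gives ∂ℒ/∂p_i = -log(p_i) - 1 + λ = 0, so every p_i equals the same constant e^{λ - 1}; the constraint ∑_{i=1}^N p_i = 1 then yields p_i^* = 1/N, at which the entropy attains its maximum value log(N).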
For reducing the gradient mismatch, as suggested by the previous binarization work <cit.>, the magnitude difference between the quantized weight and the real-valued weight will greatly influence the gradient scale and a mismatch in magnitude will be amplified in back-propagation and cause gradient vanishing or explosion during training. Thus it is important to ensure the magnitude of real-valued weights and quantized weights are consistent.
Combining two requirements discussed above, we proposed max-entropy isometric weight quantization.
In ternarization, it is formulated as
𝐖_𝐓^i = α__𝐓 Clip(⌊(𝐖_𝐑^i - μ__𝐓)/α__𝐓⌉, -1, 1)
where μ__𝐓 = mean(𝐖_𝐑),
α__𝐓 = 4/3·||𝐖_𝐑 - μ__𝐓||_l1/n__𝐖_𝐑
where 𝐖_𝐓 and 𝐖_𝐑 refer to the ternary weights and real-valued weights, respectively. The rounding function ⌊·⌉ and the Clip(·) function quantize the weights to {-1, 0, 1}. μ__𝐓 is the mean of the real-valued weights and n__𝐖_𝐑 denotes the number of weights in the weight matrix. The scaling factor α__𝐓 is calculated from the weight statistics and follows the entropy rule to scale the real-valued weights 𝐖_𝐑 to be evenly distributed across the quantization levels. In the ternary case, the weights are quantized to {-α__𝐓, 0, α__𝐓}. When the real-valued weights are initialized as uniformly and symmetrically distributed <cit.>, the scaling factor α__𝐓 will distribute 𝐖_𝐑^i/α__𝐓 to [-1.5, 1.5], such that the resulting ternary weights have a near uniform distribution over the three ternary levels. Meanwhile, Eq. <ref> is an isometric mapping where the real-valued weights are scaled by 1/α__𝐓 to near [-1, 1] and multiplied by α__𝐓 to scale back after quantization. In this way, the magnitude is preserved.
Correspondingly, in the binary case we have,
𝐖_𝐁^i = α__𝐁·Sign((𝐖_𝐑^i - μ__𝐁)/α__𝐁)
where μ__𝐁 = mean(𝐖_𝐑),
α__𝐁 = ||𝐖_𝐑 - μ__𝐁||_l1/n__𝐖_𝐑
Here 𝐖_𝐁 denotes the binary weights, where subtracting the average μ__𝐁 makes the real-valued weights zero-centered before binarization and thus encourages an even distribution of the binarized weights. The scaling factor α__𝐁 then matches the magnitude between the real-valued and binary weights. Note that in Eq. <ref>, 𝐖_𝐁^i = α__𝐁·Sign((𝐖_𝐑^i - μ__𝐁)/α__𝐁) = α__𝐁·Sign(𝐖_𝐑^i - μ__𝐁); we explicitly include α__𝐁 in the denominator to keep the binarization function isometric, so that the gradients w.r.t. the weights can be calculated straightforwardly as:
∂𝐖_𝐁^i/∂𝐖_𝐑^i STE≈ 1_{|(𝐖_𝐑^i - μ__𝐁)/α__𝐁| < 1}
STE is an abbreviation for the straight-through estimator <cit.>, which replaces the non-differentiable Sign function with the Clip function in the backward pass. We show that the proposed max-entropy isometric weight quantization improves the accuracy of weight binarization / ternarization by 6.0 / 11.53 ROUGE-L points on the CNN/DailyMail benchmark, respectively. More details can be found in Sec. <ref>.
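To make the proposed weight quantizers concrete, here is an illustrative PyTorch-style sketch (ours, not the authors' released implementation; for simplicity the straight-through estimator is realized with the detach trick and passes gradients through unclipped, a mild simplification of the gradient expression above):

import torch

def ternary_weight(w_r: torch.Tensor) -> torch.Tensor:
    # max-entropy isometric ternarization: levels {-alpha, 0, +alpha}
    mu = w_r.mean()
    alpha = (4.0 / 3.0 * (w_r - mu).abs().mean()).clamp(min=1e-8)
    q = torch.clamp(torch.round((w_r - mu) / alpha), -1.0, 1.0)
    w_t = alpha * q
    # straight-through estimator: forward returns w_t, backward acts as identity in w_r
    return w_r + (w_t - w_r).detach()

def binary_weight(w_r: torch.Tensor) -> torch.Tensor:
    # max-entropy isometric binarization: levels {-alpha, +alpha}
    mu = w_r.mean()
    alpha = (w_r - mu).abs().mean().clamp(min=1e-8)
    w_b = alpha * torch.sign(w_r - mu)
    return w_r + (w_b - w_r).detach()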
§.§ Learning-based activation quantization
In contrast to neural network weights that are stored on the disk, activations are calculated on-the-fly. The distribution of activations in a particular layer depends on the network weights as well as the corresponding input sequence, and thus varies from batch to batch. In order to have the quantization function better capture the underlying activation distribution, we propose learning-based activation quantization.
Inspired by BiT <cit.>, we divide the activation layers into two categories: the activation layers with non-negative values (𝐗_𝐑∈ℝ_+), i.e., Softmax/ReLU layer outputs and the rest of the layers with both positive and negative activations (𝐗_𝐑∈ℝ). We binarize / ternarize the first activation category (𝐗_𝐑∈ℝ_+) to {0, α} / {0, α, 2α}, and symmetrically quantize the later activation category (𝐗_𝐑∈ℝ) to {-α, α} and {-α, 0, α} in binary and ternary cases respectively. In this way, the activation distribution matches the original full-precision activations and thus reduces the quantization error. Further, we learn to scale the real-valued activations to better fit quantization thresholds, and this learnable scaling factor can be updated end-to-end with the gradients from the network loss to better account for overall network optimization.
In the ternary case, we propose the elastic ternarization function formulated as,
𝐗_𝐓^i = α__𝐓𝐗̂_𝐓^i
= {[ α__𝐓 Clip(⌊𝐗_𝐑^i/α__𝐓⌉, 0, 2), if 𝐗_𝐑∈ℝ_+; α__𝐓 Clip(⌊𝐗_𝐑'^i/α__𝐓⌉, -1, 1), if 𝐗_𝐑∈ℝ ].
where 𝐗_𝐑 and 𝐗_𝐓 denote real-valued and ternary activations, respectively. To keep the formula concise, we set 𝐗_𝐑' = 𝐗_𝐑 - mean(𝐗_𝐑), denoting the zero-mean real-valued activations. α__𝐓 is the scaling factor.
Different from the weight quantization, the scaling factor in Eq. <ref> is learned with the gradient update. We follow the practice in <cit.> to calculate the gradients with straight-through estimation (STE) bypassing the non-differentiable rounding function:
∂𝐗_𝐓^i/∂α__𝐓 STE≈
{[ 𝐗̂_𝐓^i -𝐗_𝐑^i/α__𝐓·1_0 ⩽𝐗_𝐑^i ⩽ 2α__𝐓, if 𝐗_𝐑∈ℝ_+; 𝐗̂_𝐓^i -𝐗_𝐑'^i/α__𝐓·1_|𝐗_𝐑'^i| ⩽α__𝐓, if 𝐗_𝐑∈ℝ ].
The learnable scaling factor can dynamically adapt to different activation distributions and improve the ternarization accuracy. In the binary case, it is formulated as.
𝐗_𝐁^i = α__𝐁𝐗̂_𝐁^i
={[ α__𝐁 Clip(⌊𝐗_𝐑^i/α__𝐁⌉, 0, 1), if 𝐗_𝐑∈ℝ_+; α__𝐁· Sign(𝐗_𝐑'^i/α__𝐁), if 𝐗_𝐑∈ℝ ].
Here 𝐗_𝐁 denotes the binary activations.
Correspondingly, the gradients w.r.t. the scaling factor α can be easily calculated as
∂𝐗_𝐁^i/∂α__𝐁 STE≈
{[ 𝐗̂_𝐁^i -𝐗_𝐑^i/α__𝐁·1_0 ⩽𝐗_𝐑^i ⩽α__𝐁, if 𝐗_𝐑∈ℝ_+; Sign(𝐗_𝐑'^i), if 𝐗_𝐑∈ℝ ].
We demonstrate that, with the learning-based activation quantization method and the statistics-based weight quantization scheme, the proposed TBT is for the first time able to quantize BART models for natural language generation tasks to ternary and even binary weights and activations, and achieves reasonable accuracy on summarization and translation benchmarks.
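A compact module-level sketch of the elastic activation ternarizer is given below (our own illustrative code with placeholder names; the scale is learned end-to-end, and the round-with-identity-gradient construction yields gradients that are close in spirit to, though not identical with, the hand-derived expressions above):

import torch
import torch.nn as nn

def _round_ste(t: torch.Tensor) -> torch.Tensor:
    # round in the forward pass, identity gradient in the backward pass
    return t + (torch.round(t) - t).detach()

class ElasticTernaryActivation(nn.Module):
    def __init__(self, non_negative: bool = False, init_scale: float = 1.0):
        super().__init__()
        self.non_negative = non_negative                      # True for Softmax/ReLU outputs
        self.alpha = nn.Parameter(torch.tensor(init_scale))   # learnable scaling factor
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.alpha.abs().clamp(min=1e-6)              # keep the scale positive
        if self.non_negative:                                 # quantize to {0, alpha, 2*alpha}
            q = torch.clamp(_round_ste(x / alpha), 0.0, 2.0)
        else:                                                 # quantize to {-alpha, 0, alpha}
            x = x - x.mean()
            q = torch.clamp(_round_ste(x / alpha), -1.0, 1.0)
        return alpha * q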
§ EXPERIMENTS
In this section, we evaluate the effectiveness of our low-bit quantization scheme for natural language generative model on text summarization benchmarks: CNN/DailyMail <cit.> and XSUM <cit.>. We additionally experiment on the machine translation task with mBART on WMT16 English-Romanian (En-Ro) dataset <cit.>.
§.§ Experimental settings
We follow recent work <cit.> in training the quantized network with initialization and knowledge distillation from a full-precision pre-trained model. Specifically, we use the BART-base <cit.> as our full-precision baseline for summarization tasks and mBART-large <cit.> for the translation task. We train the quantized models for 20 epochs on 8 GPUs with a batch size of 128 and a learning rate of 2.5e-4 for 8-bit activation models and 5e-4 for binary and ternary activation models.
§.§ Summarization
For the summarization task, we adopt the following benchmarks:
The XSUM dataset <cit.> consists of 226k documents sampled from the online news website of the BBC, together with short, one-sentence summaries. Since the summaries are very short, abstractive methods tend to do better on this dataset.
CNN/DailyMail <cit.> is another news summarization benchmark, with longer documents (~30 sentences) and longer, multi-sentence summaries. The dataset contains close to 300k document-summary pairs.
We use BART-base model <cit.>, which is an English-only encoder-decoder transformer with 140 million parameters. We compare using the standard ROUGE-{1,2,l} metrics for this task.
For the ternary weights and 8-bit activations setting, we compare with two state-of-the-art methods QuantBart <cit.> and DQ-BART <cit.>. For the fully ternary setting, and the binary quantization experiments, there is no prior art. Therefore we provide a naive quantization baseline, using popular implementations from previous work <cit.>, and adapt the binary and ternary methods proposed for the BERT models <cit.> to BART.
Our main results are summarized in Table <ref>. In the ternary weights and 8-bit activations setting, TBT improves the previous SoTA by up to 2.3 points in ROUGE score on XSUM, and by up to 0.5 points on CNN/DailyMail. Both improvements are significant.
Further quantizing weights to binary, while keeping activations at 8-bit, we are still able to achieve a ROUGE-L score of 33.3 on XSUM, which is 0.8 points higher than the previous ternary SoTA (DQ-BART), and comparable on CNN/DailyMail. This is, to our knowledge, the first demonstration of a binary-weight generative transformer model of competitive accuracy. Additionally, our binary-weight BART model achieves a 1.2-point higher ROUGE score on CNN/DailyMail compared with the SoTA pruning method at the same compressed model size.
Moving on to ternary and binary activations, there is no prior art, and previous implementations fail to produce meaningful results. Our method, on the other hand, achieves ROUGE-L scores of 29.1 and 38.3 on XSUM and CNN/DailyMail in the fully ternary setting, which are 6.6 and 3.8 points behind the full-precision baseline respectively. Our fully binary (weights and activations) model has a wider gap at 10.4 and 8.9 points, however still manages to produce highly non-trivial output at ROUGE-L scores of 25.3 and 33.2 points for XSUM and CNN/DailyMail.
§.§ Machine translation
We also evaluate our model on machine translation. We adopt the En-Ro benchmark from the WMT'16 shared task <cit.> to be compatible with previous work. Our base model is an mBART-large model <cit.>, a 680 million parameter multi-lingual encoder-decoder transformer pre-trained on 25 languages.
Table <ref> shows our results. In the ternary weight setting with 8-bit activations, we improve the previous SoTA by 1.2 points, achieving 24.63 BLEU. Remarkably our binary weight model also outperforms the previous ternary weight SoTA by almost a full point. It scores 24.3 BLEU – only 1.5 points behind a full mBART model while being 16× smaller.
In the fully ternary and binary settings, where previous methods failed to converge, models are able to reach practical levels of performance, with ternary mBART achieving 21.7 BLEU, and binary mBART at 17.59.
§.§ Ablations
As stated earlier, our main proposed modeling improvement is a combination of two methods: statistics-based quantization for the weights, and learning-based quantization for the activations. We ablate the contribution of these methods and present the results in Table <ref>.
The results clearly show that while each method can give moderate gains by itself over the baseline, these improvements are not sufficient on their own to produce meaningful results. None of the ablated models achieves an R2 score above 1.5. Only the combination of the two stabilizes training and results in good convergence for fully ternary and binary models.
§.§ Sequence length analysis
In language generation tasks, the error-compounding issue in the recursive decoder generation process greatly amplifies the quantization error and can even lead to divergent results, and is thus a harsh test of the robustness of a quantization method. The average generated sequence length indicates whether the quantized model can overcome the compounding error and generate text of reasonable length.
In Table <ref> we compare the generated sequence length between the proposed method and the baseline method (i.e., TWN <cit.> for ternary, BWN <cit.> for binary). Our method successfully produces summarizations of comparable length to the full-precision model on the XSUM benchmark, even when both weights and activations are binarized.
Compared to the XSUM dataset, for which documents are summarized in a single sentence, CNN/DailyMail is more challenging because it allows longer summaries. We can clearly see that the text generated with our 8-bit activation models maintains nearly the same average length as the full-precision BART model, while the binary and ternary activation models deviate moderately. In contrast, the baseline method is only able to produce reasonable summaries with 2-bit weights and 8-bit activations and fails at lower bit-widths, showing the difficult nature of language generation tasks.
§.§ Visualization
To further understand the effectiveness of the proposed method, we visualize weight and activation histograms in the BART model ternarized with the baseline method and the proposed method in Fig. <ref>.
Both the baseline method and our method use per-row weight ternarization, and thus a weight tensor has as many scaling factors as rows. As we can see in Fig. <ref> (b) and (g), the proposed method allows the weights to be more evenly distributed over the three ternarization levels, which allows higher information entropy in the quantized weights, as discussed in Sec. <ref>. Additionally, we calculate the quantized weight distribution entropy (i.e., Eq. <ref>) in the 96 fully-connected layers of the BART-base model and find that the proposed method achieves consistently higher entropy in the quantized weights than the baseline method in all layers. Further, an interesting phenomenon visible in Fig. <ref> (a) and (e) is that the ternary weights of the baseline model are very close to a Gaussian distribution, whereas weights ternarized with TBT capture a more sophisticated distribution. This implies that the proposed method helps the weights learn more informative patterns and thus better satisfy the high demands of language generation tasks.
For activation quantization, it is evident that the attention layer and the SoftMax output only contain the positive activations (𝐗_𝐑∈ℝ_+). If simply ternarized to {-α, 0, α}, the ternary activations will waste one representative level (Fig. <ref>(d)) and therefore lead to lower accuracy. Instead, the proposed method uses a two-set ternarization method that ternarizes the non-negative activation layer (𝐗_𝐑∈ℝ_+) to {0, α, 2α}, and learns the scaling factor α to better fit the underlying real-valued distribution. This ternarization method greatly reduces information loss and enhances the final accuracy.
§ RELATED WORK
Quantization has long been studied to make neural networks more efficient (see <cit.> for a survey). Due to the popularity of BERT, numerous works have studied quantization for transformer models, starting with 8-bit quantization <cit.>, and progressing to 4-bit <cit.>, ternary <cit.> and binary <cit.>. All of these works have focused on the encoder-only setting.
In the generative setting, <cit.> demonstrate quantized models for machine translation, and <cit.> for language modeling, though only for moderate quantization levels (4-8 bits). Most recently, <cit.> and <cit.> pushed weight quantization down to 2 bits (with 8-bit activation quantization) and evaluated on language modeling and summarization. However, our method outperforms these works substantially, while also demonstrating accurate generative transformers with both weights and activations quantized to 2-bit and even 1-bit for the first time.
§ CONCLUSION
We have demonstrated high-accuracy ternary and binary natural language generation models based on a pre-trained transformer encoder-decoder backbone. Quantizing both the weights and the activations of the network allows these models to run on special-purpose hardware using binary and ternary arithmetic, which does not require multiplication modules. Therefore, our results promise multiple orders of magnitude gains in efficiency when running these models and can drastically expand their use cases beyond high-end GPU servers. We are especially excited about the implications of our results for larger text generation models such as GPT-3 <cit.>. These models have demonstrated impressive capabilities while also presenting enormous scaling and computational challenges. Low-bit quantization is a promising approach to mitigate some of these issues. Whether our approach will scale to these models is an open problem and an exciting future research direction.
§ LIMITATIONS
We conduct experiments on public datasets of finite sentence length, while generalizability to extremely long sequences or even streaming data has not been verified. Furthermore, the generalizability of the proposed quantization method to other tasks, including computer vision or speech recognition, remains to be tested. In addition, binarization and ternarization require bit-packing to have actual memory savings and dedicated hardware support for real-time acceleration, which is more of a hardware implementation aspect and not studied in this paper.
§ ETHICS STATEMENT
We affirm that we contribute to society, avoid harm, and are honest and trustworthy. We respect previous work and appropriately cite the methods and datasets we are using. All data we use is public and no private data is involved. There is some potential risk if the translation technique is maliciously used by a third party and thus we are committed to maintaining the compression techniques we have developed and the general summarization/machine translation techniques used correctly without incurring any form of discrimination.
|
http://arxiv.org/abs/2306.11814v1
|
20230620181133
|
The Ages and Metallicities of the Globular Clusters in the Sparkler
|
[
"Angela Adamo",
"Christopher Usher",
"Joel Pfeffer",
"Adélaïde Claeyssens"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
JWST observations of the strongly lensed galaxy the Sparkler have revealed a population of gravitationally bound globular cluster (GC) candidates. Different analyses have resulted in broadly similar ages but significantly different metallicities, calling into question the assembly history that led to the formation of such a population. In this letter, we re-analyse the two sets of photometry available in the literature with the code mcmame, which is especially tailored to fit the physical properties of GCs. We find that the ages and metallicities from both datasets are consistent within 1σ uncertainties. A significant group of GCs is consistent with being old and metal poor ([Fe/H] ∼ -1.7). For this group, the ages do not converge; hence, we conclude that they are definitively older than 1 Gyr and can be as old as the age of the Universe. The remaining GCs have younger ages and a spread in metallicity. The distribution of ages and metallicities of GCs in the Sparkler is consistent with those observed in Local Group galaxies at similar lookback times. Comparing with predictions from the E-MOSAICS simulations, we confirm that the Sparkler GC population traces the self-enrichment history of a galaxy that might become a system of a few times 10^9 M_⊙ by redshift z = 0.
galaxies: star clusters – galaxies: high redshift – galaxies: globular clusters
§ INTRODUCTION
The first JWST observations of a cosmological lensed field <cit.> have enabled the detection of globular cluster (GC) candidates in a galaxy, the Sparkler, close to the peak of the cosmic formation history, e.g., redshift z = 1.38 <cit.>.
Sizes and intrinsic physical properties are consistent with these stellar systems to be gravitationally bound <cit.>. GCs have long been considered remnants of the past assembly history of their host galaxies <cit.>. The combination of JWST and magnification by gravitational telescopes has been reported as a powerful tool to enable the detection of GCs at high redshift <cit.>. Indeed, initial JWST studies of lensed galaxies have reported potentially gravitational bound young (∼ 10 Myr) proto-GCs up to z ∼ 6, i.e., at the edge of reionisation <cit.>. Simulations predict the formation of proto-GCs beginning in the reionisation era <cit.>, although their survival rates depend on tidal disruption <cit.>.
The discovery of the GC population in the Sparkler has raised great interest in the community because of the reported redshift formation of these systems (z∼9) as well the implications for the assembly history of its host galaxy. Two independent analyses have been conducted so far.
In the first study, M22 extracted the integrated spectral energy distribution (SED) of the 9 clusters surrounding the system with fixed-aperture photometry. They performed a nonparametric star formation history (SFH) SED-fitting analysis. Their recovered solutions suggest that 5 of the 9 objects have ages of ∼ 3.9-4.1 Gyr at redshift z = 1.38, consistent with a redshift of formation between 7 and 11. Surprisingly, the metallicities of these GCs are all significantly high, with abundances between 20 and 75% Solar (-0.7 ≲ [Fe/H] ≲ -0.1). The remaining 4 systems have ages below 300 Myr and display a similar metallicity range.
In the second analysis, C23 use a Gaussian fitting approach to simultaneously determine the size and fluxes of 10 clusters surrounding the Sparkler. The SED fitting analysis has been performed using single stellar population models <cit.> with 4 different metallicity steps (Z=0.0004 to 0.02, the latter considered Solar). Two of the 8 GC candidates (in common between the two works) have ages of 4 Gyr (thus, in agreement with the M22' analysis) but with significantly lower metallicity ([Fe/H] ∼ -1.7). The remaining systems have ages between 0.1 and 1 Gyr and significant spread in metallicity (-1.7 ≤ [Fe/H] ≤ -0.4). The left panel of Figure <ref> illustrates the age and metallicities of the Sparkler clusters as derived by M22 and C23 and their degree of disagreement.
By exclusively using the M22 results, <cit.> argue that the Sparkler is the progenitor of a Milky Way (MW) like galaxy but with significant differences. As visible in the left panel of Figure <ref>, the Sparkler would be missing the very old and metal-poor GCs typically observed in Local Group galaxies. The lack of the latter population is in disagreement with models of galaxy assembly that take into account self-enrichment <cit.>, and would point toward an unusually fast enrichment and rapid assembly for the Sparkler.
In this letter, we revisit the <cit.> interpretations of the Sparkler GC population and its galaxy assembly history using a different approach. We re-analyse the GC SEDs independently published by M22 and C23 using the latest state-of-the-art single stellar population libraries widely used to study the integrated light of GCs in the local Universe. Using posterior distributions, we derive constraints on the age, metallicity, extinction, and mass of the GCs in the two datasets. We establish the level of agreement between the two sets of photometric analyses and we compare the recovered ages and metallicities of the Sparkler with the age-metallicity relation (AMR) of GC populations observed in the Local Group (MW, LMC, SMC, Fornax), as well as predicted from cosmological simulations that analytically develop the formation and evolution of star clusters <cit.>. We assume the <cit.> ΛCDM cosmology parameters.
§ ANALYSIS
We first use the published JWST NIRCam photometry of the GCs surrounding the Sparkler in the bands F090W, F150W, F200W, F277W, F356W and F444W by C23.
The Monte Carlo Markov Chain code mcmame <cit.> is used to sample the age, metallicity, mass and reddening posteriors of a grid of stellar population models subject to the constraints provided by the JWST NIRCam photometry.
We use fsps <cit.> to calculate a model grid at z = 1.378 with the MIST isochrones <cit.>, the MILES spectral library <cit.> and the <cit.> extinction curve.
We run mcmame with a uniform prior on metallicity in the range -2.5 < [Fe/H] < 0.5, on age in the range between 1 Myr and 5 Gyr (the age of the Universe at that redshift), on positive extinction (A_V > 0) and on log mass.
Before fitting, we correct the photometry for the Milky Way's foreground extinction reported in table 1 of C23.
We plot the recovered median and 68 % intervals of the age and metallicity posteriors in the right-hand panel of Figure <ref> along with those of the GCs of the MW and its satellite galaxies. For the oldest clusters, where we do not get a convergence in the age estimation (see Figure <ref>), we plot the median as a lower limit, to reflect that the ages of these systems remain unconstrained. Our ages and metallicities are qualitatively in agreement with those found by C23 (left panel). We recover a group of GCs with older ages and low metallicities and a group of GCs with young ages and a range of metallicities. These recovered values are also in agreement with the observed positions of the clusters in the color-color diagram shown in Figure 13 of C23.
Secondly, we re-analysed the M22 photometry.
We plot a comparison of the medians of the age and metallicity posteriors we obtained from the C23 and the M22 photometry in Figure <ref>.
In Table <ref>, we report the 16, 50, 84 % values for each fitted parameter from the two sets of photometry. Overall, by applying the same methodology to fit the observed GC SEDs, we derive solutions that are consistent within 1σ of each other in the majority of cases. We conclude that the two photometry methods are consistent.
We speculate that the differences between the physical properties derived for the GC population of the Sparkler by M22 and C23 arise both from the different methods used to analyse the SEDs and from underestimated uncertainties on the resulting physical parameters. The latter could be largely reduced with more precise photometry and deeper data covering the 0.3 to 2 μm restframe.
As for the SED analysis, we notice that both our reanalysis and that of M22 utilise the same stellar population models (fsps, ) although M22 use models calculated with the BaSeL stellar library <cit.> and the Padova isochrones <cit.> rather than the MILES library <cit.> and MIST isochones <cit.> used in this work. Differences between the two stellar libraries might be the source of the disagreement, although we notice that Yggdrasil models used by C23 are based on Padova stellar libraries <cit.>.
In Figure <ref> we identify the position of each GC in the 3-color image of the Sparkler, applying the naming convention of C23.
For each cluster, we show the corner plots of the recovered age and metallicity for both C23 and M22 reanalyses when available, matching their respective identities.
The corner plots show significant degeneracies and non-Gaussian posteriors in both datasets. The metallicities seem to be constrained in the majority of the cases. On the other hand, the ages do not converge for the older GCs, where we can only conclude that they are definitively older than 1 Gyr and can be as old as the age of the Universe. For younger clusters we see that the age constraints are significantly tighter, while the metallicity remains in some cases less constrained. The large uncertainties associated with the data in the right side of Figure <ref> simply reflect the level of convergence in the determination of the two quantities in both datasets.
§ DISCUSSION & CONCLUSION
The Sparkler galaxy, magnified by the gravitational potential of the galaxy cluster SMACS0723, is a rather low-mass galaxy at redshift z = 1.38. M22 report a total stellar mass for the host of log(M_*/M_⊙) ∼ 9.7, which, corrected for the lensing magnification, implies an intrinsic mass between 5×10^8 and 1×10^9 M_⊙ (see C23 for a discussion of the different lensing model predictions).
Taking advantage of the published photometry for the GC candidates by M22 and C23,
we reanalyse the two datasets within the same framework. In both datasets, we find a group of old and metal-poor GCs. Their ages do not converge, leading us to conclude that they are definitively older than 1 Gyr and can be as old as the age of the Universe. The other group coincides with younger clusters. For the latter, the age constraints are significantly tighter, while the uncertainties in metallicity remain large. A recently accepted JWST cycle 2 program (GO 2969) will obtain NIRSpec spectroscopy of the 0.4 to 2 μm restframe, enabling us to narrow down the ages and metallicities of the cluster population.
In general, we conclude that the Sparkler harbours metal-poor GCs that have ages overlapping with the metal-poor sequence of Local Group galaxies. The younger GCs have formed with higher metallicities. The spread in metallicities in the younger GCs could support the scenario of a rapid accretion/merger with a satellite galaxy. The positions of the GCs around the Sparkler clearly suggest that the galaxy has undergone rapid dynamical evolution which has caused the GCs to remain located around the galaxy in a similar fashion to GCs observed in galaxies in the local Universe. We cannot conclude whether the metal-poor(rich) GCs have been formed in-situ or ex-situ, but the age of the younger GCs suggest that the interaction that led to the ejection of the younger population has happened less than 1 Gyr ago, i.e., at the peak of cosmic noon.
In the left panel of Figure <ref>, we compare the derived ages and metallicities to the GC age-metallicity relations of different mass galaxies from the E-MOSAICS suite of simulations <cit.>.
E-MOSAICS combines the EAGLE <cit.> hydrodynamic model of cosmological galaxy formation with subgrid models for the formation and evolution of star clusters <cit.>.
The GC age-metallicity relations for MW, LMC and SMC-mass galaxies were presented in <cit.>, where the relation for MW-mass galaxies are derived from 25 zoom-in simulations <cit.> and the relations for LMC and SMC-mass galaxies are derived from a 34.4^3 comoving-Mpc^3 volume <cit.>.
Overall, we see that the Sparkler GCs approximately follow the age–metallicity relation expected for GCs forming in host galaxies that grow in the E-MOSAICS simulations to become analogs of galaxies observed in the Local Group. On the right panel, we plot the median growth predictions for a range of different galaxy stellar masses at redshift z = 0 from the EAGLE Recal-L25N752 simulation <cit.>. The position of the Sparkler is highlighted in grey, suggesting that the Sparkler might possibly become a galaxy of a few times 10^9 M_⊙ at z = 0, similar to M33 <cit.>, i.e., between the LMC <cit.> and the MW <cit.> galaxies. These conclusions differ from those reached by <cit.>, who suggested, using only the M22 results, that the Sparkler could be a MW progenitor without the metal-poor old GC population.
GC populations have long been established as remnants of past events in the assembly history of galaxies <cit.>. In particular, the GC age–metallicity relation was early on recognised as a powerful tool to reconstruct the assembly history of our own galaxy <cit.>, as well as of galaxies in the Local Group <cit.> and beyond <cit.>. Decoding an observed GC AMR, however, is not straightforward, because it requires tracing back GCs formed in-situ vs. those accreted from other satellite galaxies, i.e., formed ex-situ. Cosmological simulations of Milky Way-like galaxies (as done in E-MOSAICS) which analytically develop the formation and evolution of star clusters have shown the AMR to vary with galaxy assembly history at fixed stellar mass <cit.>. Both observations and simulations agree that the Milky Way has a steep AMR for the in-situ GCs, suggesting a rapid assembly and metal enrichment of the host, while younger metal-poor GC sequences are the result of accretions from lower-mass satellites <cit.>. On the other hand, GCs forming in galaxies like the Magellanic Clouds result in shallower AMRs <cit.>. In the case of the Sparkler, the comparison with the E-MOSAICS-derived GC AMRs (left plot) and the average galaxy growth (right plot of Figure <ref>) suggests that the Sparkler GC AMR is shallower than the one expected and observed for a Milky Way-type galaxy. The large metallicity spread at younger ages could be evidence of a recent merger, which can also explain why the GCs are located around the main body of the galaxy, in a configuration that is also observed in redshift z = 0 galaxies.
To conclude, the combination of gravitational lensing and JWST sensitivity and resolution has enabled the unprecedented detection of GCs surrounding a star-forming galaxy at redshift z ≈ 1.4. We show that the Sparkler GCs fit well within the assembly history of galaxies in our Local Group, both as derived from numerical simulations and as extrapolated from the observed GC populations. However, this result cannot be generally applied: the Sparkler is a single galaxy with a unique assembly history. With the increasing effort to survey more lensed regions with JWST we expect a significant increase in the number of detected proto-GCs as well as evolved GC populations. As GCs trace the growth of their host galaxies, we expect to reveal a broader spectrum of galaxy assembly histories than is accessible in the Local Group.
§ ACKNOWLEDGEMENTS
The authors thank the referee for a constructive report. AA, AC and CU acknowledge support from the Swedish Research Council, Vetenskapsrådet (2021-05559, 2016-05199).
JP is supported by the Australian government through the Australian Research Council's Discovery Projects funding scheme (DP220101863). The authors thank the E-MOSAICS team for kindly sharing access to their simulations.
This work made use of numpy <cit.>, scipy <cit.>, matplotlib <cit.>, corner <cit.> and astropy <cit.>.
§ DATA AVAILABILITY
Data used in the analysis are publicly available.
|
http://arxiv.org/abs/2306.02987v1
|
20230605160022
|
Frequency Regulation with Storage: On Losses and Profits
|
[
"Dirk Lauinger",
"François Vuille",
"Daniel Kuhn"
] |
math.OC
|
[
"math.OC",
"cs.SY",
"econ.GN",
"eess.SY",
"q-fin.EC"
] |
Frequency Regulation with Storage: On Losses and Profits
Dirk Lauinger, François Vuille, Daniel Kuhn
Low-carbon societies will need to store vast amounts of electricity to balance intermittent generation from wind and solar energy, for example, through frequency regulation. Here, we derive an analytical solution to the decision-making problem of storage operators who sell frequency regulation power to grid operators and trade electricity on day-ahead markets. Mathematically, we treat future frequency deviation trajectories as functional uncertainties in a receding horizon robust optimization problem. We constrain the expected terminal state-of-charge to be equal to some target to allow storage operators to make good decisions not only for the present but also the future. Thanks to this constraint, the amount of electricity traded on day-ahead markets is an implicit function of the regulation power sold to grid operators. The implicit function quantifies the amount of power that needs to be purchased to cover the expected energy loss that results from providing frequency regulation. We show how the marginal cost associated with the expected energy loss decreases with roundtrip efficiency and increases with frequency deviation dispersion. We find that the profits from frequency regulation over the lifetime of energy-constrained storage devices are roughly inversely proportional to the length of time for which regulation power must be committed.
§ INTRODUCTION
In November 1896 when America's first large-scale power plant at Niagara Falls began transmitting electricity to the city of Buffalo about 20 miles away, the electricity came in the form of alternating current. The reason was that “unlike direct current, alternating current [could] travel” <cit.>. Most electricity grids still rely on alternating current today. The frequency of the alternating current is an indicator of the mismatch between electricity demand and supply.
The purpose of frequency regulation is to insure electricity grids against unforeseen second-to-second supply and demand mismatches. Traditionally, this insurance has been provided by centralized power plants, often fired by fossil fuels. As wind and solar power plants replace fossil-fuel-fired power plants, power generation becomes more weather-dependent,
which may increase the demand for and decrease the supply of frequency regulation. Electricity storage could help to fill the gap. Lithium-ion batteries, in particular, are considered a promising source of frequency regulation, thanks to their fast dynamics and rapid cost decline <cit.>.
Several studies <cit.> have claimed that it is or will be profitable to invest in lithium-ion batteries for frequency regulation in the near future. Such studies focused on battery costs and frequency regulation prices but they often did not explicitly model the uncertain demand for frequency regulation. In our previous work <cit.>, we accounted for EU delivery guarantees on frequency regulation <cit.> and found that they significantly reduce the expected profits from frequency regulation. We also showed that the profits depend on charging and discharging efficiencies, and on the dispersion of frequency deviations.
In this work, we focus on two research questions:
* How do charging and discharging losses influence frequency regulation profits?
* What regulatory changes would make frequency regulation with storage more profitable?
To answer these questions, we derive an analytical solution to a simplified decision-making problem of a storage operator selling frequency regulation in the continental European electricity grid and trading electricity on day-ahead wholesale or retail markets. We distill the following insights:
* Although frequency deviations vanish on average, the average power flow entering a storage device is nonincreasing in the amount of regulation power offered. The lower the roundtrip efficiency of the storage device and the higher the dispersion of the frequency deviations, the higher the dissipative losses incurred by the provision of regulation power. In a numerical case-study, we find that the losses amount to between 0.6% and 4.3% of the regulation power offered to grid operators. To our best knowledge, we are the first to quantify the expected dissipative losses as a function of roundtrip efficiency and frequency deviation dispersion.
* For storage devices that are constrained by their storage capacity and their initial state-of-charge, rather than their charging and discharging capacities, the profits from frequency regulation over the lifetime of the devices are inversely proportional to the length of time for which frequency regulation must be committed.
* EU regulators can make frequency regulation with storage more profitable if they (i) make it easier for small-scale storage operators to access wholesale electricity markets, or (ii) establish an intraday market for frequency regulation.
While the analytical solution provides general insights in storage operation, it requires several simplifying assumptions and may thus be of limited practical value for the operation of any specific storage device. The general insights can, however, be used as a base line against which storage operators can compare the results of more detailed operational models. We describe the simplifying assumptions below and explain how they can be addressed in operational models.
First, we assume that charging and discharging losses are constant. In reality, the losses depend on the instantaneous state-of-charge of the storage device, the charging and discharging power, and temperature. <cit.> formulated a dynamic program to account for decision-dependent losses and found them to reduce regulation profits by 10% to 20% for lithium-ion batteries.
Second, we assume that storage operators only trade on day-ahead markets and that they must commit to constant market bids throughout the entire day. In practice, they could participate on intraday markets for the wholesale of electricity. <cit.> and <cit.> developed multi-stage stochastic programs for participation in day-ahead and intraday markets. They found higher expected profits on intraday than day-ahead markets. Löhndorf and Wozabal pointed out that these profits can only be fully realized by assets with a low price impact.
Third, we assume no market power. This is reasonable for small storage assets, such as stationary batteries, whereas large storage assets, such as pooled pumped-hydro power plants may exert some market power. <cit.> formulated a Stackelberg game to analyze the market power of monopolistic storage operators under various EU market regulations. <cit.> developed a bi-level optimization model to assess the market power of electricity producers. They found that electricity producers can reduce storage investments if they exert market power.
Finally, we do not consider any battery degradation. <cit.> have shown that battery degradation can be negligible for electric vehicles providing frequency regulation depending on the operating conditions. Other storage technologies may experience different degradation dynamics. <cit.> formulated a stochastic mixed-integer linear program that accounts for battery degradation. They found that storage operators will want to protect themselves against deep battery discharges, which are particularly harmful to battery longevity, by limiting the amount of regulation power they sell to grid operators.
In terms of methodological development, we use a constraint on the expected terminal state-of-charge, rather than a value function as in <cit.>, to steer the storage operator toward decisions that work well for both the present and the future.
Expected value constraints have previously been investigated by <cit.> to establish
stochastic dominance of the second order,
by <cit.> in hypothesis testing, and by <cit.> as a generalization of mean-variance models.
Here, we will use them for analytical tractability. We show in Section <ref> that the amount of power bought on electricity markets is an implicit function of the amount of regulation power sold to grid operators under the expected value constraint. This leads to a one-dimensional decision problem, which can be solved highly efficiently by bisection for general frequency deviation distributions (see Section <ref>) and analytically for two- and three-point distributions (see Section <ref>).
To ease readability, we refer to generic electricity storage devices as batteries and relegate all essential proofs to Appendix <ref> and all other proofs to the supplementary material (SM). The SM also includes additional description and analysis of the case study presented in Section <ref>.
Notation. We designate all random variables by tilde signs. Their realizations are denoted by the same symbols without tilde signs. For any z ∈ℝ, we set [z]^+ = max{z,0} and [z]^- = max{-z,0}. For any closed intervals T, U⊆ℝ, we define L(T, U) as the space of all Riemann integrable functions f:T→U, and we denote the intersection of a set B⊆L(T, ℝ) with L(T, ℝ_+) as B^+. For any signed function δ∈L(T, ℝ), we denote by |δ| the absolute value function with |δ|(t) = |δ(t)| for every t ∈T.
We use g'_- and g'_+ to denote the left and right derivatives of a univariate proper convex function g:ℝ→ (-∞, +∞], respectively.
§ PROBLEM DESCRIPTION
We study the decision problem of a battery operator who can sell frequency regulation power x^r ∈ℝ_+ to a grid operator and buy electric power x^b ∈ℝ on a wholesale or retail market. We allow x^b to be negative, in which case the amount | x^b | of power is sold. Both x^r and x^b are chosen ex ante and kept constant over a prescribed planning horizon 𝒯 = [0, T] (i.e., the next day).
At any time t ∈, the battery operator measures the normalized deviation δ̃(t) ∈ [-1,1] of the uncertain instantaneous grid frequency ν̃(t) from the nominal frequency ν_0 and must consume the amount x^b + δ̃(t) x^r of power from the grid. Mathematically, the normalized frequency deviation at time t is given by the clipped ramp function
δ̃(t) =
+1 if ν̃(t) > ν_0 + Δν,
(ν̃(t) - ν_0)/Δν if ν_0 - Δν ≤ ν̃(t) ≤ ν_0 + Δν,
-1 if ν̃(t) < ν_0 - Δν,
where Δν is the maximum frequency deviation against which the grid operator seeks protection.
The remuneration for offering frequency regulation is twofold: the power x^r set aside for frequency regulation is compensated at the availability price p̃^a(t), and the regulation power δ̃(t) x^r actually delivered at time t is compensated at the delivery price p̃^d(t).
The power x^b bought on the market is priced at p̃^b(t). The expected cost over the planning horizon thus amounts to
∫_𝒯 ( p̃^b(t) x^b - ( p̃^a(t) - δ̃(t) p̃^d(t) ) x^r ) dt.
The net power flow leaving the grid at time t ∈ is given by x^b + δ(t) x^r. In the following, we find it useful to distinguish the charging power y^+(x^b, x^r, δ(t)) = [x^b + δ(t) x^r]^+ from the discharging power y^-(x^b, x^r, δ(t)) = [x^b + δ(t) x^r]^-. We assume that the charging power is bounded above by the charging capacity y̅^+ ∈ℝ_+, and the discharging power is bounded above by the discharging capacity y̅^- ∈ℝ_+. When the battery is charging (y^+ > 0), then only a fraction η^+ of the charging power enters the battery, where η^+ ∈ (0,1] represents the charging efficiency. The rest is dissipated during the charging process. Conversely, when the battery is discharging (y^- > 0), then a multiple 1/η^- of the discharging power leaves the battery, where η^- ∈ (0,1] represents the discharging efficiency. The battery state-of-charge at any time t∈ can thus be expressed as
y(x^b, x^r, δ, y_0, t) = y_0 + ∫_0^t η^+ y^+(x^b, x^r, δ(t')) - 1/η^- y^-(x^b, x^r, δ(t')) dt',
where y_0 denotes the initial state-of-charge, and δ∈L(T, [-1,1]) is a given frequency deviation trajectory. Throughout the planning horizon, the battery state-of-charge must remain between 0 and the battery capacity y̅ > 0. We assume from now on that 0 ≤ y_0 ≤y̅. The following proposition establishes fundamental qualitative properties of the state-of-charge function y.
All else being equal, the battery state-of-charge y(x^b,x^r,δ,y_0,t) is concave and strictly increasing in x^b, concave in x^r, concave nondecreasing in δ, and affine nondecreasing in y_0.
Proposition <ref> slightly strengthens Proposition 1 by <cit.>.
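To make the state-of-charge dynamics concrete, the following minimal Python sketch evaluates the trajectory y(x^b, x^r, δ, y_0, ·) for a piecewise-constant frequency deviation signal on a discretized horizon. All parameter values and variable names are illustrative placeholders, not data from the case study.

import numpy as np

def state_of_charge(x_b, x_r, delta, y0, eta_p, eta_m, dt):
    # State-of-charge trajectory for a piecewise-constant deviation signal
    # sampled every dt hours; mirrors the continuous-time dynamics above.
    net = x_b + delta * x_r                    # power drawn from the grid
    flow = eta_p * np.maximum(net, 0.0) + np.minimum(net, 0.0) / eta_m
    return y0 + np.cumsum(flow * dt)

rng = np.random.default_rng(0)
delta = np.clip(rng.normal(0.0, 0.08, size=96), -1.0, 1.0)   # 24 h at 15 min
soc = state_of_charge(x_b=0.0, x_r=10.0, delta=delta, y0=50.0,
                      eta_p=0.92, eta_m=0.92, dt=0.25)
print(soc[-1])                                 # terminal state-of-charge (kWh)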
The battery may be used beyond the immediate planning horizon for selling more regulation power, for exchanging power on other electricity markets, or for supplying power to electric devices. The extent to which this is possible depends on the state-of-charge at the end of the immediate planning horizon. The value of any particular terminal state-of-charge y(x^b, x^r, δ, y_0, T) could be captured by a reward-to-go function as in dynamic programming <cit.>. This allows the battery operator to trade off present and future costs when selecting x^b and x^r. Another approach is to constrain the terminal state-of-charge to be close to some target y^⋆ that will guarantee satisfactory future performance.
Both terminal costs and terminal constraints are widely studied in model predictive control <cit.>.
The terminal state-of-charge is uncertain at time 0 when the battery operator selects x^b and x^r because it depends on the frequency deviation trajectory δ during the planning horizon . In fact, the battery operator can only be sure to meet a fixed target y^⋆ if she sells no regulation power (x^r = 0), which shields her from the uncertainty of the frequency deviations. If she sells regulation power (x^r > 0), however, then all she can hope for is to reach a terminal state-of-charge that is close to the target y^⋆ on average. In the following, we will thus require that the terminal state-of-charge be equal to y^⋆ in expectation.
We emphasize that this constraint is not dictated by physics but is simply a means to contain future operating costs, which are not modeled explicitly.
Throughout the planning horizon, the battery operator must be able to honor all market commitments for all reasonably likely frequency deviation trajectories δ.
<cit.> show that extreme frequency deviation trajectories are very uncommon. It would thus appear overly conservative to impose the charging, discharging, and battery state-of-charge constraints robustly for all possible frequency deviation trajectories.
Inspired by applicable regulations by the <cit.>, we assume instead that the battery operator must satisfy the constraints only for the frequency deviation trajectories in the uncertainty set
𝒟 = {δ∈ L(𝒯, [-1, 1]): ∫_𝒯 |δ(t)| dt ≤γ}
parametrized by the uncertainty budget γ∈ (0,T]. Note that γ represents the maximum amount of time for which a scenario δ∈𝒟 may adopt an extreme value δ(t) ∈{-1,1}. Note also that 𝒟 can be seen as an extension of the budget uncertainty sets introduced by <cit.> to functional uncertainties. The following lemma establishes symmetry properties of 𝒟 that will allow us to reduce the decision problem of the battery operator to a deterministic optimization problem.
We have δ∈𝒟 if and only if |δ|∈𝒟^+.
In summary, the battery operator's decision problem is to select x^b and x^r so as to minimize expected costs while meeting the battery state-of-charge target y^⋆ in expectation and ensuring that the charger, discharger, and battery capacities are respected at all times and under all frequency deviation trajectories δ∈𝒟. This gives rise to the following robust optimization problem.
min_{x^b ∈ℝ, x^r ∈ℝ_+}  ∫_𝒯 p̃^b(t) x^b - ( p̃^a(t) - δ̃(t) p̃^d(t) ) x^r dt
s.t.  y^+(x^b, x^r, δ(t)) ≤ y̅^+   ∀δ∈𝒟, ∀ t ∈𝒯
      y^-(x^b, x^r, δ(t)) ≤ y̅^-   ∀δ∈𝒟, ∀ t ∈𝒯
      y(x^b, x^r, δ, y_0, t) ≤ y̅   ∀δ∈𝒟, ∀ t ∈𝒯
      y(x^b, x^r, δ, y_0, t) ≥ 0   ∀δ∈𝒟, ∀ t ∈𝒯
      𝔼[ y(x^b, x^r, δ̃, y_0, T) ] = y^⋆
The battery operator only needs to insure frequency deviation trajectories in . For trajectories outside of , the battery operator has to deliver regulation power up to the smallest time instant t_γ such that ∫_0^t_γ|δ(t) | dt = γ. At all time instants t > t_γ, the battery operator does not need to deliver any regulation power and may assume that δ(t) = 0.
We thus assume that ℙ[δ̃∈𝒟] = 1 throughout the rest of the paper.
For later use, we note that any δ∈𝒟 has a mean absolute deviation (1/T)∫_𝒯 |δ(t)| dt no greater than γ/T.
For a fixed frequency deviation trajectory δ, the textbook approach to solving the deterministic counterpart of problem (<ref>) is to first discretize the planning horizon into N periods and then introduce N binary variables expressing whether the battery is charging or discharging during the respective periods <cit.>. This results in a large-scale mixed-integer linear program. In the remainder, we will show that the robust optimization problem (<ref>) is much easier to solve than its deterministic counterpart. In fact, we will see that for realistic values of the roundtrip efficiency η^+η^- the search space can be reduced to merely three candidate solutions. All candidate solutions can be computed highly efficiently by bisection. For specific distributions of the frequency deviations, problem (<ref>) can even be solved in closed form. This implies that robustification reduces complexity. In the next section, we first show that the robust optimization problem (<ref>) is equivalent to a one-dimensional deterministic optimization problem.
§ REDUCTION TO A DETERMINISTIC OPTIMIZATION PROBLEM
In order to simplify problem (<ref>), we first rewrite its objective function as an explicit linear function of the decision variables. Next, we show that the robust constraints are equivalent to deterministic linear constraints. Finally, we exploit the terminal state-of-charge constraint to express x^b as an implicit function of x^r, which allows us to reformulate problem (<ref>) only in terms of x^r.
Note first that the objective function of problem (<ref>) can be expressed as T(c^b x^b - c^r x^r), where c^b = (1/T)∫_𝒯 p̃^b(t) dt denotes the expected average market price of electricity, and c^r = (1/T)∫_𝒯 ( p̃^a(t) - δ̃(t)p̃^d(t) ) dt denotes the expected average price of regulation power. In the following, we will assume without much loss of generality that c^b > 0 and c^r > 0.
We now show that the robust constraints are equivalent to deterministic linear constraints. This may be surprising because the state-of-charge is concave in the decision variables, implying that the upper bounds on the state-of-charge represent nonconvex constraints. Similarly, as the state-of-charge is concave in δ, finding the worst-case frequency deviation trajectories for the lower bounds on the state-of-charge amounts to solving a nonconvex optimization problem. In addition to these complications, the bounds on the state-of-charge also need to hold for all time instants in the planning horizon.
In general, constraints with such properties are severely intractable.
Given that x^b and x^r do not depend on time, it is tempting to think that δ can be restricted to a constant function of time without loss of generality. This restriction of the uncertainty set, however, relaxes the feasible set, and one can show that the relaxed feasible set contains decisions that are infeasible in practice. Indeed, averaging the real frequency deviation signals underestimates the maximum state-of-charge and overestimates the minimum state-of-charge <cit.>.
Although the robust constraints of problem (<ref>) seem intractable, we can reformulate them as deterministic linear constraints. This is possible because the worst-case frequency deviation trajectories and the worst-case time instants can be evaluated a priori. The following proposition summarizes our results.
[Constraint reduction]
If 0 ≤ y_0 ≤y̅, then the following equivalences hold.
(i) y^+(x^b, x^r, δ(t)) ≤ y̅^+ ∀δ∈𝒟, ∀ t ∈𝒯  ⟺  x^r + x^b ≤ y̅^+
(ii) y^-(x^b, x^r, δ(t)) ≤ y̅^- ∀δ∈𝒟, ∀ t ∈𝒯  ⟺  x^r - x^b ≤ y̅^-
(iii) y(x^b, x^r, δ, y_0, t) ≤ y̅ ∀δ∈𝒟, ∀ t ∈𝒯  ⟺  x^r + max{(T/γ) x^b, x^b} ≤ (y̅ - y_0)/(η^+ γ)
(iv) y(x^b, x^r, δ, y_0, t) ≥ 0 ∀δ∈𝒟, ∀ t ∈𝒯  ⟺  x^r - min{(T/γ) x^b, x^b} ≤ η^- y_0/γ
Proposition <ref> is inspired by Theorems 1 and 2 by <cit.>. The proof critically exploits the monotonicity properties of y established in Proposition <ref> and the symmetry of the uncertainty set established in Lemma <ref>. The proof reveals that the upper bound on the charging power and the upper bound on the state-of-charge are valid for all frequency deviations signals δ∈ and all time instants t ∈ if and only if they are valid for the time instants γ and T and for the particular frequency deviation signal δ^(+), defined as δ^(+)(t) = 1 if t ≤γ and δ^(+)(t) = 0 otherwise. Similarly, the upper bound on the discharging power and the lower bound on the state-of-charge are valid for all frequency deviations signals δ∈ and all time instants t ∈ if and only if they are valid for the time instants γ and T and for the particular frequency deviation signal δ^(-) = - δ^(+).
Intuitively, if x^b ≥ 0, then the maximum state-of-charge is achieved at time T by any nonnegative frequency deviation trajectory that exhausts the uncertainty budget, , that has a cumulative deviation of γ, such as δ^(+). If x^b < 0, then the maximum state-of-charge is achieved at time γ by the frequency deviation trajectory that exhausts the uncertainty budget as quickly as possible, , that achieves a cumulative deviation of γ as soon as possible. Since δ∈L(T,[-1,1]), the nonnegative signal that exhausts the uncertainty budget as quickly as possible is δ^(+). As δ^(+)(γ) = 1 and δ(t) ≤ 1 for all δ∈ and all t ∈, the maximum charging power is achieved at time γ by the frequency deviation trajectory δ^(+). The intuition for the lower bound on the state-of-charge and the upper bound on the discharging power is similar.
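The worst-case structure described above can also be checked numerically. The sketch below compares the maximum state-of-charge induced by δ^(+) with that induced by randomly sampled budget-feasible trajectories; it is only a sanity check under illustrative parameter values (with x^b ≥ 0), not part of the formal proof.

import numpy as np

T, gamma, dt = 24.0, 4.8, 0.1
n = int(T / dt)
x_b, x_r, y0 = 1.0, 5.0, 20.0
eta_p, eta_m = 0.9, 0.9

def max_soc(delta):
    net = x_b + delta * x_r
    flow = eta_p * np.maximum(net, 0.0) + np.minimum(net, 0.0) / eta_m
    return (y0 + np.cumsum(flow * dt)).max()

t = np.arange(1, n + 1) * dt
delta_plus = (t <= gamma).astype(float)   # exhausts the budget as fast as possible

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(2000):
    d = rng.uniform(-1.0, 1.0, n)
    used = np.abs(d).sum() * dt
    if used > gamma:                      # rescale the sample into the budget set
        d *= gamma / used
    worst = max(worst, max_soc(d))

print(max_soc(delta_plus) >= worst)       # expected to print True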
In the following, we will exploit the constraint on the expected terminal state-of-charge to express x^b as an implicit function of x^r. To this end, we first compress the stochastic process {δ̃(t)}_{t∈𝒯} to a single random variable ξ̃= δ̃(t̃), where t̃ is a random time independent of δ̃ that follows the uniform distribution on the planning horizon 𝒯. The marginal probability distribution ℙ_ξ of ξ̃ is defined through ℙ_ξ[B] = ℙ[ξ̃∈B] for every Borel set B⊆ℝ. One readily verifies that
ℙ_ξ[B] = ℙ[ξ̃∈B] = 𝔼[ ℙ[δ̃(t̃) ∈B | t̃] ] = (1/T)∫_𝒯 ℙ[δ̃(t̃) ∈B | t̃ = t] dt = (1/T)∫_𝒯 ℙ[δ̃(t) ∈B] dt
for every Borel set B⊆ℝ, where the third and the fourth equalities hold because t̃ is uniformly distributed on 𝒯 and because t̃ is independent of δ̃(t) for every t ∈𝒯, respectively.
Historical frequency deviation data suggest that the marginal distribution of ξ̃ is symmetric around zero (see Figure <ref> in SM <ref>).
From now on, we will thus make the following assumption.
[Symmetry]
We have ℙ_ξ[B] = ℙ_ξ[-B] for every Borel set B⊆ℝ.
If the stochastic process {δ̃(t)}_{t∈𝒯} is stationary, then the marginal distribution of δ̃(t) coincides with ℙ_ξ for every t ∈𝒯. Based on data from the UK and the continental European electricity grid,
<cit.> show, however, that frequency deviations may not be stationary on timescales of up to 24 hours but become stationary on longer timescales. The 24 hour threshold coincides with the typical length of planning horizons for frequency regulation. This suggests that ℙ_ξ does not change from one planning horizon to the next and can thus be estimated from historical data.
In the following, we define F: ℝ→ [0,1] as the cumulative distribution function corresponding to ℙ_ξ, and we define φ: ℝ→ℝ_+ as the antiderivative of F with φ(-1) = 0. For short, we will refer to φ as the super-cumulative distribution function. We are now ready to investigate the expected terminal state-of-charge as a function of x^b and x^r. To this end, we define η_d = 1/η^- - η^+.
[Properties of the expected terminal state-of-charge]
The expected terminal state-of-charge is continuous,
jointly concave in x^b and x^r, strictly increasing and unbounded above in x^b, and nonincreasing in x^r. In particular, it is given by
𝔼[ y(x^b, x^r, δ̃, y_0, T) ] = y_0 + T ( η^+ x^b - η_d x^r φ(-x^b/x^r) ) for all (x^b, x^r) ∈ℝ×ℝ_+.
Note that x^r φ(-x^b/x^r) represents the perspective function of φ(-x^b), which is jointly convex in x^b and x^r because φ is convex <cit.>. For x^r = 0, the perspective function is defined as lim_{x^r → 0^+} x^r φ(-x^b/x^r) and thus coincides with [x^b]^-.
We know from Proposition <ref> that the battery state-of-charge is strictly increasing in x^b, and thus it is unsurprising that its expected value is also strictly increasing in x^b. We emphasize, however, that the state-of-charge may display a complicated nonmonotonic dependence on x^r.
Nevertheless, Proposition <ref> reveals that the expected state-of-charge is nonincreasing in x^r, which means that, on average, providing frequency regulation causes energy losses and thereby discharges the battery. Even though the average frequency deviations vanish by virtue of Assumption <ref>, frequency regulation fails to be energy-neutral unless η^+ = 1 and η^- = 1. We will explain this phenomenon by reasoning about the power flow entering the battery as opposed to the power flow exiting the electricity grid. If there are no losses, then the power flow entering the battery and the power flow exiting the electricity grid coincide with x^b + δ(t) x^r. They thus follow the same probability distribution as the frequency deviations with the mean value shifted from 0 to x^b and the standard deviation scaled by x^r. Providing frequency regulation hence only increases the dispersion of the power flow entering the battery but does not affect its mean. In the general case, when η^+, η^- < 1, the power flow exiting the electricity grid follows the same probability distribution as before, but the power flow entering the battery follows a different probability distribution. In fact, charging losses compress the positive part of the original distribution, while discharging losses stretch the negative part of the original distribution. The losses thereby decrease the average power flow entering the battery. The higher the dispersion of the power flow, the more pronounced the decrease. Since the dispersion increases in x^r, the average power flow entering the battery and, by extension, the expected terminal state-of-charge of the battery decrease in x^r. Figure <ref> visualizes the distribution of the power flow entering the battery for x^b = 0 with and without losses.
The monotonicity properties of the expected terminal state-of-charge established in Proposition <ref> imply that the last constraint of problem (<ref>) determines x^b as an implicit function of x^r. Instead of reasoning about the state-of-charge of the battery directly, we will continue to reason about the power flow entering the battery, which is independent of the initial state-of-charge y_0 and of the length of the planning horizon T.
By defining the average expected charging rate and the average desired charging rate as
ẏ(x^b, x^r) = ( 𝔼[ y(x^b, x^r, δ̃, y_0, T) ] - y_0 )/T = η^+ x^b - η_d x^r φ(-x^b/x^r)
and ẏ^⋆ = (y^⋆ - y_0)/T,
respectively, the constraint [y(x^b, x^r, δ̃, y_0, T)] = y^⋆ can be reformulated equivalently as ẏ(x^b, x^r) = ẏ^⋆. Since T>0, the expected charging rate ẏ inherits the concavity and monotonicity properties of the state-of-charge established in Proposition <ref>. In particular, if x^r = 0, then ẏ(x^b, 0) = η^+ [x^b]^+ - 1/η^- [x^b]^-. Hence, ẏ(x^b,0) = ẏ^⋆ is valid if and only if x^b = 1/η^+ [ẏ^⋆]^+ - η^- [ẏ^⋆]^-, which is fully determined by the desired charging rate ẏ^⋆ and by the charging and discharging efficiencies η^+ and η^-. As x^r increases, the expected charging rate ẏ(x^b, x^r) may decrease due to increased charging and discharging losses. The battery operator, however, can compensate this decrease by increasing x^b. Since ẏ is strictly increasing, continuous, and surjective onto ℝ for any fixed x^r, there is a unique x^b that satisfies the equation ẏ(x^b, x^r) = ẏ^⋆. This means that x^b can be expressed as an implicit function of x^r. This implicit function depends on the desired charging rate ẏ^⋆, the charging and discharging efficiencies, and the mean absolute deviation of the frequency deviations. The latter is defined as Δ = 𝔼[|ξ̃|] = 2φ(0), where the second equality can be proved via integration by parts.
[Mean absolute deviation]
We have Δ≤γ/T because all relevant frequency deviation trajectories reside in 𝒟 and thus have a mean absolute deviation no greater than γ/T. Formally,
ℙ[δ̃∈𝒟] = 1 implies Δ≤γ/T.
[Implicit function]
The constraint ẏ(x^b, x^r) = ẏ^⋆ defines a unique implicit function g:_+ → such that ẏ(g(x^r), x^r) = ẏ^⋆ for all x^r ∈ℝ_+. The function g is convex, continuous, and nondecreasing
with derivative
g'(x^r) = η_d ( φ(-x^b/x^r) + (x^b/x^r) F(-x^b/x^r) ) / ( η^+ + η_d F(-x^b/x^r) )
(if it exists), where x^b = g(x^r). We have g'(x^r) = 0 for all x^r ∈ (0, | g(0) |). In addition, the asymptotic slope m = lim_x^r →∞ g'(x^r) ∈ [0,1) is the unique solution of the equation
m = (1 - η^+ η^-) φ(m).
Proposition <ref> implies that for any fixed x^r, the battery operator must buy the amount x^b = g(x^r) of power in order to meet the expected state-of-charge target. One can interpret g(0) as the amount of power needed to meet the target in the absence of frequency regulation. Accordingly, g(x^r) - g(0) reflects the amount of power needed to compensate the charging and discharging losses due to frequency regulation. These losses vanish for x^r = 0 and increase in x^r at a rate that is smaller than or equal to m.
The asymptotic slope m is of particular interest because it is an upper bound on the percentage of regulation power that the battery operator needs to purchase in order to cover the losses from providing frequency regulation.
The asymptotic slope m is convex and nonincreasing in the roundtrip efficiency η^+η^- and nondecreasing in the mean absolute deviation Δ of the frequency deviations.
Proposition <ref> further reveals that g'_+(0) = 0 whenever y_0 ≠ y^⋆.
Otherwise, we have g'_+(0) = m.
If y_0 = y^⋆,
then the function g is linear with slope m.
If y_0 = y^⋆, the losses increase exactly at rate m. Maybe surprisingly, the losses
are thus smaller when the initial state-of-charge differs from the target.
[Computability]
For any fixed x^r, the implicit function g can be evaluated as follows. If x^r = 0, then ẏ(g(0),0) = ẏ^⋆ readily implies that
g(0) = 1/η^+ [ẏ^⋆]^+ - η^- [ẏ^⋆]^-. If x^r > 0, we first compute m as the unique solution to equation (<ref>) by bisection on [0,1]. Next, we compute g(x^r) as the unique root of the function ẏ(g(x^r), x^r) - ẏ^⋆ on the interval [g(0), g(0) + mx^r], again by bisection. Once x^b = g(x^r) is known, we obtain g'(x^r) from equation (<ref>).
Using Propositions <ref> and <ref>, we can now reformulate problem (<ref>) as the one-dimensional deterministic optimization problem
min_{x^r ∈ℝ_+}  T( c^b g(x^r) - c^r x^r )
s.t.  x^r + g(x^r) ≤ y̅^+
      x^r - g(x^r) ≤ y̅^-
      x^r + max{ (T/γ) g(x^r), g(x^r) } ≤ (y̅ - y_0)/(η^+ γ)
      x^r - min{ (T/γ) g(x^r), g(x^r) } ≤ η^- y_0/γ
[Constraint and dimensionality reduction]
The problems (<ref>) and (<ref>) are equivalent.
Note that the objective function of problem (<ref>) is convex as the implicit function g is convex. The feasible set of (<ref>) can be represented concisely as X = { x^r ∈ℝ_+ : ℓ(x^r) ≤ g(x^r) ≤ u(x^r) }, where
ℓ(x^r) = max{ x^r - min{ y̅^-, η^- y_0/γ }, (γ/T) x^r - η^- y_0/T }
and
u(x^r) = min{ min{ y̅^+, (y̅ - y_0)/(η^+ γ) } - x^r, (y̅ - y_0)/(η^+ T) - (γ/T) x^r }.
Due to the lower bounds on the convex function g(x^r), the set X is nonconvex in general. However, if g - ℓ is monotonic, then there exists at most one intersection between ℓ and g, which means that the constraint ℓ(x^r) ≤ g(x^r) nevertheless defines a convex feasible set.
This is the case under the following assumption.
[Roundtrip efficiency]
We have η^+η^->1/3.
Assumption <ref> is non-restrictive. Indeed, all relevant electricity storage technologies, as identified by the <cit.>, have a roundtrip efficiency higher than 1/3.
If Assumption <ref> holds, then the set X is convex.
Lemma <ref> asserts that the feasible set X is a line segment under realistic parameter settings. As the objective function of problem (<ref>) is convex, an optimal solution x^r_∗ coincides either with a boundary point of the line segment X or with a stationary point of the objective function in the interior of the line segment. All three candidate solutions can be computed conveniently via bisection.
If Assumption <ref> fails to hold, then X consists of two disjoint line segments, and it becomes necessary to check five different candidate solutions, all of which can be computed by bisection.
A detailed discussion of this generalized setting is omitted because it has little practical relevance.
§ CANDIDATE SOLUTIONS
In the following, we will first examine the boundary points of the line segment X and then the stationary points of the objective function.
One can show that u - g is strictly decreasing and that g - ℓ is strictly decreasing under Assumption <ref>. The line segment X is thus non-empty if and only if u(0) - g(0) ≥ 0 and g(0) - ℓ(0) ≥ 0. In this case, there exists a unique x^r_u such that u(x^r_u) = g(x^r_u) and a unique x^r_ℓ such that g(x^r_ℓ) = ℓ(x^r_ℓ). The line segment X is thus given by [0, x̅^r], where x̅^r = min{ x^r_ℓ, x^r_u}.
At least one of the points x^r_ℓ and x^r_u will be in [0, x̅̅̅^r], where x̅̅̅^r is the unique intersection between the strictly increasing lower bound ℓ and the strictly decreasing upper bound u, and can be computed by bisection.
The point x̅̅̅^r itself admits the closed form expression
x̅̅̅^r = min{ (y̅^+ + y̅^-)/2, (y̅^+ + η^- y_0/γ)/2, (T y̅^+ + η^- y_0)/(γ + T), (y̅^- + (y̅ - y_0)/(η^+ γ))/2, (T y̅^- + (y̅ - y_0)/η^+)/(γ + T),
(y̅ + (η^+ η^- T/γ - 1) y_0)/(η^+(γ + T)), ((T/γ) y̅ - (T/γ - η^+ η^-) y_0)/(η^+(γ + T)) }.
The stationary points of the objective function of problem (<ref>) are such that the expected marginal cost Tc^b g'(x^r) of providing frequency regulation equals the expected marginal revenue Tc^r from providing frequency regulation. The set of all stationary points in X is thus X_⋆ = { x^r ∈ [0,x̅^r]: c^r/c^b∈∂ g(x^r) }, where ∂ g(x^r) denotes the subdifferential of g at x^r. Note that c^r/c^b > 0 because c^b > 0 and c^r > 0. If x̅^r = 0, then 0 is the only feasible solution to problem (<ref>).
If g'_-(x̅^r) < c^r/c^b,
then X_⋆ is empty, and x̅^r is the optimal solution to problem (<ref>) because the marginal revenue of providing frequency regulation is strictly higher than the marginal cost for all feasible x^r. Similarly, if g'_+(0) > c^r/c^b, then X_⋆ is again empty, and 0 is the optimal solution because the marginal cost of providing frequency regulation is strictly higher than the marginal revenue for all x^r ∈ (0, ∞]. Otherwise, X_⋆ is non-empty and may contain several stationary points, all of which are optimal solutions to problem (<ref>). In this case, we assume that the battery operator selects the smallest stationary point to avoid unnecessary battery usage. Theorem <ref> formalizes these results.
For x̅^r ≥ 0, the smallest optimal solution to problem (<ref>) is
x^r_∗ =
0 if x̅^r = 0, or if x̅^r > 0 and g'_+(0) ≥ c^r/c^b,
x̅^r if x̅^r > 0 and g'_-(x̅^r) < c^r/c^b,
min X_⋆ otherwise.
If ẏ^⋆ = 0, then g(x^r) = m x^r. Therefore, x^r_∗ = x̅^r if m < c^r/c^b and x^r_∗ = 0 otherwise.
If X_⋆ is non-empty, minX_⋆ can be found by bisection on X as g' is nondecreasing.
To compute the optimal solution x^r_∗, we first compute x̅^r. If x̅^r > 0, then we evaluate g'_+(0) and g'_-(x̅^r) = min ∂g(x̅^r). By Proposition <ref> and Lemma <ref>, we find that g'_+(0) = 0 if ẏ^⋆ ≠ 0 and g'_+(0) = m otherwise.
Finally, if X_⋆ is non-empty, we compute its minimum.
Since x̅^r, g'_-(x̅^r), and minX_⋆ can all be computed by bisection, the optimal solution x^r_∗ can also be computed highly efficiently by bisection.
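For the special case ẏ^⋆ = 0 mentioned in the remark after the theorem, where g(x^r) = m x^r, the whole solution procedure reduces to a handful of bisections. The following sketch uses an illustrative uniform deviation distribution and placeholder battery data; it is not the implementation released with the paper.

eta_p, eta_m = 0.92, 0.92                # charging / discharging efficiency
y_bar, y0 = 100.0, 50.0                  # storage capacity and initial SoC (kWh)
yp_bar, ym_bar = 50.0, 50.0              # charging / discharging capacity (kW)
T, gamma = 24.0, 4.8                     # planning horizon and budget (h)
cr_over_cb = 0.251                       # price ratio c^r / c^b

def phi(z):                              # uniform deviations on [-1, 1]
    zc = min(max(z, -1.0), 1.0)
    return (zc + 1.0) ** 2 / 4.0 + max(z - 1.0, 0.0)

def bisect(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

m = bisect(lambda v: v - (1.0 - eta_p * eta_m) * phi(v), 0.0, 1.0)
g = lambda x: m * x                      # implicit function when ydot_star = 0

ell = lambda x: max(x - min(ym_bar, eta_m * y0 / gamma),
                    gamma / T * x - eta_m * y0 / T)
u = lambda x: min(min(yp_bar, (y_bar - y0) / (eta_p * gamma)) - x,
                  (y_bar - y0) / (eta_p * T) - gamma / T * x)

def cross(f, lo, hi):                    # root of a nondecreasing f, if any
    return bisect(f, lo, hi) if f(hi) > 0.0 else float("inf")

x_cap = bisect(lambda x: ell(x) - u(x), 0.0, 1e6)   # where ell meets u
x_u = cross(lambda x: g(x) - u(x), 0.0, x_cap)      # g meets u
x_l = cross(lambda x: ell(x) - g(x), 0.0, x_cap)    # g meets ell
x_bar = min(x_l, x_u)                               # right end of the feasible set

x_opt = x_bar if m < cr_over_cb else 0.0
print(round(m, 4), round(x_bar, 2), round(x_opt, 2))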
The expected marginal cost of providing frequency regulation depends on the desired charging rate ẏ^⋆, which is only known to the battery operator but unknown to the grid operator. Nevertheless, the grid operator knows that the expected marginal cost amounts to at most c^b m and can therefore infer that it is profitable for the battery operator to offer all available regulation power if c^r/c^b > m. Thanks to Lemma <ref>, we know that m is convex and nonincreasing in the roundtrip efficiency η^+η^- and nondecreasing in the mean absolute deviation Δ of the frequency deviations, but we do not know the explicit dependence of m on η^+η^- and Δ. In the next section, we will derive explicit lower and upper bounds on m that are tight for certain degenerate frequency deviation distributions. For these particular distributions, we will then derive analytical solutions to problem (<ref>).
§ ANALYTICAL SOLUTION
We now construct two discrete distributions ℙ̲_ξ and ℙ̅_ξ with the same mean absolute deviation as ℙ_ξ, which is given by Δ = 2φ(0).
Specifically, we define ℙ̲_ξ as a two-point distribution with mass 1/2 at -Δ and Δ, and ℙ̅_ξ as a three-point distribution with mass Δ/2 at -1 and 1, and mass 1-Δ at 0.
The super-cumulative distribution functions φ̲ and φ̅ of ℙ̲_ξ and ℙ̅_ξ
with φ̲(-1) = 0 and φ̅(-1) = 0 are φ̲(ξ) = max{ 0, (ξ + Δ)/2, ξ } and
φ̅(ξ) = max{ 0, (Δ/2)(ξ + 1), (1-Δ/2)ξ + Δ/2, ξ }, respectively.
We have φ̲(ξ) ≤ φ(ξ) ≤ φ̅(ξ) for all ξ∈ℝ.
We now define the asymptotic sensitivities m̲ and m̅ as the solutions to the nonlinear algebraic equations m̲ = (1 - η^+η^-) φ̲(m̲) and m̅ = (1 - η^+η^-) φ̅(m̅), respectively, which exist and are unique by Proposition <ref>. These equations admit the closed-form solutions
m̲ = ( (1-η^+η^-)/(1+η^+η^-) ) Δ and m̅ = 1 - 1/( 1 + (1/(η^+η^-) - 1) Δ/2 ).
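A quick numerical check of these closed-form bounds, using the logistic deviation distribution with Δ = 0.0816 that is introduced later in the case study and an illustrative roundtrip efficiency, is sketched below.

import math

delta_mad = 0.0816                    # mean absolute deviation of the case study
theta = 2.0 * math.log(2.0) / delta_mad
eta = 0.85                            # illustrative roundtrip efficiency

def phi(z):                           # super-cumulative logistic distribution
    return (math.log1p(math.exp(theta * z)) - math.log1p(math.exp(-theta))) / theta

lo, hi = 0.0, 1.0
for _ in range(200):                  # bisection for m = (1 - eta) * phi(m)
    m = 0.5 * (lo + hi)
    lo, hi = (m, hi) if m - (1.0 - eta) * phi(m) < 0.0 else (lo, m)

m_lower = (1.0 - eta) / (1.0 + eta) * delta_mad
m_upper = 1.0 - 1.0 / (1.0 + (1.0 / eta - 1.0) * delta_mad / 2.0)
print(m_lower <= m <= m_upper, m_lower, m, m_upper)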
The discrete distributions ℙ̲_ξ and ℙ̅_ξ provide explicit bounds on the implicit function g(x^r).
These bounds, which we denote by g̲(x^r) and g̅(x^r), are obtained by solving the differential equation (<ref>) for ℙ̲_ξ and ℙ̅_ξ, respectively. Since φ̲ and φ̅ are piecewise linear, the differential equations can be solved separately for each linear piece. Combining the results for the different pieces yields
g̲(x^r) = max{ g(0), m̲ x^r + g(0) - ( (1 - η^+η^-)/(1+η^+η^-) ) | g(0) | }
and g̅(x^r) = max{ g(0), ( (1-η^+η^-)φ(0) x^r - [g(0)]^- )/( η^+η^- + (1-η^+η^-)(1-φ(0)) ), m̅ x^r + ( η^+η^- [g(0)]^+ - [g(0)]^- )/( η^+η^- + (1-η^+η^-)φ(0) ) }.
We have g̲(x^r) ≤ g(x^r) ≤ g̅(x^r) for all x^r ∈ℝ_+.
In order to calculate the optimal solution x^r_∗ under ℙ̲_ξ and ℙ̅_ξ, we need to compute the smallest stationary points, if they exist, as well as the boundary points of the
feasible sets X̲ and X̅.
If they exist, the smallest stationary points occur either at kinks of the piecewise linear objective functions c^b g̲(x^r) - c^r x^r and c^b g̅(x^r) - c^r x^r or at x^r = 0 because g̲ and g̅ are piecewise linear.
The left boundary points of both feasible sets are 0, as under ℙ_ξ. The right boundary points are min{ x̲^r_ℓ, x̅^r_u } and min{ x̅^r_ℓ, x̲^r_u }, respectively, where x̲^r_ℓ, x̅^r_u, x̅^r_ℓ, and x̲^r_u are the unique points with
g̲(x̲^r_ℓ) = ℓ(x̲^r_ℓ), g̲(x̅^r_u) = u(x̅^r_u), g̅(x̅^r_ℓ) = ℓ(x̅^r_ℓ), and g̅(x̲^r_u) = u(x̲^r_u).
The points x̲^r_ℓ, x̅^r_u, x̅^r_ℓ, and x̲^r_u can be computed in closed form by evaluating the intersections of the affine functions ℓ and u with the affine functions that generate g̲ and g̅. The right boundary points min{ x̲^r_ℓ, x̅^r_u } and min{ x̅^r_ℓ, x̲^r_u } are then given by the minimum of six and eight rational functions of the problem parameters y_0, ẏ^⋆, y̅, y̅^+, y̅^-, η^+, η^-, Δ, γ, and T, respectively.
We omit the closed-form expressions because they are too cumbersome to be insightful in general.
In order to derive insightful solutions, we assume that y_0 = y^⋆ in the subsequent analysis.
In this special case, the losses due to frequency regulation are higher than for any other value of y_0 as explained in the discussion after Proposition <ref>. If it is profitable to provide frequency regulation in this special case, it will thus also be profitable to provide frequency regulation in any other case. As ẏ^⋆ = (y^⋆ - y_0)/T = 0, Lemma <ref> implies that g(x^r) = m x^r, g̲(x^r) = m̲ x^r, and g̅(x^r) = m̅ x^r. Under any frequency deviation distribution ℙ_ξ, the optimal solution to problem (<ref>) then satisfies x^r_∗ = x̅^r if m < c^r/c^b and x^r_∗ = 0 otherwise. Since ℓ and u are piecewise linear functions with two pieces each, the right boundary point x̅^r of the feasible set can be expressed as the minimum of only four rational functions of the problem parameters, and will admit an intuitive interpretation.
If y_0 = y^⋆, then
x̅^r = min{ y̅^-/(1-m), y̅^+/(1+m), η^- y_0/(γ(1-m)), (y̅ - y_0)/(η^+(γ + m T)) }.
The first two terms in formula (<ref>) for x̅^r depend on the charging capacity y̅^+ and the discharging capacity y̅^-, while the last two terms depend on the battery capacity y̅ and the initial state-of-charge y_0. We say that the battery is power-constrained if x̅^r is equal to one of the first two terms. Otherwise, we say that the battery is energy-constrained. We now state the analytical solution.
[Analytical solution]
If y_0 = y^⋆, then an optimal solution to problem (<ref>) is
x^r_∗ =
0 if m ≥ c^r/c^b,
x̅^r otherwise,
under any frequency deviation distribution ℙ_ξ. If ℙ_ξ = ℙ̲_ξ, then m = m̲. If ℙ_ξ = ℙ̅_ξ, then m = m̅.
As a direct consequence of Theorem <ref>,
if m̅ < c^r/c^b, it is optimal to set x^r = x̅^r for any frequency deviation distribution with mean absolute deviation Δ, regardless of the shape of the distribution, because m̅ ≥ m. In the following, we describe the maximum amount x̅^r of regulation power that can be offered by power- and energy-constrained batteries for the case y_0 = y^⋆.
If the battery is power-constrained, then x̅^r depends on the charging and discharging efficiencies only through the marginal increase m in the expected power loss which, in turn, depends on these efficiencies only through
the roundtrip efficiency η^+η^-. Due to charging and discharging losses, the battery operator expects to lose energy while delivering frequency regulation and compensates the expected loss by purchasing the power mx^r from an electricity market, which decreases the effective charging capacity and increases the effective discharging capacity of the battery. Accounting for the effective charging and discharging capacities, the battery operator may dimension the discharging capacity y̅^- as a fraction (1-m)/(1+m) of the charging capacity y̅^+ without restricting x̅^r.
If the battery is energy-constrained, then x̅^r depends not only on the roundtrip efficiency η^+η^-, through m, but also on the individual charging and discharging efficiencies. Charging losses increase the amount of energy that the battery can consume from the grid and therefore increase the effective storage capacity. Conversely, discharging losses decrease the amount of energy that the battery can deliver to the grid and therefore decrease the effective storage capacity. For given charging and discharging efficiencies, the initial state-of-charge y_0 determines how much energy the battery can consume from and deliver to the grid. As the battery operator must be able to both consume and deliver regulation power, x̅^r is maximized if the battery can absorb as much energy from the grid as it can deliver to the grid. This occurs at an initial state-of-charge of y^⋆_0 and results in the maximum amount x̅^r_⋆ of regulation power that can be offered by energy-constrained batteries, where
x̅^r_⋆ = min{ y̅^-/(1-m), y̅^+/(1+m), ( η^-/( (γ/T)(1 + η^+η^- - m) + η^+η^- m ) ) (y̅/T) } and
y^⋆_0 = (1 - m) y̅/( 1 + η^+η^- + (η^+η^- T/γ - 1) m ).
Assuming that y̅^- = ( (1-m)/(1+m) ) y̅^+, the battery is thus energy-constrained if y̅^+/(1+m) > x̅^r_⋆, which occurs if the battery's charge rate C = y̅^+/y̅ (C-rate) is no smaller than
C̲ = (1/T) · (1+m) η^-/( (γ/T)(1 + η^+η^- - m) + η^+η^- m ).
The C-rate expresses the percentage of the battery's storage capacity that can be consumed from the grid within one hour.
The initial state-of-charge y^⋆_0, which maximizes the amount of regulation power that energy-constrained batteries can provide, depends on the charging and discharging efficiencies only through the roundtrip efficiency η^+η^-. It increases from y̅/2 to y̅ as the roundtrip efficiency decreases from 1 to 0. The maximum amount x̅^r_⋆ of regulation power that can be provided by energy-constrained batteries is equal to y̅/(2γ) in the absence of charging and discharging losses. The storage capacity y̅ is divided by 2γ because the battery operator must be able to both consume and deliver all of the regulation power she promised for a total time of at least γ.
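The quantities x̅^r_⋆, y^⋆_0, and the minimum C-rate can be evaluated directly. The sketch below does so for an illustrative lithium-ion battery with η^+ = η^- = 0.92, the logistic deviation distribution with Δ = 0.0816 used in the case study, and an activation ratio γ/T = 0.2; the resulting values are per unit of storage capacity.

import math

eta_p = eta_m = 0.92
eta = eta_p * eta_m
T, gamma = 24.0, 0.2 * 24.0
delta_mad = 0.0816
theta = 2.0 * math.log(2.0) / delta_mad
phi = lambda z: (math.log1p(math.exp(theta * z)) - math.log1p(math.exp(-theta))) / theta

lo, hi = 0.0, 1.0
for _ in range(200):                          # m = (1 - eta) * phi(m) by bisection
    m = 0.5 * (lo + hi)
    lo, hi = (m, hi) if m - (1.0 - eta) * phi(m) < 0.0 else (lo, m)

y_bar = 1.0                                   # per unit of storage capacity (kWh)
denom = gamma / T * (1.0 + eta - m) + eta * m
x_star = eta_m / denom * y_bar / T            # max regulation power (kW per kWh)
y0_star = (1.0 - m) * y_bar / (1.0 + eta + (eta * T / gamma - 1.0) * m)
c_rate_min = (1.0 + m) * eta_m / denom / T    # minimum C-rate (1/h)
print(round(m, 4), round(x_star, 3), round(y0_star, 3), round(c_rate_min, 3))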
§ APPLICATIONS
In the following, we compare the profits that storage operators can earn by providing frequency regulation against the expected investment costs of lithium-ion batteries. After describing the parameters of our case study, we
analyze the marginal cost and profit of providing frequency regulation as well as the maximum amount of regulation power that storage operators can provide. Last, we discuss the profits that storage operators can earn over the planning horizon and, for the specific case of lithium-ion batteries, under what conditions on the total activation period γ and the length of the planning horizon T it can be profitable to invest in electricity storage for frequency regulation. We will point to the supplementary material (SM) for additional non-essential discussions.
§.§ Model Parametrization
We focus on storage operators who provide frequency regulation to the French grid operator and compute their profits based on historical frequency deviation data, on availability and delivery prices, and on wholesale and retail market prices. Frequency measurements, availability prices, and delivery prices are published by the French grid operator Réseau de Transport d'Electricité (RTE).[<https://clients.rte-france.com>] Wholesale market prices depend on how long before delivery electricity is traded. Since frequency regulation is traded up to one day before delivery, we use the prices of the day-ahead market, which are equal to the delivery prices published by RTE <cit.>.
Retail market prices vary from one electric utility company to another. We use the base tariff of Electricité de France, the largest French electric utility company. The French government regulates this particular tariff and publishes the corresponding prices.[Journal Officiel de la République Française: <https://legifrance.gouv.fr>]
We will compare the operating costs of storage technologies with different roundtrip efficiencies, namely hydrogen, redox flow batteries, vehicle-to-grid, pumped hydro, and stationary lithium-ion batteries. In assigning roundtrip efficiencies to storage technologies, we follow <cit.> for vehicle-to-grid and the World Energy Council <cit.> for all other storage technologies. Table <ref> lists the roundtrip efficiencies of the storage technologies. In order to judge whether it is profitable to invest in lithium-ion batteries for frequency regulation, we follow <cit.> and assume that investment costs in the year 2023 range from US$85 to US$165 per kWh of storage capacity with a lifetime of 10 years and from US$710 to US$860 per kW of charging and discharging capacity with a lifetime of 30 years. We assume an exchange rate of €1 = US$1.15 and annualize the investment costs with a yearly discount rate of 2%, which equals the long-term inflation target of the <cit.>.
We use 2019 electricity prices, i.e., before the drop in prices during the Covid pandemic and the rise in prices since the war in Ukraine and the maintenance problems in the French nuclear power plant fleet. Based on these prices, we set the ratio c^r/c^b of the expected average price of regulation power to the expected market price of electricity to 0.251 for wholesale market prices and to 0.059 for retail market prices. In terms of frequency regulation, we consider the common European market for frequency containment reserves, which has a daily planning horizon, and we thus set T = 24 hours. We approximate the cumulative distribution function corresponding to ℙ_ξ by the symmetric logistic function F(ξ) = 1/(1 + exp(-θξ)) with θ = 2 ln(2)/Δ and mean absolute deviation Δ = 0.0816. Finally, we set ẏ^⋆ = 0 in all experiments. SM <ref> provides more details about the model parametrization. We provide all code and data at
<www.github.com/lauinger/cost-of-frequency-regulation-through-electricity-storage>.
§.§ Marginal Cost and Maximum Regulation Bid
The marginal cost mc^bT of providing frequency regulation is the marginal increase in the expected power loss m multiplied by the market price of electricity c^b and the length of the planning horizon T.
It is profitable to use a storage device at its full potential if mc^bT is lower than the marginal revenue Tc^r, i.e., if and only if the ratio c^r/c^b of the expected average price of regulation power to the expected average market price of electricity exceeds the marginal increase m in the expected power loss.
Figure <ref> displays m as a function of the roundtrip efficiency for the estimated logistic distribution of frequency deviations, together with its lower bound m̲ and its upper bound m̅, both parametrized to have the same mean absolute deviation as the logistic distribution. The bounds are tight when the roundtrip efficiency equals one and loosen as the roundtrip efficiency decreases. The upper bound loosens faster than the lower bound. For roundtrip efficiencies higher than 0.60, the lower bound m̲ = ( (1-η^+η^-)/(1+η^+η^-) ) Δ underestimates m by less than 4.59· 10^-4. At a roundtrip efficiency of 0.35, typical for inefficient hydrogen storage, m equals 4.30%. For inefficient redox flow batteries, with a roundtrip efficiency of 0.60, m decreases to 2.09%. For inefficient lithium-ion batteries, with a roundtrip efficiency of 0.85, m decreases further to 0.66%. Unsurprisingly, m vanishes for perfectly efficient storage devices. For storage operators buying electricity at retail prices, the ratio c^r/c^b was greater than 0.026 on all days in 2019. It would have therefore been profitable for them to use any of the storage technologies we consider at their full potential for frequency regulation, except for hydrogen storage. For storage operators buying electricity at wholesale prices, the ratio c^r/c^b was greater than 0.07 on all days in 2019, which means that it would have been profitable for them to use a hydrogen storage device at its full potential, too.
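The marginal loss figures quoted above can be reproduced approximately by solving m = (1 - η^+η^-) φ(m) by bisection for the logistic deviation distribution with Δ = 0.0816; the technology labels below simply tag the roundtrip efficiencies used in the text.

import math

delta_mad = 0.0816
theta = 2.0 * math.log(2.0) / delta_mad
phi = lambda z: (math.log1p(math.exp(theta * z)) - math.log1p(math.exp(-theta))) / theta

def marginal_loss(roundtrip):
    lo, hi = 0.0, 1.0
    for _ in range(200):                      # bisection for m = (1 - eta) * phi(m)
        m = 0.5 * (lo + hi)
        lo, hi = (m, hi) if m - (1.0 - roundtrip) * phi(m) < 0.0 else (lo, m)
    return 0.5 * (lo + hi)

for name, rt in [("hydrogen", 0.35), ("redox flow", 0.60), ("lithium-ion", 0.85)]:
    print(f"{name:12s} roundtrip {rt:.2f}  m = {100 * marginal_loss(rt):.2f}%")
# roughly 4.30%, 2.09%, and 0.66%, in line with the figures quoted above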
Although charging and discharging losses may not cause storage operators to withhold regulation power from the market, they still reduce the profit c^r - mc^b per unit of regulation power. Figure <ref> shows the average profit per unit of regulation power as a function of roundtrip efficiency for storage operators buying electricity at wholesale and retail prices in the year 2019. Losses reduce the profit by 19% from 0.90cts/kW·h to 0.73cts/kW·h for the most inefficient storage devices if electricity is bought at wholesale prices. If electricity is bought at retail prices, losses reduce the profit by 72% to 0.24cts/kW·h. The reduction is four times higher at retail than at wholesale prices. A hydrogen tank with η^-η^+ = 0.4 buying electricity at wholesale prices and an electric vehicle with η^-η^+ = 0.8 buying electricity at retail prices achieve the same profit per unit of regulation power.
Losses reduce not only the profit per unit of regulation power, but may also impact the amount of regulation power x̅^r a storage device can provide. If the storage device is energy-constrained, then charging losses increase the amount of energy that the storage device can consume from the grid, while discharging losses decrease the amount of energy that the storage device can deliver to the grid. In principle, charging losses may outweigh discharging losses and increase the normalized regulation bid compared to storage devices with no losses. In practice, however, discharging losses usually outweigh charging losses. As examples, we consider lithium-ion batteries, vehicle-to-grid, and hydrogen storage, all of them operating at an activation ratio of 0.2. Hydrogen storage is unlikely to be energy-constrained because hydrogen can be stored at low cost in steel tanks, or at even lower cost in salt caverns <cit.>. Nevertheless, we include hydrogen in our comparison as an example of storage devices with low roundtrip efficiencies. For lithium-ion batteries with charging and discharging efficiencies of 0.92 each, losses reduce the normalized regulation bid from 1 to 0.98. For vehicle-to-grid with a charging efficiency of 0.88 and a discharging efficiency of 0.79, loosely based on <cit.>, losses reduce the normalized regulation bid to 0.91. For hydrogen storage with a charging efficiency of 0.80 and a discharging efficiency of 0.58, based on <cit.>, losses reduce the normalized regulation bid to 0.77. For more details on the impact of charging and discharging losses on x̅^r, see SM <ref>.
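The normalized regulation bids quoted in this paragraph follow from dividing the energy-constrained bid with losses by the lossless bid y̅/(2γ). The sketch below reproduces them approximately for an activation ratio γ/T = 0.2 and the logistic deviation distribution of the case study.

import math

delta_mad, ratio = 0.0816, 0.2                 # mean absolute deviation, gamma / T
theta = 2.0 * math.log(2.0) / delta_mad
phi = lambda z: (math.log1p(math.exp(theta * z)) - math.log1p(math.exp(-theta))) / theta

def slope(eta):                                # m = (1 - eta) * phi(m) by bisection
    lo, hi = 0.0, 1.0
    for _ in range(200):
        m = 0.5 * (lo + hi)
        lo, hi = (m, hi) if m - (1.0 - eta) * phi(m) < 0.0 else (lo, m)
    return 0.5 * (lo + hi)

def normalized_bid(eta_p, eta_m):
    eta = eta_p * eta_m
    m = slope(eta)
    bid = eta_m / (ratio * (1.0 + eta - m) + eta * m)   # x_star * T / y_bar
    return bid * 2.0 * ratio                            # divide by lossless y_bar/(2 gamma)

for name, ep, em in [("lithium-ion", 0.92, 0.92),
                     ("vehicle-to-grid", 0.88, 0.79),
                     ("hydrogen", 0.80, 0.58)]:
    print(f"{name:16s} {normalized_bid(ep, em):.2f}")
# approximately 0.98, 0.91, and 0.77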
§.§ Profits
We will now analyze the profit that an energy-constrained storage device may earn per unit of storage capacity. First, we describe the operating profit made over the planning horizon of length T. For lithium-ion batteries, we then calculate the effective yearly profit as the difference between the operating profits made over one year and the annualized investments costs of the battery.
The operating profit is the product of the profit per unit of regulation power and the amount of regulation power the storage device can deliver. Formally, the operating profit is thus equal to
(c^r - m c^b) · η^- y̅/( (γ/T)(1 + η^+η^- - m) + η^+η^- m ).
It may be somewhat surprising that the operating profit depends on the length of the planning horizon T only through the activation ratio γ / T. Intuitively, one may expect the operating profit to increase linearly with the length of the planning horizon because a longer planning horizon should allow the storage operator to sell the same amount of regulation power for a longer period of time. This reasoning is correct if the storage device is power-constrained. For energy-constrained storage devices, however, the storage operator can only deliver a fixed amount of regulation energy. The length of the planning horizon
does therefore not influence the operating profit. Figure <ref> summarizes the impact of losses on the profits per unit of regulation power, on the maximum normalized regulation power, and on operating profits at wholesale and retail electricity prices for the lithium-ion battery, vehicle-to-grid, and hydrogen storage examples that we considered earlier. SM <ref> explains in detail how operating profits depends on the charging and discharging efficiencies.
The effective yearly profit determines whether it is worthwhile for a storage operator to invest in storage devices for frequency regulation. Based on the cost and lifetime data of lithium-ion batteries in Section <ref>, we estimate the annualized costs of lithium-ion batteries in the year 2023 to range from €8.2/kWh to €16.0/kWh for energy storage capacity and from €27.6/kW to €33.4/kW for charging and discharging capacity.
Figure <ref> shows the effective yearly profit for lithium-ion batteries with charging and discharging efficiencies of 0.92, buying electricity at wholesale prices, as a function of the length of the planning horizon, for activation ratios of 0.1 and 0.2, for low annualized investment costs of €8.2/kWh and €27.6/kW, and for high annualized investment costs of €16.0/kWh and €33.4/kW. At the current 24-hour planning horizon, lithium-ion batteries are profitable only at an activation ratio of 0.1 and low investment costs. Given that we only considered the cost of the battery itself but no additional costs related to installation, maintenance, administration, or land lease, investing in lithium-ion batteries for frequency regulation does not seem to be profitable in the near future. In the medium term, lithium-ion batteries may become sufficiently low-cost <cit.> to be used for frequency regulation. Besides falling battery prices, grid operators might opt for an activation ratio of 0.1 rather than 0.2 to make the use of energy storage for frequency regulation more profitable. This would roughly double the operating profits from frequency regulation, but it would also shrink the uncertainty set and therefore make grid operators more vulnerable to extreme frequency deviations that could cause blackouts.
Alternatively, grid operators could reduce the length of the planning horizon T. If the planning horizon were to be reduced from 24 hours to 4 hours, for example, the operating profits accrued over a one-year period would increase by a factor of 6. Similarly, the minimum C-rate required for the battery to be energy-constrained and thus the costs for charging and discharging capacity would also increase by a factor of 6. The increase in operating profits is well worth the increased charger costs. Under the reduced planning horizon, lithium-ion batteries could achieve an effective yearly profit of €10 per kWh of storage capacity, even at high investment costs and high activation ratios. A shorter planning horizon for frequency regulation does not necessarily make grid operators more vulnerable to extreme frequency deviations and is already common practice in intraday wholesale electricity markets.[<https://www.entsoe.eu/network_codes/cacm/implementation/sidc/>]
Adopting intraday markets for frequency regulation as well would make energy-constrained storage devices more competitive with power-constrained flexibility providers, such as thermal power plants and pumped hydro storage.
The increased competition may decrease the total cost of frequency regulation, which is ultimately borne by the public.
§ CONCLUSIONS
The <cit.> estimates that they will need to store electricity in batteries with a total power of up to 240 GW by the year 2050. Lithium-ion batteries are considered a promising source of frequency regulation, thanks to their fast dynamics. The investment costs of lithium-ion batteries have declined sharply in recent years, but we find that they are not yet low enough for lithium-ion batteries to be profitable in the frequency regulation market. Since Europe plans to rely increasingly on batteries in the future, it is important that their use become profitable.
We identify two policy options that make electricity storage in general and battery storage in particular more profitable in the frequency regulation market. First, regulators can decrease the marginal costs of frequency regulation by making it easier for small and medium-sized storage devices to access wholesale electricity markets. This is one of the aims of Order 845 by the US <cit.>.
Second, regulators can decrease the length of the planning horizon, which is currently one day in the common European frequency regulation market. We show that the amount of regulation power that storage devices can provide may be constrained
by their storage capacity and their initial state-of-charge. In this case, the profits from frequency regulation over the lifetime of the storage devices are inversely proportional to the length of the planning horizon. The planning horizon could be shortened by adopting intraday markets for frequency regulation, which already exist for the wholesale of electricity.
Acknowledgements D.L. thanks Emilia Suomalainen, Jaâfar Berrada, François Colet, Willett Kempton, and Yannick Perez for helpful discussions, and the Institut Vedecom for funding.
§ PROOFS
This appendix contains the proofs of all essential theorems, propositions, and lemmas in the main text. All other proofs are relegated to the supplementary material.
The proof of Proposition <ref> relies on the following lemma, which involves the set 𝒟_↓ of all nonincreasing left-continuous functions in 𝒟.
For any x^r ≥ 0, we have
max_{δ∈𝒟, t ∈𝒯} y(x^b, x^r, δ, y_0, t) = max_{δ∈𝒟^+_↓, t ∈𝒯} y(x^b, x^r, δ, y_0, t).
We first show that the upper bound on the charging power and the upper bound on the state-of-charge are valid for all frequency deviation trajectories δ∈ and all time instants t ∈ if and only if they are valid for the particular frequency deviation trajectory δ^(+), defined through δ^(+)(t) = 1 if t ≤γ and δ^(+) = 0 otherwise, and both time instants t ∈{γ,T}.
The upper bound on the charging power is valid for all δ∈𝒟 and all t ∈𝒯 if and only if it is valid for the maximum charging power that can be achieved by any δ∈𝒟 and any t ∈𝒯. We have
max_{δ∈𝒟, t ∈𝒯} y^+(x^b, x^r, δ(t)) = max_{δ∈𝒟^+, t ∈𝒯} y^+(x^b, x^r, δ(t)) = max_{δ∈𝒟^+, t ∈𝒯} x^b + δ(t) x^r = x^b + x^r,
where the first equality holds because y^+ is nondecreasing in δ(t) and because 𝒟 is symmetric. In fact, for any δ∈𝒟, we have |δ|∈𝒟, and the maximum charging power for |δ| is at least as high as the one for δ. The second equality holds because y^+ is linear in δ(t) whenever δ(t) ≥ 0. The last equality holds because δ(t) ≤ 1 for all δ∈𝒟^+ and t ∈𝒯, and because the upper bound is attained at δ = δ^(+) and t = γ, for example. Thus, assertion (i) follows.
For the upper bound on the state-of-charge, we first use Lemma <ref> to obtain
max_{δ∈𝒟, t ∈𝒯} y(x^b, x^r, δ, y_0, t) = max_{δ∈𝒟^+_↓, t ∈𝒯} y(x^b, x^r, δ, y_0, t).
If x^b + x^r < 0, then the battery is discharging for all t ∈. The upper bound on the state-of-charge is thus valid if y_0 ≤y̅, which we stated as a condition in Proposition <ref>.
Otherwise, if x^b + x^r ≥ 0, one can show that
max_{δ∈𝒟^+_↓, t ∈𝒯} y(x^b, x^r, δ, y_0, t) = max_{t^c, δ^c} { y_0 + t^c η^+ (x^b + δ^c x^r) : x^b + δ^c x^r ≥ 0, t^c δ^c ≤γ, 0 ≤δ^c ≤ 1, 0 ≤ t^c ≤ T }.
To this end, we first show that any feasible solution to the maximization problem on the left-hand side of (<ref>) gives rise to a feasible solution to the maximization problem on the right-hand side with the same or a larger objective value. For any δ∈^+_↓ and t ∈, we construct the last time at which the battery is still charged as t^c = max_t' ∈ [0,t]{t': x^b + δ(t') x^r ≥ 0}, which exists because δ is left-continuous nonincreasing and hence upper semi-continuous. We also construct the average frequency deviation signal during the charging process as δ^c = 1/t^c∫_0^t^cδ(t') dt', which satisfies x^b + δ^c x^r ≥ 0 because x^b + δ(t') x^r ≥ 0 for all t ≤ t^c because δ is nonincreasing. As δ∈^+_↓, we have t^c δ^c = ∫_0^t^cδ(t') dt' ≤∫_δ(t') dt' ≤γ. In addition, we have 0 ≤ t^c ≤ T as t ∈ and 0 ≤δ^c ≤ 1 as δ∈^+_↓. Since the state-of-charge y is nondecreasing in δ, which is nonincreasing in time, we have
y(x^b, x^r, δ, y_0, t) ≤ y(x^b, x^r, δ, y_0, t^c) = y_0 + ∫_0^{t^c} η^+ (x^b + δ(t') x^r) dt' = y_0 + t^c η^+ (x^b + δ^c x^r),
where the second equality holds because δ is integrated against a constant function and can thus be set to its average value δ^c over the integration horizon.
Given a feasible solution (t^c, δ^c) to the maximization problem on the right-hand side of (<ref>), we can also construct a feasible solution to the left-hand side with the same objective value by setting t = t^c and δ(t') = δ^c if t' ≤ t^c and =0 otherwise. It is clear that t ∈ since 0 ≤ t^c ≤ T. In addition, δ is nonnegative and left-continuous nonincreasing as 0 ≤δ^c ≤ 1. Finally, ∫_δ(t') dt' = t^c δ^c ≤γ ensures that δ∈^+_↓. The construction satisfies again t^c = max_t' ∈ [0,t]{t': x^b + δ(t') x^r ≥ 0} and δ^c = 1/t^c∫_0^t^cδ(t') dt'. The equality of the objective values thus follows from (<ref>).
In summary, we have shown that the two problems in (<ref>) have indeed the same maximum.
The objective function of the right-hand side problem in (<ref>) is nondecreasing in δ^c and t^c since η^+ ≥ 0, x^r≥ 0, and x^b + δ^c x^r ≥ 0. It is thus optimal to make δ^c t^c as large as possible, i.e., to set it to γ, because the box constraints on δ^c and t^c only imply a weaker upper bound of T ≥γ on t^c δ^c. By substituting δ^c = γ / t^c we arrive at the equality
max_{t^c, δ^c} { y_0 + t^c η^+ (x^b + δ^c x^r) : x^b + δ^c x^r ≥ 0, t^c δ^c = γ, 0 ≤δ^c ≤ 1, 0 ≤ t^c ≤ T } = max_{t^c} { y_0 + η^+ (t^c x^b + γ x^r) : t^c x^b + γ x^r ≥ 0, γ≤ t^c ≤ T }.
We will now analyze the one-dimensional linear program on the right-hand side of (<ref>). If x^b ≥ 0, then the inequality t^c x^b + γ x^r ≥ 0 is always valid and it is optimal to make t^c as large as possible, i.e., to set it to T. Conversely, if x^b < 0, then it is optimal to make t^c as small as possible, i.e., to set it to γ. In this case t^c satisfies again the inequality t^c x^b + γ x^r ≥ 0 because x^b + x^r ≥ 0. Combining all of the above arguments, we thus obtain
max_δ∈D, t ∈T y(x^b, x^r, δ, y_0, t)
=
y_0 + η^+ ( max{γ x^b, T x^b } + γ x^r ),
leading to assertion (iii). One easily verifies that δ^(+) attains the maximum on the left-hand side.
Using similar arguments, one can show that the upper bound on the discharging power and the lower bound on the state-of-charge hold for all frequency deviation signals δ∈D and all time instants t ∈T if and only if they hold for the particular frequency deviation signal δ^(-) = - δ^(+) and all time instants t∈{γ, T}. We omit the details for the sake of brevity.
The proof of Proposition <ref> relies on the following symmetry property of φ.
For all z ∈ℝ, we have φ(z) = φ(-z) + z.
We first prove equation (<ref>) for x^r > 0. We have
𝔼[ y(x^b, x^r, δ̃, y_0, T) ]
=
y_0 + T 𝔼[ 1/T∫_T( η^+ [ x^b + δ̃(t) x^r]^+ - 1/η^-[ x^b + δ̃(t) x^r ]^- ) dt ]
= y_0 + T x^r ∫_-1^1( η^+ [ x^b/x^r + ξ]^+ - 1/η^-[ x^b/x^r + ξ]^- ) ℙ_ξ(dξ),
where the second equality follows from the definition of ℙ_ξ. Setting z = x^b/x^r to simplify notation, we find
∫_-1^1 [ z + ξ ]^+ ℙ_ξ(dξ)
= ∫_-z^1 ( z + ξ ) ℙ_ξ(dξ)
= z F(ξ) |_-z^1 + ξ F(ξ) |_-z^1 - ∫_-z^1 F(ξ) dξ
= (z + 1) F(1) + φ(-z) - φ(1)
= z + φ(-z) .
The second equality follows from integration by parts, whereas the fourth equality holds because F(1) = φ(1) = 1. In fact, as φ(-1) = 0 by construction, Lemma <ref> implies that
φ(1) = φ(-1) +1 = 1.
Following a similar reasoning and keeping in mind that F(-1) = 0, we obtain
∫_-1^1 [ z + ξ ]^- ℙ_ξ(dξ)
= -∫_-1^-z ( z + ξ ) ℙ_ξ(dξ)
= (z - 1) F(-1) + φ(-z) - φ(-1) = φ(-z).
Substituting these expressions into (<ref>) yields equation (<ref>).
For x^r = 0, the expectation 𝔼[y(x^b,0, δ̃, y_0, T)] is trivially equal to y_0 + T (η^+ [x^b]^+ - 1/η^- [x^b]^-), which corresponds to the formula given in Proposition <ref> thanks to the definition of the perspective function. In fact,
lim_x^r → 0^+ x^r φ( - x^b/x^r)
= x^b (
lim_x^r → 0^+∂/∂ x^b x^r φ(-x^b/x^r)
)
= x^b (
lim_x^r → 0^+ - F( -x^b/x^r)
)
= [ x^b ]^-.
We now establish several useful properties of the expected terminal state-of-charge. Note first that φ is convex, Lipschitz continuous, and almost everywhere differentiable because it is a super-cumulative distribution function. The expected terminal state-of-charge is jointly concave in x^b and x^r because -x^rφ(-x^b/x^r) is the negative perspective of the convex function φ(-x^b) and therefore concave <cit.>. As a perspective of a Lipschitz continuous convex function, the expected terminal state-of-charge is also globally continuous.
To see that the expected terminal state-of-charge is strictly increasing in x^b for every x^r > 0, note that
∂/∂ x^b𝔼[ y(x^b, x^r, δ̃, y_0, T) ]
= T( η^+ + η_d F( - x^b/x^r) ) > 0
∀ (x^b, x^r) ∈ℝ×ℝ_++
because η^+ > 0, η_d ≥ 0, and F is nonnegative. For x^r = 0, 𝔼[y(x^b,0, δ̃, y_0, T)] = y_0 + T (η^+ [x^b]^+ - 1/η^- [x^b]^-), which is strictly increasing in x^b since η^+ > 0 and η^- > 0.
Similarly, to see that
the expected terminal state-of-charge is nondecreasing in x^r, we note that
∂/∂ x^r𝔼[ y(x^b, x^r, δ̃, y_0, T) ]
=
- η_d T( φ(-x^b/x^r) + (x^b/x^r) F(-x^b/x^r))
≤ 0
∀ (x^b, x^r) ∈ℝ×ℝ_++.
To prove the inequality, we set z = -x^b / x^r and show that the function -η_d (φ(z) - z F(z)) is nonnegative. As φ(z) = 0 for all z ≤ -1, Lemma <ref> implies that φ(z) = z for all z ≥ 1. Thus, we have φ(z) - zF(z) = 0 for all | z |≥ 1. If z ∈ [-1, 0], then φ(z) - zF(z) ≥ 0 because φ and F are both nonnegative. Finally, if z ∈ [0,1] then we first note that
φ(1) = φ(z) + ∫_z^1 F(z') dz'
≤φ(z) + ∫_z^1 F(1) dz'
= φ(z) + F(1)(1-z).
Hence, we have
0 = φ(1) - F(1)
≤φ(z) + F(1)(1-z) - F(1)
= φ(z) - z F(1)
≤φ(z) - z F(z)
for every z ∈ [0,1], where the second inequality follows from the monotonicity of F.
Finally, the expected terminal state-of-charge is unbounded above in x^b because
lim_x^b →∞ y_0 + T(
η^+ x^b - η_d x^r φ( -x^b/x^r)
)
= lim_x^b →∞ y_0 + T η^+ x^b
= ∞,
where the first equality holds because φ(-x^b/x^r) = 0 for all x^b ≥ x^r.
The average expected charging rate ẏ is a positive affine transformation of the expected terminal state-of-charge. Proposition <ref> thus immediately implies that ẏ is continuous
and jointly concave in x^b and x^r. In addition, ẏ is strictly increasing and unbounded above in x^b, and nonincreasing in x^r. As ẏ is concave and strictly increasing in x^b, it is also unbounded below in x^b. Overall, ẏ is continuous and unbounded below and above in x^b, which means that the equation ẏ(x^b,x^r) = ẏ^⋆ has at least one solution x^b for any given x^r ∈ℝ_+. As ẏ is strictly increasing in x^b, this solution is also unique. The constraint ẏ(x^b, x^r) = ẏ^⋆ defines therefore a unique implicit function g:ℝ_+ →ℝ such that ẏ (g(x^r), x^r) = ẏ^⋆ for all x^r ∈ℝ_+.
As ẏ(x^b, x^r) is nonincreasing in x^r and strictly increasing in x^b, the equality ẏ(g(x^r), x^r) = ẏ^⋆ remains valid if and only if the implicit function g is nondecreasing.
As ẏ(x^b, x^r) is concave in x^b and x^r, the superlevel set is convex. As ẏ is strictly increasing in x^b, a point (x^b,x^r) satisfies ẏ(x^b, x^r) ≥ẏ^⋆ if and only if x^b ≥ g(x^r). The set C thus coincides with the epigraph of g. The convexity of C then implies that g is a convex function <cit.>.
The proof of Proposition <ref> reveals that the partial derivatives of the expected terminal state of charge with respect to x^b and x^r exist on ℝ and ℝ_++, which implies that the partial derivatives of ẏ also exist on ℝ and ℝ_++. Since ẏ is partially differentiable and continuous, the univariate function g will be continuous and differentiable almost everywhere. By the implicit function theorem <cit.>, the derivative of g is given by
g'(x^r)
= -( ∂ẏ(x^b, x^r)/∂ x^r )/( ∂ẏ(x^b, x^r)/∂ x^b )
= η_d ( φ(-x^b/x^r) + (x^b/x^r) F(-x^b/x^r) )/( η^+ + η_d F(-x^b/x^r) )
if it exists. To show that g'(x^r) = 0 for every x^r ∈ (0, | g(0) |), assume that g(0) ≠ 0, and note that
ẏ(g(0), | g(0) |)
= η^+ g(0) - η_d | g(0) | φ( -g(0)/| g(0)|)
= η^+ g(0) - η_d [g(0)]^-
= ẏ(g(0), 0),
where the first and third equalities follow from Proposition <ref>, and the second equality holds because φ(z) = 0 for z ≤ -1 and φ(z) = z for z ≥ 1. This implies that ẏ (g(| g(0) |), | g(0) |) = ẏ(g(0), 0), and thus g(| g(0) |) = g(0). As g is nondecreasing, it must be constant throughout the interval [0, | g(0) |], which means that g'(x^r) = 0 for all x^r in the interior of that interval.
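To make the implicit function g tangible, the following sketch solves ẏ(x^b, x^r) = ẏ^⋆ for x^b numerically and compares the derivative formula above with a finite-difference estimate. It assumes the logistic characteristic function used later in the case study; the efficiencies and the target rate ẏ^⋆ are illustrative values only, not parameters taken from the paper.

import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

eta_p, eta_m = 0.92, 0.92            # charging/discharging efficiency (illustrative values)
eta_d = 1.0 / eta_m - eta_p          # eta_d = 1/eta^- - eta^+
theta = 2.0 * np.log(2.0) / 0.0816   # logistic parameter with Delta = 0.0816

def F(z):                            # cumulative distribution function
    return expit(theta * z)

def phi(z):                          # ln(1 + exp(theta z)) / theta, written to avoid overflow
    return max(z, 0.0) + np.log1p(np.exp(-theta * abs(z))) / theta

def ydot(xb, xr):                    # average expected charging rate, x^r > 0
    return eta_p * xb - eta_d * xr * phi(-xb / xr)

ydot_star = 0.05                     # desired charging rate (illustrative value)

def g(xr):                           # unique root in x^b of ydot(x^b, x^r) = ydot_star
    return brentq(lambda xb: ydot(xb, xr) - ydot_star, -10.0, 10.0)

xr = 1.0
xb = g(xr)
g_prime = eta_d * (phi(-xb / xr) + (xb / xr) * F(-xb / xr)) / (eta_p + eta_d * F(-xb / xr))
finite_diff = (g(xr + 1e-6) - g(xr - 1e-6)) / 2e-6
print(g_prime, finite_diff)          # the two values should agree closely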
Note that the asymptotic slope of g is given by m = lim_x^r →∞ g'(x^r) = lim_x^r →∞g(x^r)/x^r because g is convex.
The limit exists and is bounded below because g is convex. In addition, we have
0 = lim_x^r →∞ẏ^⋆/x^r
= lim_x^r →∞ẏ(g(x^r), x^r)/x^r
= lim_x^r →∞η^+ g(x^r)/x^r - η_d φ( - g(x^r)/x^r),
implying that lim_x^r →∞g(x^r)/x^r < +∞ as lim_x^r →∞φ(-g(x^r)/x^r) = 0. Thus, m = lim_x^r →∞g(x^r)/x^r is finite, and m is a solution to the equation η^+ μ - η_d φ(-μ) = 0. As η_d = 1/η^- - η^+ and by Lemma <ref>, this equation is equivalent to m = (1 - η^+η^-)φ(m). It admits a unique solution within the interval [0,1), because the function s(μ) = μ - (1 - η^+η^-)φ(μ) is nondecreasing in μ, nonpositive for μ = 0, and strictly positive for μ = 1. The function s is nondecreasing as its derivative is nonnegative because F(m) ≤ 1 for all m ∈ℝ and because η^+η^- ∈ (0,1].
We know from Proposition <ref> that the asymptotic slope m is the unique solution to the equation s(μ) = 0, where s(μ) = μ - (1-η^+η^-)φ(μ). We have s(0) = -(1-η^+η^-)φ(0) ≤ 0 and s'(μ) = 1 - (1-η^+η^-)F(μ). As F(μ) ∈ [0,1] for all μ∈ℝ and as η^+η^-∈ (0,1], s' is nonnegative and s is nondecreasing in μ. An increase in Δ = 2φ(0) can decrease (but not increase) the intercept s(0), but it does not influence the slope s'. An increase in η^+η^- can increase (but not decrease) the intercept s(0) and the slope s'. The zero-crossing of s is thus nondecreasing in Δ and nonincreasing in η^+η^-, and so is m.
To show that m is convex and nonincreasing in the roundtrip efficiency η^+η^-, we first note that
m = min_μ{μ : μ≥ (1-η^+η^-)φ(μ)}
as it is never optimal to set μ > (1-η^+η^-)φ(μ) since φ is nondecreasing and continuous.
Next, we characterize φ as a pointwise supremum of affine functions.
As φ
is a continuous convex function, it is closed. By the envelope representation theorem, any closed convex function is the pointwise supremum of all affine functions below it <cit.>. For φ, specifically, we have φ(μ) = 0 for all μ≤ -1 and φ(μ) = μ for all μ≥ 1. It thus suffices to consider all affine functions a μ + b with a ≥ 0 and b ≥ 0 such that φ(μ) ≥ a μ +b. The highest possible slope of any such function is the highest possible slope of φ, which is 1. Similarly, the highest possible intercept of any such function is the intercept of φ, which is φ(0). Let A = { (a, b) ∈ [0, 1] × [0, φ(0)] : φ(μ) ≥ a μ + b ∀μ∈ℝ} be the set of all admissible coefficients for the affine functions. By the envelope representation theorem, we have φ(μ) = max_(a,b) ∈A a μ + b for all μ∈ℝ.
Sustituting this expression of φ into equation (<ref>) yields
m = min_μ{μ : μ≥ (1-η^+η^-) max_(a,b) ∈A a μ + b }
= min_μ{μ : μ≥ (1-η^+η^-) a μ + b ∀ (a,b) ∈A}
= min_μ{μ : (1 - a(1-η^+η^-))μ≥ (1 - η^+η^-)b ∀ (a,b) ∈A}
= max_(a,b) ∈Aς(a,b,η^+η^-),
where ς(a,b,η^+η^-) = (1 - η^+ η^-) b/(1 - a (1 - η^+ η^-)).
The first equality follows directly from substitution. The second equality holds because the inequality in the optimization problem is valid for all (a,b) ∈A if and only if it is valid for a pair (a,b) ∈A that maximizes the right-hand side of the inequality. Note that the embedded maximization in the inequality does indeed maximize the right-hand side since 1 - η^+η^- ≥ 0. The fourth equality holds
because 1- a(1-η^+η^-) > 0 as a ≤ 1 and η^+η^- ∈ (0,1].
We will now show that ς is convex in η^+η^- for all (a,b) ∈A, which will later imply that m is also convex in η^+η^-. Note that ς is twice differentiable in η^+η^-. In fact,
∂/∂ (η^+η^-) ς(a, b, η^+η^-)
= - b/(1 - a (1 - η^+η^-))^2 and ∂^2/∂ (η^+η^-)^2 ς(a, b, η^+η^-)
= 2ab/(1 - a(1-η^+η^-))^3.
As a ≥ 0 and b ≥ 0 for all (a,b) ∈A, the first and second derivatives are always nonpositive and nonnegative, respectively, which shows that ς is convex and nonincreasing in η^+η^-. Since the pointwise maximum of convex functions is a convex function <cit.>, the asymptotic slope m is thus also convex and nonincreasing in η^+η^-.
If ẏ^⋆ = 0, then we have
ẏ(g(x^r), x^r) = 0 ⟺ ẏ(g(x^r), x^r)/x^r = η^+ g(x^r)/x^r - η_d φ( - g(x^r)/x^r) = 0 ⟺ g(x^r) = m x^r
for all x^r > 0, where the second equivalence holds because m is the unique solution to η^+ m - η_d φ(-m) = 0, which is equivalent to m = (1 - η^+η^-)φ(m), as revealed by the proof of Proposition <ref>. For x^r = 0, we have g(x^r) = 0 = m x^r because g is continuous on ℝ_+. Thus, g(x^r) = m x^r.
The robust constraints are replaced by their deterministic counterparts from Proposition <ref>. The constraint on the expected terminal state-of-charge is enforced implicitly by expressing the decision variable x^b as the function g(x^r), characterized in Proposition <ref>.
The feasible set X is convex if the function q(x^r) = g(x^r) - ℓ(x^r) is monotonic, in which case there exists at most one intersection between g and ℓ. In the following, we show that Assumption <ref> implies that q is strictly decreasing. The slope of q is maximal when the slope of g is maximal and the slope of ℓ is minimal. The maximal slope of g is m, while the minimal slope of ℓ is γ/T. The function q is strictly decreasing if its maximal slope is strictly negative, which is the case if m < γ/T. As φ(m) = φ(-m) + m by Lemma <ref>, we have indeed
m = (1 - η^+ η^-) φ(m)
= ( 1/η^+η^- - 1 ) φ(-m)
≤( 1/η^+η^- - 1 ) φ(0)
< 1/2( 1/η^+η^- - 1 ) γ/T≤γ/T.
The first inequality holds because φ is nondecreasing and m ≥ 0. The strict inequality holds because φ(0) < γ/2T by Remark <ref>. The last inequality holds because η^+η^- ≥1/3 by Assumption <ref>.
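As a numerical illustration of this bound, the sketch below computes the asymptotic slope m as the root of s(μ) = μ - (1-η^+η^-)φ(μ) on [0,1), using the logistic characteristic function from the case study and the roundtrip efficiencies quoted there for lithium-ion, redox flow, and hydrogen storage, and checks m < γ/T for an activation ratio of 0.1. The printed values of m are computed output, not figures reported in the paper.

import numpy as np
from scipy.optimize import brentq

theta = 2.0 * np.log(2.0) / 0.0816               # logistic parameter, Delta = 0.0816

def phi(mu):                                     # ln(1 + exp(theta mu)) / theta, overflow-safe
    return max(mu, 0.0) + np.log1p(np.exp(-theta * abs(mu))) / theta

def asymptotic_slope(roundtrip_efficiency):
    s = lambda mu: mu - (1.0 - roundtrip_efficiency) * phi(mu)
    return brentq(s, 0.0, 1.0)                   # unique root of s in [0, 1)

activation_ratio = 0.1                           # gamma / T
for name, rt in [("lithium-ion", 0.85), ("redox flow", 0.60), ("hydrogen", 0.35)]:
    m = asymptotic_slope(rt)
    print(name, round(m, 4), m < activation_ratio)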
Problem (<ref>) minimizes the net cost T(c^b g(x^r) - c^r x^r) over X = [0, x̅^r]. Since g is convex by Proposition <ref>, its derivative g' is nondecreasing (where it exists). The net marginal cost T(c^b g'(x^r) - c^r) is thus nondecreasing.
As g is also proper and closed, Theorem 24.1 by <cit.> implies that the right and left derivative functions g'_+ and g'_- are nondecreasing and that g'_+ is a pointwise upper bound on g'_-.
If x̅^r = 0, the only feasible and therefore optimal solution is x^r_∗ = 0. If x^r > 0, we distinguish three cases based on the values of g'_+(0) and
g'_-(x̅^r).
If g'_+(0) ≥c^r/c^b, the marginal profit is nonpositive for all feasible values of x^r, and x^r_∗ = 0 is the smallest optimal solution.
Conversely, if
g'_-(x̅^r ) < c^r/c^b,
then the marginal profit is strictly positive for all feasible values of x^r and x^r_∗ = x̅^r is the only optimal solution.
Finally, if g'(0)_+ < c^r/c^b and
g'_-(x̅^r ) ≥c^r/c^b, then the set X_⋆ of roots to the net marginal cost in (0,x̅^r] is nonempty and compact because g is convex and continuous. Any root would be an optimal solution and, in particular, the smallest root x^r_∗ = minX_⋆ is an optimal solution. This solution exists because X_⋆ is compact.
As ξ̃ is supported on [-1,1], we have φ̲(ξ) = φ(ξ) for all |ξ|≥ 1. For any |ξ| < 1, φ̲(ξ) ≤φ(ξ) as φ̲ is the piecewise maximum of three affine functions that are tangent to the convex function φ at ξ = -1, ξ = 0, and ξ = 1. Conversely, φ̅(ξ) ≥φ(ξ) since φ̅ is the piecewise maximum of two linear interpolations of φ from ξ = -1 to ξ = 0 and from ξ = 0 to ξ = 1.
For any x^r ≥ 0, the constraint ẏ(x^b, x^r) = ẏ^⋆ implies that g(x^r), g̲(x^r), and g̅(x^r) are the unique roots of the functions s_φ, s_φ̲, and s_φ̅, respectively, where s_φ is defined as
s_φ(x^b) = ẏ^⋆ + η_d φ(-x^b/x^r)x^r - η^+ x^b
and the functions s_φ̲ and s_φ̅ are defined similarly. By definition, we have 0 = s_φ̲(g̲(x^r)) ≤ s_φ(g̲(x^r)), where the inequality holds because φ̲≤φ pointwise, see Lemma <ref>, and η_d x^r ≥ 0. Since s_φ and s_φ̲ are strictly decreasing by Proposition <ref>, we must have g(x^r) ≥g̲(x^r). A similar reasoning shows that g̅(x^r) ≥ g(x^r). As g̲, g, and g̅ are convex, the inequality g̲(x^r) ≤ g(x^r) ≤g̅(x^r) implies the inequality on the asymptotic slopes m̲≤ m ≤m̅.
If y_0 = y^⋆, then ẏ^⋆ = y^⋆ - y_0/T = 0 and so g(x^r) = m x^r by Lemma <ref>. Hence, the constraints in Problem (<ref>) obey the following equivalences.
x^r + g(x^r) ≤y̅^+ ⟺ x^r ≤y̅^+/(1+m)
x^r - g(x^r) ≤y̅^- ⟺ x^r ≤y̅^-/(1-m)
x^r + max{(T/γ) g(x^r), g(x^r) }≤ (y̅ - y_0)/(η^+ γ) ⟺ x^r ≤ (y̅ - y_0)/(η^+(γ + mT))
x^r - min{(T/γ) g(x^r), g(x^r) }≤η^- y_0/γ ⟺ x^r ≤η^- y_0/(γ(1-m))
The last two equivalences hold because T/γ≥ 1 and g(x^r) = m x^r ≥ 0 since any feasible x^r must be nonnegative and since m is nonnegative by Proposition <ref>.
If y_0 = y^⋆, then ẏ^⋆ = y^⋆ - y_0/T = 0 and so g(x^r) = m x^r by Lemma <ref>. Hence, g'_+(0) = g'_-(x̅^r) = m. Theorem <ref> implies that x^r_∗ = 0 if m ≥c^r/c^b and = x̅^r otherwise.
§ SUPPLEMENTARY MATERIAL
As supplementary material, we first provide additional descriptions and analysis of the case study presented in Section <ref> of the main paper. Next, we provide all proofs that are not essential enough to be included in Appendix <ref> of the main paper.
§.§ Detailed Model Parameters
§.§.§ Frequency Regulation
The technical term for the type of frequency regulation we consider is frequency containment reserves (FCR). France participates in a common European market for frequency containment reserves with a daily planning horizon.[<https://entsoe.eu/network_codes/eb/fcr>] We thus set T = 24 hours. In its regulation on frequency containment reserves the European Commission specifies that the “minimum activation period to be ensured by FCR providers [is not to be] greater than 30 or smaller than 15 minutes” and that storage operators “shall ensure the recovery of [their] energy reservoirs as soon as possible, within 2 hours after the end of the alert state” <cit.>, where an activation period designates a period of consecutive extreme frequency deviations δ(t) ∈{-1,1}.
The total activation period γ to be ensured over a period of 24 hours is thus between 2.75 hours and 5 hours.
The uncertainty set D contains all frequency deviation signals that correspond to a given total activation period γ. Some of these signals may exhibit activation periods that are longer than the minimum activation period prescribed by the European Commission. The uncertainty set D is therefore a conservative approximation of the regulation by the European Commission. The strength of the delivery guarantee required by the European Commission can nevertheless be measured by the activation ratio γ/T, that is, the fraction of time during which a storage operator must be able to provide all the regulation power she promised. When studying the profits a storage operator can reap from frequency regulation over the lifetime of an electricity storage device, we will vary the length of the planning horizon T while keeping the activation ratio constant. In line with the regulation by the European Commission, we will consider activation ratios of 0.1 and 0.2.
§.§.§ Electricity Prices
In the year 2019, the expected average price of regulation power was the same as the expected average availability price because 𝔼1/T∫_Tδ̃(t) p̃^d(t) dt vanished. In fact, the average value of 1/T∫_Tδ(t) p^d(t) dt over all days was -8.77· 10^-5.
The minimum availability, wholesale market, and retail market prices in any half-hour interval were 0.41cts/kW·h, -2.49cts/kWh, and 14.5cts/kWh, respectively.
Averaged over each day, the minimum daily average availability, wholesale market, and retail market prices were 0.41cts/kW·h, 0.37cts/kWh, and 14.5cts/kWh, respectively, which are all strictly positive. The ratio of daily average availability prices to daily average market prices of electricity was between 0.070 and 2.168 with an average of 0.251 for wholesale market prices, and between 0.026 and 0.133 with an average of 0.059 for retail market prices. Figure <ref> shows the empirical cumulative distribution function of this ratio for wholesale and retail market prices. When studying the profits from frequency regulation over the planning horizon T and over the lifetimes of electricity storage devices,
we will set the ratio of the expected average price of regulation power c^r to the expected average market price of electricity c^b to c^r/c^b = 0.251 for wholesale market prices and to c^r/c^b = 0.059 for retail market prices.
The desired charging rate ẏ^⋆ for meeting the terminal state-of-charge target influences the quantity and the price of regulation power that a storage operator can offer. Ideally, the storage operator would be able to meet the terminal state-of-charge target y^⋆ not just in expectation but exactly. In the following, we assume that this is the case and that the terminal state-of-charge target stays constant from one planning horizon to another. The desired charging rate vanishes therefore during any given planning horizon. In this case, the implicit function g is the linear function g(x^r) = mx^r by Proposition <ref>. The marginal cost of providing frequency regulation is thus Tmc^b.
If the desired charging rate was nonzero, then the marginal cost would be lower as the slope of g would converge to m only asymptotically. We thus overestimate the marginal cost of providing frequency regulation when assuming that ẏ^⋆ = 0. Regardless of the particular value of ẏ^⋆, it is therefore always profitable for the storage operator to sell as much regulation power as possible if the marginal revenue Tc^r of providing frequency regulation is higher than the marginal cost Tmc^b given ẏ^⋆ = 0, , if c^r/c^b > m.
§.§.§ Frequency Deviation Distribution
Over the years 2017 to 2019, the cumulative distribution function corresponding to ℙ_ξ can be approximated by a symmetric logistic function F with a maximum error of 0.018 with respect to the empirical cumulative distribution function constructed from about 9.5 million frequency recordings with a 10 second resolution. This justifies Assumption <ref>. Based on the logistic approximation, the cumulative distribution function and the characteristic function are given by F(ξ) = 1/(1+exp(-θξ)) and φ(ξ) = ln(1 + exp(θξ))/θ, respectively, where θ = 2ln(2)/Δ and Δ = 2φ(0) = 0.0816, which satisfies the condition Δ≤γ/T in Assumption <ref> as γ/T≥ 0.1. The coefficient θ was chosen such that frequency deviations have the same mean absolute deviation Δ under the logistic distribution as under the empirical distribution.
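A minimal numerical sanity check of these definitions (assuming only the values of θ and Δ given above) confirms that the logistic characteristic function reproduces Δ = 2φ(0):

import numpy as np

Delta = 0.0816
theta = 2.0 * np.log(2.0) / Delta

F = lambda xi: 1.0 / (1.0 + np.exp(-theta * xi))        # logistic cumulative distribution function
phi = lambda xi: np.log1p(np.exp(theta * xi)) / theta    # characteristic function

print(2.0 * phi(0.0), Delta)   # both equal 0.0816
print(F(0.0))                  # 0.5, as required by symmetry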
In principle, F should be a truncated logistic function as the support of ℙ_ξ, Ξ = [-1,1], is bounded. The truncation error, however, is just 2.5· 10^-8, which we deem to be negligible. We find that the mean absolute deviation of the frequency recordings is smaller than 0.1 on 96.7% of all days. If the uncertainty set D is parametrized by γ = 0.1T, the empirical frequency deviation signals fall thus outside of D on 3.3% of all days. On these days, the storage operator may stop delivering regulation power once the total absolute deviation ∫_0^t |δ(t') | dt' of the frequency deviation signal exceeds γ. In principle, the frequency deviation distribution should thus be estimated based on past frequency deviation signals with mean absolute deviation capped at the activation ratio γ/T. The distribution estimated directly on past frequency deviation signals will differ most from the distribution estimated on capped past frequency deviation signals if the activation ratio is equal to 0.1 rather than 0.2. In this case, the maximum difference between the two distributions is less than 0.001, which we consider negligible compared to the maximum difference of 0.018 between the empirical cumulative distribution function and the logistic function F.
Figure <ref> shows the empirical cumulative distribution function and its logistic approximation.
§.§ Detailed Results
§.§.§ Impact of Charging and Discharging Losses on the Amount of Regulation Power
Charging and discharging losses may impact the amount of regulation power x̅^r a storage device can provide. We have seen in Section <ref> that, if ẏ^⋆ = 0, then the storage device may be either energy-constrained or power-constrained.
If the storage device is power-constrained, then the storage operator can account for the roundtrip efficiency by dimensioning the discharging capacity y̅^- as a fraction 1-m/1+m of the charging capacity y̅^+ without restricting x̅^r. The fraction is 1 at a roundtrip efficiency of 1, and decreases to 0.92 as the roundtrip efficiency decreases to 0.35.
If the storage device is energy-constrained, then charging losses increase the amount of energy that the storage device can consume from the grid, while discharging losses decrease the amount of energy that the storage device can deliver to the grid. The storage operator can offer most regulation power x̅^r_⋆ to the grid operator if the initial state-of-charge is such that she can consume as much energy from the grid as she can provide to the grid. For roundtrip efficiencies in (0,1), the optimal initial state-of-charge y_0^⋆ is nondecreasing in the activation ratio. Given activation ratios between 0.1 and 0.2, y^⋆_0/y̅ is between 0.52 and 0.53 for lithium-ion batteries with a roundtrip efficiency of 0.85, between 0.57 and 0.60 for redox flow batteries with a roundtrip efficiency of 0.60, and between 0.66 and 0.69 for hydrogen storage with a roundtrip efficiency of 0.35.
Figure <ref> shows that the normalized regulation power x̅^r_⋆ / y̅/2γ is indeed nonincreasing in the charging efficiency η^+ and nondecreasing in the discharging efficiency η^-. The decrease in η^+ is more pronounced and the increase in η^- is less pronounced if the activation ratio is 0.2 rather than 0.1. Starting from a roundtrip efficiency of 1 and an activation ratio of 0.2, the normalized regulation power increases from 1 to 1.45 as the charging efficiency decreases from 1 to 0.35, and decreases from 1 to 0.51 as the discharging efficiency decreases from 1 to 0.35. At an activation ratio of 0.1, the normalized regulation power increases to only 1.37 as the charging efficiency decreases to 0.35, and decreases slightly further to 0.48 as the discharging efficiency decreases to 0.35. Although the normalized regulation power is lower, the absolute regulation power is considerably higher at an activation ratio of 0.1 rather than 0.2, because the normalization constant y̅/2γ is inversely proportional to γ.
§.§.§ Operating Profits
We established in Section <ref> that the profit per unit of regulation power increases with the roundtrip efficiency, while the maximum amount of regulation power increases with the discharging efficiency but decreases with the charging efficiency. The operating profit thus increases with the discharging efficiency, but it is unclear how it depends on the charging efficiency. Intuitively, the higher the market price of electricity, the more pronounced the increase of the profit per unit of regulation power in the charging efficiency. Similarly, the higher the activation ratio, the more pronounced the decrease of the maximum amount of regulation power in the charging efficiency. Figure <ref> shows that, for activation ratios of 0.2 and 0.1, the operating profit increases with the charging efficiency at retail electricity prices but not at wholesale electricity prices. In practice, however, the combined effect of charging and discharging losses is usually a decrease in the operating profit, even at wholesale prices. As examples, we consider again lithium-ion batteries, vehicle-to-grid, and hydrogen storage, with the same charging and discharging efficiencies as in Section <ref>. At an activation ratio of 0.2, the operating profit is 2.25cts/kWh in the absence of charging and discharging losses, regardless of the market price of electricity. For lithium-ion batteries, the operating profit reduces to 2.15cts/kWh at wholesale prices and to 1.96cts/kWh at retail prices. For vehicle-to-grid, the operating profit decreases further to 1.92cts/kWh at wholesale prices and to 1.53cts/kWh at retail prices. For hydrogen, finally, the operating profit falls to 1.49cts/kWh at wholesale prices and to 0.81cts/kWh at retail prices. If the activation ratio halves from 0.2 to 0.1, the operating profits roughly double.
§.§ Additional Proofs
The proof is similar to the one of Proposition 1 by <cit.>, which analyzes the state-of-charge of an electric vehicle battery providing frequency regulation. The difference is that we do not model any electricity consumption for driving. By definition,
y(x^b, x^r, δ, y_0, t) =
y_0 + ∫_0^t η^+ [x^b + δ(t') x^r ]^+ - 1/η^-[x^b + δ(t') x^r ]^- dt'
= y_0 + ∫_0^t min{η^+( x^b + δ(t')x^r ), 1/η^-( x^b + δ(t')x^r ) } dt',
where the second equality holds because 0 < η^+ ≤1/η^-. As η^+ > 0, η^- >0, and x^r ≥ 0, both η^+(x^b + δ(t')x^r) and 1/η^-(x^b + δ(t')x^r) are nondecreasing in δ(t') and strictly increasing in x^b. The minimum of two (nondecreasing/strictly increasing) affine functions is a concave (nondecreasing/strictly increasing) function <cit.>. The function y is thus concave strictly increasing in x^b, concave in x^r, concave nondecreasing in δ, and affine nondecreasing in y_0.
The claim follows immediately from the definition of D.
We first note that D can be replaced with D^+ on the left-hand side of equation (<ref>) because y is nondecreasing in δ by Proposition <ref> and because D is symmetric by Lemma <ref>.
By construction, D^+_↓⊆D^+. The claim thus follows if, for every signal δ∈D^+, we can construct a rearranged signal δ_↓∈D^+_↓ such that
y(x^b, x^r, δ, y_0, t) ≤
y(x^b, x^r, δ_↓, y_0, t) ∀ t ∈T.
We
fix an arbitrary δ∈D^+ and define its rearrangement
δ_↓(t) = sup{ρ∈ [0,1] : ∫_T 1_{δ(t') ≥ρ} dt' ≥ t}
involving the indicator function 1_{δ(t') ≥ρ} = 1 if δ(t') ≥ρ, and = 0 otherwise. Note that the maximization problem is feasible because δ takes values in [0,1]. We will now show that δ_↓∈D^+_↓.
To this end, we first note that, by definition, δ_↓(t) ∈ [0,1] for all t ∈T. The signal δ_↓ is also nonincreasing because the feasible set of the maximization problem becomes smaller as t grows.
In addition, δ_↓ is left-continuous. To see this, consider the random variable Y = δ(t'), where t' follows the uniform distribution on T. Then, δ_↓(t) = sup{ρ∈ [0,1] : ℙ[Y ≥ρ] ≥t/T}, which is left-continuous because the function ℙ[Y ≥ρ] is left-continuous and nonincreasing.
Since δ is Riemann integrable, it is also Lebesgue integrable. By the definition of the Lebesgue integral, we have
∫_T 1_{δ(t) ∈B} dt = ∫_T 1_{δ_↓(t) ∈B} dt
for any Borel set B⊆ [0, 1], which implies that
∫_T δ_↓(t) dt = ∫_T δ(t) dt
≤γ,
where the inequality holds because δ∈D^+. In summary, we have thus shown that δ_↓∈D^+_↓.
We will now show the inequality in (<ref>). The state-of-charge function can be written as
y(x^b, x^r, δ, y_0, t)
= y_0 + ∫_0^t min{η^+ (x^b + δ(t') x^r), 1/η^-(x^b + δ(t') x^r) } dt'
= y_0 + ∫_0^T min{η^+ (x^b + δ(t') x^r), 1/η^-(x^b + δ(t') x^r) }· 1_t' ≤ t dt'
≤ y_0 + ∫_0^T min{η^+ (x^b + δ_↓(t') x^r), 1/η^-(x^b + δ_↓(t') x^r) }· 1_t' ≤ t dt'
= y(x^b, x^r, δ_↓, y_0, t),
where the first equality follows from the definition of y and from 0 < η^+ ≤1/η^-. The second equality follows from the definition of the indicator function. The inequality is a variant of the Hardy-Littlewood rearrangement inequality <cit.>, which applies because 1_t' ≤ t is nonincreasing and because the integrand ẏ(δ(t')) = min{η^+ (x^b + δ(t') x^r), 1/η^- (x^b + δ(t') x^r) }
is nondecreasing in δ(t') and thus admits the nonincreasing rearrangement ẏ(δ_↓(t')).
We first prove that the symmetry of ℙ_ξ implies that F(z) + F(-z) = 1 + ℙ_ξ[{z}] for all z ∈ℝ. To see this, note that
F(z) + F(-z)
= ℙ_ξ((-∞, z]) + ℙ_ξ((-∞,-z])
= ℙ_ξ((-∞, z]) + ℙ_ξ([z,∞))
= 1 + ℙ_ξ[{z}].
Thus, h(z) = F(z) - 1/2(1 + ℙ_ξ[{z}]) = 1/2(1 + ℙ_ξ[{z}]) - F(-z) = -(F(-z) - 1/2(1 + ℙ_ξ[{-z}])) = -h(-z) is an odd function, which implies that
φ(z) = ∫_-∞^z F(z') d z'
= ∫_-∞^-z F(z') d z' + ∫_-z^z( F(z') - 1/2(1 + ℙ_ξ[{z'}]) + 1/2(1 + ℙ_ξ[{z'}]) ) dz'
= φ(-z) + ∫_-z^z1/2(1 + ℙ_ξ[{z'}]) dz'
= φ(-z) + z.
The last equality holds as ∫_-z^zℙ_ξ[{z'}] d z' = 0. Indeed, the function ℙ_ξ[{z'}] is nonzero only for countably many values of z' and is thus almost surely zero with respect to the Lebesgue measure.
|
http://arxiv.org/abs/2306.07103v1
|
20230612132748
|
Spectral Closure for the Linear Boltzmann-BGK Equation
|
[
"Florian Kogelbauer",
"Ilya Karlin"
] |
math.AP
|
[
"math.AP",
"math-ph",
"math.MP"
] |
Spectral Closure for the Linear Boltzmann-BGK Equation
ETH Zürich, Department of Mechanical and Process Engineering, Leonhardstrasse 27, 8092 Zürich, Switzerland
[email protected]
ETH Zürich, Department of Mechanical and Process Engineering, Leonhardstrasse 27, 8092 Zürich, Switzerland
[email protected]
We give an explicit description of the spectral closure for the three-dimensional linear Boltzmann-BGK equation in terms of the macroscopic fields, density, flow velocity and temperature. This results in a new linear fluid dynamics model which is valid for any relaxation time. The non-local exact fluid dynamics equations are compared to the Euler, Navier–Stokes and Burnett equations. Our results are based on a detailed spectral analysis of the linearized Boltzmann-BGK operator together with a suitable choice of spectral projection.
Ilya Karlin
July 31, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Since the invention of kinetic theory by Boltzmann <cit.> and Maxwell <cit.>, the fundamental question arose: What is the connection between kinetic equations and the equations for the motion of continua? Or, to phrase it differently: Can the governing equations of fluid dynamics be rigorously derived from kinetic theory?
This problem has a long history. Famously, in his speech at the International Congress of Mathematics in Paris in 1900, Hilbert proposed a program to derive the passage from the atomistic view of fluids and gases to the motion of continua <cit.>. One interpretation of this challenge, known as Hilbert's sixth problem in this context, aims to prove the convergence of kinetic models, such as the Boltzmann equation, to known hydrodynamic models such as the Euler and the Navier–Stokes equations <cit.>.
The derivation of hydrodynamics from kinetic models is often regarded as a closure problem where one seeks a self-consistent expression of the fluxes in the balance equations for primitive (macroscopic) fields of mass density, momentum density and energy density. On a formal level, a well-established approach to the closure problem is the Chapman–Enskog expansion <cit.>, where a Taylor series in powers of the Knudsen number (the molecular mean free path to a characteristic flow scale ratio) is invoked. The lower-order approximations lead to compressible Euler and Navier–Stokes–Fourier systems. The undeniable success of the Chapman–Enskog method is rooted in the evaluation of the phenomenological transport coefficients, viscosity and thermal conductivity for a one-component gas, in terms of the microscopic interaction potential between particles, as well as prediction of the thermodiffusion effect in gas mixtures <cit.>. On the other hand, extension of the Chapman–Enskog approximation beyond the classical Navier–Stokes–Fourier order, the Burnett and super-Burnett approximations <cit.>, encountered difficulties. Even in the simplest regime, while linearized around a global equilibrium, the higher-order hydrodynamic closure may exhibit an instability, as first shown by Bobylev <cit.> for the Burnett and the super-Burnett approximations for Maxwell's molecules. Since the global equilibrium is stable by way of the dissipative nature of the Boltzmann equation, Bobylev's instability is an artifact brought about by the Chapman–Enskog procedure. The problem of higher-order hydrodynamics is exacerbated in the non-linear regime. Indeed, as pointed out by Slemrod <cit.>, convergence of a singular expansion to the leading-order equation is by no means obvious: the formation of shocks might be an obstacle to global uniform convergence in the sense of solutions <cit.>. Furthermore, the expansion of a non-local operator in frequency space in terms of (local) differential operators may be problematic. As a remedy, Rosenau suggested a non-local closure <cit.> based on rational functions rather than polynomial approximations to the Chapman–Enskog solution.
A different approach is to address the problem of hydrodynamics from kinetics as a problem of invariant manifolds. This viewpoint was first suggested in a short paper by McKean <cit.> and expanded in a series of works by Gorban & Karlin <cit.>.
For model systems (Grad's moment systems <cit.>), it was shown that the method of invariant manifold is equivalent to exact summation of the Chapman–Enskog series to all orders <cit.>.
We term the latter case exact hydrodynamics since, once achieved, it furnishes the complete characterization of the hydrodynamic limit of the kinetic equation and hence, the rigorous and exact closure. In this setting, the problem remains non-trivial even in the linear case, for infinite-dimensional problems. Accurate numerical solutions were found in <cit.>, on the level of the linear Boltzmann-BGK kinetic model <cit.>, and extended to a finite-moment approximation of the linear Boltzmann equation for Maxwell's molecules in <cit.>.
In this work, the derivation of the exact (valid to all admissible scales) linear hydrodynamics is considered in two consecutive steps.
First, the slow invariant manifold is identified as the linear subspace spanned over the hydrodynamic spectrum of the linearized Boltzmann-BGK operator. This is achieved on the basis of the explicit solution to the eigenvalue problem presented recently in Kogelbauer & Karlin <cit.>. Let us refer to <cit.> for qualitative results on the spectra of general linear kinetic operators, including the existence of hydrodynamic branches, critical wave numbers, and local expansions for small wave numbers. In <cit.>, explicit spectral calculations have been performed for several kinetic models, including explicit expressions of critical wave numbers and branch merging.
However, the knowledge of the spectrum alone is not the final step towards the derivation of hydrodynamic equations. The next step is the projection of the dynamics onto the slow manifold in terms of primitive variables, density, momentum and energy (or temperature). To that end, we derive the hydrodynamic projection in two independent ways. First, we demonstrate that all information about the projection is essentially encoded in a function of eigenvalues, which we call spectral temperature. This direct computation uses specific features of the BGK model. On the other hand, the hydrodynamic projection can be equivalently derived on the basis of the Riesz spectral projector, a more general route applicable to a variety of linear kinetic problems. Both approaches are shown to be consistent with one another, and resulting in the unique hydrodynamic projection. Let us emphasize that we derive a closed-form expression for the transport coefficients in wave space (transport operators in physical space) in terms of eigenvalues.
The structure of the paper is as follows: Preliminaries in Sec. <ref> include the notation and nomenclature, in particular, the plasma dispersion function is introduced. Some useful properties of the plasma dispersion function necessary for the spectral analysis of the linearized Boltzmann-BGK operator are collected in Appendix <ref> for the sake of completeness. In Sec. <ref>, following the invariant manifold formulation of the closure problem <cit.>, we introduce the closure operator for a generic linear kinetic equation. While the majority of derivations of hydrodynamics proceed in terms of primitive variables to solve the invariance equation, our approach is different. We first recognize the slow invariant manifold from the analysis of the spectral problem and, secondly, induce the dynamics on this manifold in terms of primitive variables by a coordinate change from spectral variables to hydrodynamic fields. The realization of this program starts in Sec. <ref> where we first review analytical results on the spectral problem of the linearized Boltzmann-BGK model <cit.>. Sec. <ref> is devoted to the explicit construction of the coordinate change from spectral variables to macroscopic variables, involving a single analytic function depending on eigenvalues, called spectral temperature. In Sec. <ref>, we present the exact hydrodynamic equations for each wave vector in frequency space, as well as in physical space. While, classically, the closure is obtained through transport coefficients relating the dynamics of the macroscopic variables to each other, the exact hydrodynamic equations involve transport operators with finite frequency support acting on the corresponding variables. Finally, in Sec. <ref>, we compare the exact non-local hydrodynamics to local approximations such as the Euler equation, the Navier–Stokes–Fourier system and the Burnett system. In particular, we recover the approximate slow dynamics obtained through the Chapman–Enskog expansion. We conclude with a discussion in Sec. <ref>.
§ NOTATION AND BASIC DEFINITIONS
For a wave vector 𝐤∈ℤ^3, 𝐤=(k_1,k_2,k_3), we denote its wave number as
k:=|𝐤|=√(k_1^2+k_2^2+k_3^2).
For a given wave vector 𝐤≠ 0, we define a coordinate system with a component parallel and with two components orthogonal to 𝐤 by splitting any vector 𝐯∈ℝ^3 as
𝐯= 𝐯_∥+𝐯_⊥,
where 𝐯_∥=1/k^2(𝐯·𝐤)𝐤 and 𝐯_⊥=-1/k^2𝐤×(𝐤×𝐯), which satisfies 𝐯^⊥·𝐤 =0. This can be achieved by a rotation matrix 𝐐_𝐤 satisfying 𝐤=𝐐_𝐤(k,0,0)^T to give
𝐯=𝐐_𝐤(v_∥,v_⊥ 1,v_⊥ 2),
where v_∥ = 𝐯·𝐤 and (v_⊥ 1,v_⊥ 2) are the components of the unit base vectors of 𝐯_⊥. The matrix 𝐐_𝐤 can be determined by, e.g. the Rodrigues' rotation formula <cit.>:
𝐐_𝐤 =
[ k_1/k -k_2/k -k_3/k;
  k_2/k 1-k_2^2/(k^2+k_1 k) -k_2 k_3/(k^2+k_1 k);
  k_3/k -k_2 k_3/(k^2+k_1 k) 1-k_3^2/(k^2+k_1 k) ].
For later calculations, we also define the 5× 5 block-diagonal matrix
𝐐̃_𝐤 = (1,𝐐_𝐤,1).
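A quick numerical check of the rotation matrix 𝐐_𝐤 above is sketched below; the wave vector is an arbitrary illustrative choice, and the formula assumes k_1 ≠ -k so that the denominators k^2 + k_1 k do not vanish.

import numpy as np

def Q(kvec):
    k1, k2, k3 = kvec
    k = np.linalg.norm(kvec)
    d = k**2 + k1 * k                              # assumes k1 != -k
    return np.array([
        [k1 / k, -k2 / k,        -k3 / k],
        [k2 / k, 1 - k2**2 / d,  -k2 * k3 / d],
        [k3 / k, -k2 * k3 / d,   1 - k3**2 / d],
    ])

kvec = np.array([1.0, 2.0, -3.0])                  # illustrative wave vector
k = np.linalg.norm(kvec)
Qk = Q(kvec)
print(np.allclose(Qk @ Qk.T, np.eye(3)))           # orthogonality
print(np.isclose(np.linalg.det(Qk), 1.0))          # proper rotation
print(np.allclose(Qk @ np.array([k, 0.0, 0.0]), kvec))  # maps (k,0,0)^T to the wave vector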
We introduce the plasma dispersion function as the integral
Z(ζ) = 1/√(2π)∫_ℝ e^(-v^2/2)/(v-ζ) dv,
for any ζ∈ℂ∖ℝ. The function Z is analytic on each half plane {Im(ζ)>0} and {Im(ζ)<0} and satisfies the complex differential equation
dZ/dζ = -ζ Z-1.
As the name suggests, function (<ref>) appears in plasma physics in the context of Landau damping <cit.>. We collect further useful properties of the plasma dispersion function in the Appendix <ref> for the sake of completeness.
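The function Z is straightforward to evaluate numerically. The sketch below computes it by direct quadrature of the defining integral, cross-checks it on the upper half plane against the Faddeeva function w via the rescaling Z(ζ) = i√(π/2) w(ζ/√2) for Im ζ > 0 (a standard identity for the rescaled plasma dispersion function, not a formula from this paper), and verifies the differential equation dZ/dζ = -ζZ - 1 by central differences.

import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

def Z_quad(zeta):
    # direct quadrature of (2 pi)^(-1/2) * integral of exp(-v^2/2) / (v - zeta) dv
    a, b = zeta.real, zeta.imag
    dens = lambda v: np.exp(-v**2 / 2) / ((v - a)**2 + b**2)
    re = quad(lambda v: (v - a) * dens(v), -np.inf, np.inf)[0]
    im = quad(lambda v: b * dens(v), -np.inf, np.inf)[0]
    return (re + 1j * im) / np.sqrt(2.0 * np.pi)

def Z_faddeeva(zeta):
    # rescaled plasma dispersion function, valid for Im(zeta) > 0
    return 1j * np.sqrt(np.pi / 2.0) * wofz(zeta / np.sqrt(2.0))

zeta = 0.3 + 0.7j
print(Z_quad(zeta), Z_faddeeva(zeta))              # should agree

h = 1e-5                                           # check dZ/dzeta = -zeta Z - 1
dZ = (Z_quad(zeta + h) - Z_quad(zeta - h)) / (2.0 * h)
print(dZ, -zeta * Z_quad(zeta) - 1.0)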
Let ℋ denote a Hilbert space and let 𝐓:ℋ→ℋ be a linear operator with domain of definition 𝒟(ℋ). We denote the spectrum of 𝐓 as σ(𝐓) and its resolvent set as ρ(𝐓).
The main operator ℒ_𝐤 of this paper (to be defined later) will be defined on the Hilbert space
ℋ_𝐯 = L^2_𝐯(ℝ^3,e^-|𝐯|^2),
together with the inner product
⟨ f, g ⟩_𝐯 = (2π)^-3/2∫_ℝ^3 f(𝐯) g^*(𝐯) e^-|𝐯|^2/2 d𝐯.
For later calculation and to ease notation, we define the following set of basis vectors:
e_0(𝐯) = (2π)^-3/4,
e_1(𝐯) = (2π)^-3/4 v_1,
e_2(𝐯) = (2π)^-3/4 v_2,
e_3(𝐯) = (2π)^-3/4 v_3,
e_4(𝐯) = (2π)^-3/4 (|𝐯|^2-3)/√(6),
which satisfy the orthonormality relation
⟨ e_i, e_j ⟩_𝐯 = δ_ij, for 0 ≤ i,j ≤ 4,
where δ_ij is the Kronecker's delta. We bundle the basis functions (<ref>) into a single vector
𝐞=(e_0,e_1,e_2,e_3,e_4).
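A small numerical check of the orthonormality relation above is sketched below. It integrates against the normalized Gaussian weight (2π)^(-3/2) e^(-|𝐯|^2/2) d𝐯 with tensorized Gauss-Hermite quadrature and drops the constant (2π)^(-3/4) prefactors of the basis functions; keeping them would only rescale the Gram matrix by an overall constant.

import numpy as np
from numpy.polynomial.hermite import hermgauss
from itertools import product

x, w = hermgauss(8)                  # physicists' Gauss-Hermite nodes and weights
v1d = np.sqrt(2.0) * x               # change of variables v = sqrt(2) x
w1d = w / np.sqrt(np.pi)             # weights for expectations under N(0, 1)

def inner(f, g):
    # <f, g> = E[f(v) g(v)] for v ~ N(0, I_3), tensorized Gauss-Hermite quadrature
    s = 0.0
    for i, j, k in product(range(len(x)), repeat=3):
        v = np.array([v1d[i], v1d[j], v1d[k]])
        s += w1d[i] * w1d[j] * w1d[k] * f(v) * g(v)
    return s

basis = [
    lambda v: 1.0,
    lambda v: v[0],
    lambda v: v[1],
    lambda v: v[2],
    lambda v: (v @ v - 3.0) / np.sqrt(6.0),
]

gram = np.array([[inner(f, g) for g in basis] for f in basis])
print(np.round(gram, 10))            # the 5x5 identity up to quadrature accuracy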
For a one-body distribution function f: 𝕋^3×ℝ^3× [0,∞)→ℝ^+, we introduce the moments of the distribution function f as
𝐌^(n)(𝐱,t)=∫_ℝ^3f(𝐱,𝐯,t) 𝐯^⊗ nd𝐯,
where 𝐯^⊗ 0=1, 𝐯^⊗ 1=𝐯 and
𝐯^⊗ n=𝐯⊗...⊗𝐯_n-times,
for n≥ 2 is the n-th tensor power. The moment defined in (<ref>) is an n-th order symmetric tensor, depending on space and time.
Given a vector X=(x_1,…,x_n), we denote the set of cyclical permutations of X as
↻ (x_1,…,x_n) = {(x_1,x_2,…,x_n),(x_2,x_3,…,x_n,x_1),…,(x_n,x_1,…,x_n-1)}.
For a matrix A, we denote its adjugate as adj(A), which satisfies A adj(A)=det(A) Id.
We denote the strip between -a and 0 as
ℛ_a = {z∈ℂ: -a< Re z< 0}⊂ℂ.
§ THE CLOSURE PROBLEM FOR THE LINEAR BGK EQUATION
In this section, we recall the classical closure problem for kinetic equations in general and illustrate it on the BGK equation in particular. First, we formulate the governing equations suitable for our setting and illustrate the closure problems for the hierarchy of moment equations. Subsequently, we define the closure operator and outline the relation of the existence of a (slow) invariant manifold with an exact closure relation.
We will be interested in the three-dimensional BGK equation linearized around a global Maxwellian:
∂ f/∂ t+𝐯·∇_𝐱 f=-1/τL_ BGK[f],
for the deviation relative to the global Maxwellian f: 𝕋^3×ℝ^3× [0,∞)→ℝ, f=f(𝐱,𝐯,t) and the BGK collision operator
L_ BGK[f](𝐱,𝐯,t)=(f(𝐱,𝐯,t)-ℙ_5[f](𝐱,𝐯,t)).
The projection operator ℙ_5:ℋ_𝐯→ℋ_𝐯 is defined as
ℙ_5f = ∑_j=0^4 ⟨ f, e_j ⟩_𝐯 e_j,
i.e., the projection onto the first five basis vectors (<ref>). Clearly, (<ref>) defines an orthogonal projection with respect to (<ref>). Integrating equation (<ref>) in 𝐱 shows that the five basis functions (<ref>) are center modes (and the dynamic in these directions is conserved), since
L_ BGK[e_j]=0, for 0 ≤ j ≤ 4.
Expanding f in a Fourier series
f(𝐱,𝐯)= ∑_|𝐤|=0^∞f̂(𝐤,𝐯) e^i 𝐱·𝐤,
for the Fourier coefficients
f̂(𝐤,𝐯) = 1/(2π)^3∫_𝕋^3 f(𝐱,𝐯)e^-i 𝐱·𝐤 d𝐱,
the linear operator in (<ref>) can be unitarily conjugated to the family of operators
ℒ_𝐤 = -i 𝐯·𝐤 - 1/τ(1-ℙ_5),
indexed by the wave vector 𝐤.
Because of the normalization of the basis functions (<ref>), the relation to the macroscopic variables density ρ, velocity 𝐮 and temperature T is given by
ρ = ⟨ f, e_0⟩_𝐯 = (2π)^-3/2∫_ℝ^3 f(𝐯) e^-|𝐯|^2/2 d𝐯,
𝐮 = ⟨ f, (e_1,e_2,e_3)⟩_𝐯 = (2π)^-3/2∫_ℝ^3 f(𝐯)𝐯 e^-|𝐯|^2/2 d𝐯,
T =√(2/3)⟨ f, e_4⟩_𝐯 = (2π)^-3/2∫_ℝ^3 f(𝐯)(|𝐯|^2-3)/3 e^-|𝐯|^2/2 d𝐯,
which we bundle into a single vector
𝐡 = ([ ρ; 𝐮; √(3/2)T ]).
Because of the orthonormality relations (<ref>), we prefer to work with the basis (e_0,...,e_4). To account for the prefactor √(2/3) in the definition of the temperature in (<ref>) in the final (physically meaningful) dynamical equations, we have to multiply the last entries accordingly.
§.§ The Closure Problem
Let us recall the classical closure problem for kinetic equations illustrated on the BGK equation.
Multiplying the BGK equation (<ref>) with 𝐯^⊗ n and integrating in velocities gives the following hierarchy of moment equations
∂/∂ t𝐌^(n) = - ∇·𝐌^(n+1) -1/τ𝐌^(n) + 1/τ𝐌^(n)_ eq,lin,
where
𝐌^(n)_ eq,lin = 𝐌^(0)⟨𝐯^⊗ n, 1 ⟩ + ⟨𝐯^⊗ n, 𝐯·𝐌^(1)⟩ + 1/3(𝐌^(2)-Id_3× 3)⟨𝐯^⊗ n, |𝐯|^2-3/3⟩.
Ideally, we would like to obtain a closed system for the first few moments, thus allowing for a consistent dynamical description of macroscopic variables. As equation (<ref>) illustrates, however, the rate of change of 𝐌^(n) is always affected by the flux of the next moment ∇·𝐌^(n+1) (the term 𝐌^(n)_ eq,lin only comprises moments up to order two). Consequently, there is no way to obtain a self-consistent moment system from the full dynamics of the kinetic model (<ref>).
As a way out of this inconvenient matter of facts, we can constrain the dynamics of our system (<ref>) by assuming that the full dynamics is given parametrically as a function of, say, the five macroscopic variables
f(𝐱,𝐯,t) = F(ρ(𝐱,t),𝐮(𝐱,t),T(𝐱,t);𝐯).
Writing equation (<ref>) a bit more abstractly as
∂ f/∂ t = ℒ[f], ℒ = -𝐯·∇_𝐱-1/τ L_ BGK,
and denoting ℙ_5^⊥ = 1-ℙ_5, assumption (<ref>) corresponds to the existence of a linear operator 𝒞:range ℙ_5→range ℙ_5^⊥, called closure operator, such that
f=(1+𝒞)ℙ_5 f,
and the dynamics of the macroscopic variables ℙ_5 f can be written self-consistently as
∂ℙ_5 f/∂ t = ℙ_5ℒ(1+𝒞)ℙ_5 f,
while the closure operator 𝒞 satisfies the condition of exact closure:
(𝒞ℙ_5-ℙ_5^⊥)ℒ(1+𝒞)= 0.
Indeed, applying ℙ_5^⊥ to equation (<ref>) and using assumption (<ref>), we obtain
ℙ_5^⊥ℒ(1+𝒞)ℙ_5 f = ∂/∂ tℙ_5^⊥ f = 𝒞∂ℙ_5 f/∂ t.
Using now the reduced dynamics (<ref>), we arrive at
ℙ_5^⊥ℒ(1+𝒞)ℙ_5 f = 𝒞ℙ_5ℒ(1+𝒞)ℙ_5 f,
which is equivalent to (<ref>).
Since the operator ℒ can be written as the direct sum over operators ℒ_𝐤 for 𝐤∈ℤ^3, we effectively seek a closure operator for each wave vector by writing f̂_𝐤 = (1+𝒞_𝐤)ℙ_5 f̂_𝐤 with
∂ℙ_5 f̂_𝐤/∂ t = ℙ_5ℒ_𝐤(1+𝒞_𝐤)ℙ_5 f̂_𝐤,
and the condition of being an exact closure at each wave vector:
(𝒞_𝐤ℙ_5-ℙ_5^⊥)ℒ_𝐤(1+𝒞_𝐤)= 0,
for 𝐤∈ℤ^3.
Equation (<ref>) for the closure operator is a special case of the invariance equation, and is the cornerstone of any derivation of hydrodynamics from kinetic theory. For example, the Chapman–Enskog method is based on a Taylor series expansion in terms of a small parameter ϵ after a rescaling τ→ϵτ while the method of invariant manifold <cit.> uses Newton-type iterations. Note that, even in the simplest linear setting addressed here, the invariance equation (<ref>) is non-linear (quadratic) in the unknown closure operator. In <cit.>, a numerical solution to (<ref>) was obtained for the linear Boltzmann-BGK kinetic model.
Below, we shall circumvent solving the invariance equation (<ref>) directly. Instead, the exact closure operator shall be determined in two steps. First, we identify the slow invariant manifold based on the properties of the spectrum of ℒ_𝐤 (for details on the spectrum of the BGK equation, we refer to the following section). Indeed, for a certain range of wave numbers 0<k<k_ crit,min, we will see that the spectral properties of ℒ_𝐤 allow us to define a closure operator 𝒞_𝐤 which has the property that a general solution (restricted to 𝐤) approaches the dynamics of the 𝒞_𝐤-constrained ensemble exponentially fast in time. Second, we shall find a unique projection of the dynamics onto the slow invariant manifold in terms of primitive variables. The latter step is the main aspect of this work.
§ SPECTRUM OF THE LINEAR BGK EQUATION AND SPECTRAL CLOSURE
In this section, we first recall the properties of the BGK spectrum derived in <cit.>. This involves the three families of modal branches (diffusion, acoustics and shear), as well as their asymptotic behavior for small wave number. Then we define the hydrodynamic manifold as the eigenspace associated to the hydrodynamic modes and define the spectral closure. This will be achieved by a change of coordinates from spectral variables to macroscopic variables, thus providing a solution to (<ref>).
Because the hydrodynamic modes are slow, any trajectory of distribution functions (deviations relative to the global Maxwellian in the linear case) will approach this linear manifold exponentially fast in time. Consequently, the general moment dynamics (<ref>) will be approximated exponentially well in time by the self-consistent moment system derived from the spectral closure operator.
§.§ Properties of the Spectrum
In order to define the spectral closure for the BGK system, we recall the most important implications of the detailed spectral analysis performed in <cit.>, including a complete description of the eigenvalues above the essential spectrum for each wave number as zeros of a holomorphic spectral function, as well as the Taylor expansion in wave number. We define the hydrodynamic manifold as a linear combination of eigenvectors and derive the spectral dynamics on the manifold.
The spectrum of the linearized BGK operator ℒ with relaxation time τ around a global Maxwellian is given by
σ(ℒ) = {-1/τ+iℝ}∪⋃_N∈Modes⋃_|𝐤|<k_ crit,N{λ_N(τ|𝐤|)},
where Modes={shear, diff, ac, ac*} corresponds to the shear mode, the diffusion mode and the pair of complex conjugate acoustic modes. The essential spectrum is given by the line Re λ=-1/τ, while the discrete spectrum consists of a finite number of discrete, isolated eigenvalues. Along with each family of modes, there exists a critical wave number k_crit,N, limiting the range of wave numbers for which λ_N exists. The modes {diff, ac, ac*} all have algebraic multiplicity one, while the shear mode has algebraic and geometric multiplicity two.
The eigenvalues (in dependence of the wave number k and the relaxation time τ) are given as zeros of the spectral function:
Σ_k,τ(λ) = 1/6( kτ)^5(Z(ζ)-τ k)^2
×(ζ+6 k^3 τ ^3-ζ (ζ^2+5) k^2 τ ^2+2 (ζ ^2+3)k τ -4 Z^2 (ζ )((ζ ^2+1) k τ -ζ)
+Z(ζ ) (ζ ^2-(ζ^4+4 ζ ^2+11) k^2 τ ^2+2 k τζ ^3 -5) ) )|_ζ = τλ+1/kτ.
For a proof, we refer to <cit.>.
A typical argument plot of spectral function (<ref>) is shown in Figure <ref>. For k=0, the function (<ref>) collapses to a multiple of λ^5, recovering the center spectrum (conserved quantities) of (<ref>), see (<ref>). Increasing k, the zeros of Σ_k,τ branch out and decrease monotonically in their real parts.
Critical wave numbers for the branches are found to be <cit.>,
k_ crit(λ_ shear) = √(π/2)1/τ≈ 1.25331/τ,
k_ crit(λ_ diff) ≈ 1.3560 1/τ,
k_ crit(λ_ ac)= k_ crit(λ^*_ ac)≈ 1.3118 1/τ,
In particular, we note that the critical wave number of the shear mode is minimal, which implies that all three branches exists for 0<k<k_ crit and we set
k_ crit, min = k_ crit(λ_ shear) = √(π/2)1/τ.
The eigenvalues admit the following asymptotic expansions in terms of the wave number:
λ_ diff(k) = -τ k^2+9/5τ^3k^4+𝒪(k^6),
λ_ shear(k) = -τ k^2+τ^3 k^4+𝒪(k^6),
λ_ ac(k) = i√(5/3)k -τ k^2 + i 7τ^2/(6√(15)) k^3+62/45τ^3k^4+𝒪(k^5),
which can be seen from Taylor expanding λ in k and comparing powers in (<ref>), see also <cit.>.
Since zero is a five-fold degenerate eigenvalue for k=0, we do not expect - in general - that the eigenvalues depend analytically on k. Spectral perturbation theory only guarantees the expansion in a Puiseux series, i.e., a Taylor series in k^1/5. For the BGK equation, however, the fractional terms cancel out and only powers in k remain, which is consistent with Ellis & Pinsky <cit.>.
While the structure of the Boltzmann-BGK spectrum (<ref>) agrees with and is a special case of a more general linear Boltzmann equation <cit.>, the knowledge of the spectral function (<ref>) allows to discern more detailed analytical information about the hydrodynamic spectrum, in particular, the accurate estimate for the critical wave numbers (<ref>).
Figure <ref> shows the dependence of the modes on wave number in comparison to their leading-order polynomial approximation in (<ref>), which correspond to Euler and Navier–Stokes equations.
There exists exactly five discrete, isolated eigenvalues of the operator ℒ_𝐤 with an associated five-dimensional eigenspace, which we will call the hydrodynamic manifold. This manifold will serve as our constraint to define a closure operator. Because the five eigenvalues are above the essential spectrum, any solution restricted to the given wave number, will decay exponentially fast to the hydrodynamic manifold, rendering it a slow manifold.
In the following, we will be interested in the linear subspace generated by the eigenfunctions associated to Λ_ BGK = {λ_ diff,λ_ ac,λ_ ac^*, λ_ shear} at each wave number k. To ease notation, we bundle the eigenvalues in a vector (counted with multiplicity):
λ = (λ_ diff,λ_ ac,λ_ ac^*,λ_ shear,λ_ shear),
and define the diagonal matrix
Λ = diag(λ).
Also, we denote the set and vector of simple eigenvalues as
Λ_ simple = {λ_ diff,λ_ ac,λ_ ac^*}, λ_ simple = (λ_ diff,λ_ ac,λ_ ac^*).
For each wave vector 𝐤 with 0<k<k_ crit,min, the eigenspace associated to the modes spans a five-dimensional linear subspace, which we call the hydrodynamic manifold:
ℳ_ hydro,𝐤 = span_λ∈Λ_ simplef̂_λ(𝐤) ⊕span{f̂_λ_ shear,1(𝐤),f̂_λ_ shear,2(𝐤) }.
The hydrodynamic manifold (<ref>) is invariant with respect to the flow generated by (<ref>). We write
f̂_ hydro(𝐤,𝐯,t) = ∑_λ∈Λ_ simpleα_λ(t) f̂_λ(𝐯,𝐤)
+ α_λ_ shear,1(t)f̂_λ_ shear,1(𝐯,𝐤)
+α_λ_ shear,2(t)f̂_λ_ shear,2(𝐯,𝐤),
for a solution on ℳ_ hydro. The vector
α = (α_λ_ diff,α_λ_ ac,α_λ^*_ ac,α_λ_ shear, 1,α_λ_ shear,2),
is comprised of spectral variables or spectral coefficients.
§.§ Spectral Closure for the Boltzmann-BGK Equation
Given the hydrodynamic manifold ℳ_ hydro,𝐤 for 0<k<k_ crit,min, which is spanned by the eigenvectors f̂_λ, we define the spectral closure as,
𝒞_ spectral : range ℙ_5|_ℳ_ hydro,𝐤→range ℙ_5^⊥|_ℳ_ hydro,𝐤,
𝒞_ spectral (ℙ_5f̂_λ) = ℙ_5^⊥f̂_λ.
The closure operator (<ref>), defined only on the ℳ_ hydro,𝐤, maps - for each eigenvalue - the first five moments of an eigenvector to the orthogonal complement on the same eigenvector. The closure operator (<ref>) is defined with respect to the spectral basis, whereas the closure formalism (<ref>) is defined with respect to macroscopic variables. To obtain the corresponding change of coordinates, let us denote the first five elements of a simple eigenfunction f̂_λ as
η(λ) = ℙ_5f̂_λ,
while we write
η_1(λ_ shear) = ℙ_5f̂_λ_shear,1, η_2(λ_ shear) = ℙ_5f̂_λ_shear,2.
Taking projections in (<ref>), we have
𝐡̂_ hydro(𝐯,t) = ∑_λ∈Λ_ simpleα_λ(t) ℙ_5 f̂_λ(𝐯,𝐤) + α_λ_ shear,1ℙ_5f̂_λ_ shear,1(𝐯,𝐤)+α_λ_ shear,2ℙ_5f̂_λ_ shear,2(𝐯,𝐤),
where we have suppressed the explicit dependence on the wave vector 𝐤.
To ease notation, we define the 5× 5 matrix of spectral basis vectors for the BGK equation (see Theorem <ref>) as
𝐇 := [η(λ_ diff),η(λ_ ac),η(λ_ac^*),η_1(λ_ shear),η_2(λ_ shear) ],
which allows us to write the macroscopic variables on the hydrodynamic manifold as
𝐡̂_ hydro =𝐇α.
Since (<ref>) is composed of eigenvectors entirely, the evolution on the hydrodynamic manifold in terms of spectral variables simply becomes
d α/dt = Λα,
i.e., the spectral dynamics diagonalize completely since geometric and algebraic multiplicity are equal for each mode.
To define the spectral closure for macroscopic variables, we define
F_λ = [f̂_λ_ diff,f̂_λ_ ac,f̂_λ_ ac^*,f̂_λ_ shear,1,f̂_λ_ shear,2],
which implies that 𝐇=ℙ_5F_λ. Based on the coordinate change (<ref>) and (<ref>), the closure operator in macroscopic variables then reads
𝒞_ spectral𝐡̂_ hydro = 𝒞_ spectral𝐇α
= ℙ_5^⊥F_λα
= ℙ_5^⊥F_λ𝐇^-1𝐡̂_ hydro
provided that the inverse exists (which will be elaborated in Section <ref>).
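As an editorial illustration of the mechanics just described, the following minimal NumPy sketch assembles the closure from generic eigendata; the eigenvalues, eigenvectors and discretization size are random placeholders (not the actual BGK eigenfunctions), and only the relations 𝐇 = ℙ_5F_λ, 𝒞_spectral = ℙ_5^⊥F_λ𝐇^-1 and 𝐡(t) = 𝐇 e^Λ t𝐇^-1𝐡(0) follow the text.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 40                                   # size of a velocity-space discretization (placeholder)
lams = np.array([-0.10, -0.20 + 0.50j, -0.20 - 0.50j, -0.30, -0.30])   # placeholder eigenvalues
F_lam = rng.standard_normal((n, 5)) + 1j * rng.standard_normal((n, 5)) # placeholder eigenvectors

H = F_lam[:5, :]                         # H = P_5 F_lambda, spectral basis in macroscopic variables
H_inv = np.linalg.inv(H)                 # assumes det H != 0, as discussed in the text
closure = F_lam[5:, :] @ H_inv           # C_spectral in macroscopic variables: P_5^perp F_lambda H^{-1}

# evolution of the macroscopic moments on the hydrodynamic manifold: h(t) = H exp(Lambda t) H^{-1} h(0)
h0 = rng.standard_normal(5)
t = 1.0
h_t = H @ expm(np.diag(lams) * t) @ H_inv @ h0

# the full distribution on the manifold is then recovered as (h_t, closure @ h_t)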
We emphasize that the spectral closure for the BGK equation (<ref>) is an exact (invariant) closure in the sense of (<ref>) by construction. Indeed, we find that evaluating (<ref>) on 𝐡̂_ hydro gives:
(𝒞_ spectralℙ_5-ℙ_5^⊥)ℒ_𝐤(1+𝒞_ spectral)𝐡̂_ hydro = (𝒞_ spectralℙ_5-ℙ_5^⊥)ℒ_𝐤(1+𝒞_ spectral)𝐇α
= (𝒞_ spectralℙ_5-ℙ_5^⊥)ℒ_𝐤F_λα
= (𝒞_ spectralℙ_5-ℙ_5^⊥)F_λΛα
= (𝒞_ spectral𝐇-ℙ_5^⊥F_λ)Λα
=0,
where in the third step, we have used that columns of F_λ are eigenvectors of ℒ_𝐤.
Because of the existence of a critical wave number for each branch of eigenvalues (<ref>), a full set of five eigenvalues only exists up to k_ crit,min. For k>k_ crit,min, the modes vanish one by one, which implies that the full set of five macroscopic variables can no longer be resolved uniquely. In particular, the matrix 𝐇 might not be defined as a square matrix (<ref>), but rather as a rectangular matrix. Also, the inverse appearing in (<ref>) has to be understood as a generalized inverse (e.g., a pseudo-inverse). Implications of this degeneracy shall not be further discussed in this paper.
The next section will be devoted to the explicit calculation of the basis matrix 𝐇 (the invertibility of 𝐇 will be discussed in Section <ref>). Through a simple linear change of coordinates, we then obtain the dynamics for the macroscopic moments.
§ FROM SPECTRAL COORDINATES TO MACROSCOPIC VARIABLES
In this section, we construct the exact spectral closure for the BGK equation based on the knowledge of the spectrum (<ref>). We derive the coordinate change from spectral parameters to the primitive, macroscopic variables (i.e., the basis matrix 𝐇) in two consistent ways: First, we derive a general algebraic form of the first five moments of a simple eigenfunction from the interplay of the linear transport and collision operators. This analysis is specific to the BGK equation and depends on the specific form of the projection operator (<ref>).
Second, we use analytical spectral calculus and Riesz projections to obtain the same result and to show consistency of the two approaches. We emphasize that the approach via spectral projections, although equivalent, can be applied in a more general setting as well.
Before we proceed, let us collect some notation and results from <cit.> regarding the spectral problem of the linearized Boltzmann-BGK operator (<ref>), which will be useful in the following two subsections.
We define the Green's function matrices as
G_L(z,n,m) = ⟨ (τ𝐯·𝐤-ℙ_5-z)^-1e_n,e_m⟩_𝐯,
G_S(z,n,m) = ⟨ (τ𝐯·𝐤-z)^-1e_n,e_m⟩_𝐯,
which satisfy the equation <cit.>,
G_L=G_S+G_LG_S.
Assuming det(𝟙-G_S)≠ 0, equation (<ref>) can be solved to get
G_L=G_S(𝟙-G_S)^-1 = (𝟙-G_S)^-1 - 𝟙.
From <cit.> we know that the matrix z↦ G_S(z) can be conjugated using the rotation matrix (<ref>):
. G_S(z)|_z= kτζ- = 1/τ k𝐐̃_𝐤 (G(ζ)-τ k)𝐐̃_𝐤^T,
where the matrix G(ζ) reads,
G(ζ) = [ Z(ζ) 1+ζ Z(ζ) 0 0 ζ +(ζ^2-1)Z(ζ)/√(6); 1+ζ Z(ζ) ζ + ζ^2 Z(ζ) 0 0 ζ^2+(ζ^3-ζ)Z(ζ)/√(6); 0 0 Z(ζ) 0 0; 0 0 0 Z(ζ) 0; ζ +(ζ^2-1)Z(ζ)/√(6) ζ^2+(ζ^3-ζ)Z(ζ)/√(6) 0 0 ζ^3-ζ+(ζ^4-2ζ^2+5)Z(ζ)/6 ],
while ζ↦ Z(ζ) is the plasma dispersion function (<ref>).
Furthermore, the spectral function (<ref>) is related to Green's matrix G_S (<ref>) as <cit.>,
Σ_k,τ(ζ) = . (G_S(z)-)|_z= kτζ.
With (<ref>), (<ref>), (<ref>) and (<ref>), the determinant in (<ref>) is evaluated easily to get the closed-form expression for the spectral function (<ref>) obtained in <cit.>.
§.§ Spectral-to-Hydrodynamic Coordinate Transform by Spectral Temperature
An eigenvector f̂_λ of (<ref>) with eigenvalue λ satisfies the equation
-𝐯·𝐤f̂_λ -1/τf̂_λ +1/τℙ_5 f̂_λ = λf̂_λ,
or, equivalently,
f̂_λ = ℙ_5f̂_λ/τλ + 1 + τ𝐯·𝐤.
We emphasize that the denominator in (<ref>) is always non-zero for the range of k for which λ is defined since
-1/τ<λ(k) <0,
for 0<k<k_ crit,min.
Projecting equation (<ref>) via ℙ_5 gives the following implicit equation for the first five entries of an eigenvector:
η(λ) = ⟨𝐞(𝐯)·η(λ)/τλ + 1 + τ𝐯·𝐤,𝐞(𝐯)⟩_𝐯.
Writing η = (η_1,η_2,η_3,η_4,η_5) and integrating (<ref>) over 𝐯 implies the relation
-𝐤·(η_1,η_2,η_3) = λη_1,
or, equivalently, in terms of the splitting (<ref>):
(η_1,η_2,η_3) = λη_1/k^2𝐤 + (η_1,η_2,η_3)_⊥.
Using the Green's functions matrices (<ref>), we can rewrite (<ref>) as a functional eigenvalue problem:
η(λ) = G_S(-τλ-1) η(λ),
which, in the light of (<ref>), can be rewritten as
𝐐̃_𝐤^Tη∈(G(ζ)- k τ),
for (1+τλ)=k τζ. Given the structure of (<ref>) and the definition of the modes through (<ref>), we immediately see that
η_1(λ_shear)=𝐐̃_𝐤([ 0; 0; 1; 0; 0 ]), η_2(λ_shear)=𝐐̃_𝐤([ 0; 0; 0; 1; 0 ]),
since the middle block in G(ζ) decouples from the other part of the matrix (which of course corresponds to the factorisation in (<ref>)).
To obtain the structure of the simple eigenvectors, we first assume, by rescaling f̂_λ accordingly, that
⟨f̂_λ, 1 ⟩_𝐯 = 1.
We note, using (<ref>), that columns one, two and five of (<ref>) are scalar multiples of each other for (η_1,η_2,η_3)_⊥=0. Consequently, for a simple eigenvalue λ, we can set
η(λ) = ([ 1; λ/k^2𝐤; θ(λ) ]),
for some function λ↦θ(λ). We call the basis vectors (<ref>) for a simple eigenvalue k-aligned.
Since the two eigenvectors for the shear mode are completely explicit (<ref>), all non-trivial information of the spectral closure is encoded in the function λ↦θ(λ), which we call the spectral temperature. Since we know the structure of a simple eigenvector (<ref>), we can derive a formula for the spectral temperature as follows. Using (<ref>), we take an inner product of (<ref>) with e_4 to find
θ(λ) = ⟨1+λ/k^2𝐤·𝐯+θ(λ)e_4(𝐯)/τλ + 1 + τ𝐯·𝐤, e_4(𝐯)⟩_𝐯,
which can be solved to
θ(λ) = 1/k^2⟨k^2+λ𝐤·𝐯/τλ + 1 + τ𝐯·𝐤,e_4(𝐯)⟩_𝐯/1-⟨e_4(𝐯)/τλ + 1 + τ𝐯·𝐤,e_4(𝐯)⟩_𝐯.
Expression (<ref>) can then be evaluated explicitly for each simple λ. We refer to Subsection <ref> for an explicit formula and properties of θ.
In the next subsection, we will give an alternative derivation of (<ref>) and (<ref>) using spectral projections to emphasize consistency.
§.§ Spectral-to-Hydrodynamic Coordinate Transform by Riesz Projections
In the following, we show that the spectral basis (<ref>) and (<ref>) obtained in the previous section can be derived equivalently through spectral calculus. Indeed, for any set of discrete, isolated eigenvalues k↦Λ_ BGK(k)⊂ℂ, depending on wave number, we can define the Riesz projection as
ℙ_Λ = -1/2π∮_Γ(Λ_ BGK)(ℒ_𝐤-w)^-1 dw,
where Γ(Λ_ BGK) is a simple contour in the complex plane, encircling the full spectral set Λ_ BGK={λ_ diff,λ_ ac,λ_ ac^*,λ_ shear} once in positive direction. From analytical spectral calculus <cit.>, we know that (<ref>) is indeed a projection, whose range is given by the invariant subspace (generalized eigenspace) associated to Λ_ BGK. In particular, from (<ref>) it follows that
𝐇 = -1/2πℙ_5∮_Γ(Λ_ BGK) (ℒ_𝐤-w)^-1 dw ℙ_5.
Here, we have assumed that the five basis vectors 𝐞 are mapped to five linearly independent vectors on ℳ_ hydro(Λ) (which is indeed the case for the BGK equation). For a more general kinetic model, this might not be the case, and a non-invertibility of the spectral basis in terms of the macroscopic variables would indicate a restriction of the hydrodynamic dynamics for the given range of wave numbers.
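As a generic numerical illustration of such a Riesz projection (a sketch of ours, not specific to the BGK operator), the contour integral can be approximated by quadrature on a small circle enclosing a chosen eigenvalue of a finite-dimensional operator; the matrix, contour and node count below are placeholders.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
lams = np.linalg.eigvals(A)
dists = np.abs(lams - lams[0])
center, radius = lams[0], 0.2 * np.min(dists[dists > 1e-12])   # circle enclosing one eigenvalue only

M = 400                                        # quadrature nodes on the circular contour
phi = 2 * np.pi * np.arange(M) / M
w = center + radius * np.exp(1j * phi)
dw = 1j * radius * np.exp(1j * phi) * (2 * np.pi / M)

P = np.zeros((6, 6), dtype=complex)
for wk, dwk in zip(w, dw):
    P -= np.linalg.inv(A - wk * np.eye(6)) * dwk   # accumulate -(A - w)^{-1} dw
P /= 2j * np.pi                                    # P = -(1/2 pi i) \oint (A - w)^{-1} dw

# P is (numerically) idempotent and its trace equals the number of enclosed eigenvalues
print(np.linalg.norm(P @ P - P), np.trace(P))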
Setting w=-1/τ(z+1), the resolvent transforms according to
(ℒ_𝐤-w)^-1 = (-𝐯·𝐤-1/τ+1/τℙ_5+1/τ(z+1))^-1
=-τ(τ𝐯·𝐤-ℙ_5-z)^-1.
Using the second resolvent identity together with (<ref>) and dw=-1/τdz, we can then write:
ℙ_Λe_m = -1/2π∮_Γ(Λ_ BGK)(ℒ_𝐤-w)^-1e_m dw
= -1/2π∮_Γ(Λ_τ)(τ𝐯·𝐤-ℙ_5-z)^-1e_m dz
= -1/2π∮_Γ(Λ_τ)(τ𝐯·𝐤-z)^-1e_m+ (τ𝐯·𝐤-z)^-1ℙ_5(τ𝐯·𝐤 - ℙ_5 - z)e_m dz
= -1/2π∮_Γ(Λ_τ)(τ𝐯·𝐤-z)^-1∑_j=0^4⟨(τ𝐯·𝐤-ℙ_5-z)^-1e_m,e_j⟩_𝐯e_j dz
=-1/2π∮_Γ(Λ_τ)∑_j=0^4G_L(z,m,j)(τ𝐯·𝐤-z)^-1e_j dz,
where we have set
Γ(Λ_ BGK) = -1/τ (Γ(Λ_τ)+1).
With the notation (<ref>), we have that w∈ℛ_a if and only if z ∈ℛ_1. Using relation (<ref>) between the Green's function matrices G_S and G_L together with the fact that z↦ G_S(z) is holomorphic in ℛ_1, we arrive at
⟨ℙ_λ e_m,e_n⟩_𝐯 =-1/2π∮_Γ(Λ_τ)∑_j=0^4G_L(z,m,j)⟨(τ𝐯·𝐤-z)^-1e_j,e_n⟩_𝐯 dz
= -1/2π∮_Γ(Λ_τ)∑_j=0^4G_L(z,m,j)G_S(z,j,n) dz
= -1/2π∮_Γ(Λ_τ) (G_L(z)G_S(z))_n,m dz
= -1/2π∮_Γ(Λ_τ) G_L(z,n,m) dz
= 1/2π∮_Γ(Λ_τ) [(G_S(z)-)^-1]_n,m dz.
By relation (<ref>) and by applying the Residue Theorem, we find that
⟨ℙ_λ e_m,e_n⟩_𝐯 = -1/2π∮_Γ(Λ_τ) [(-G_S(z))^-1]_n,m dz
=-1/2π∮_Γ(Λ_τ)Σ_k,τ^-1(z)( - G_S(z)) dz
= ∑_λ_τ∈Λ_τRes_z→λ_τΣ_k,τ^-1(z)(G_S(z)-).
For a simple eigenvalue λ_τ, the function Σ_k,τ^-1 has a pole of order one at λ_τ and
⟨ℙ_λ e_m,e_n⟩_𝐯∼( G_S(λ_τ)-),
where ∼ indicates equality up to multiplication by a complex number, while at the shifted shear mode λ_shear,τ, the function Σ_k,τ^-1 has a pole of order two and
⟨ℙ_λ e_m,e_n⟩_𝐯 = Res_z→λ_shear,τΣ_k,τ^-1(z)(G_S(z)-)
= lim_z→λ_shear,τd/dz[(z-λ_shear,τ)^2Σ_k,τ^-1(z)(G_S(z)-)]
=lim_z→λ_shear,τ[(z-λ_shear,τ)^2Σ_k,τ^-1(z)]'(G_S(λ_shear,τ)-)
+lim_z→λ_shear,τ(z-λ_shear,τ)^2Σ_k,τ^-1(z)[( G_S(z)-)]'
∼lim_z→λ_shear,τ[( G_S(z)-)]',
where prime denotes the derivative d/dz, and where we have used that adj(A)=0 whenever dim ker(A)≥ 2.
The formula for the adjugate matrix,
adj(AB)=adj(B) adj(A),
in combination with (<ref>) allows us to simplify (<ref>) further:
(G_S(z)-) = 1/(τ k)^4𝐐̃_𝐤(G(ζ)-τ k)𝐐̃_𝐤^T.
For a simple isolated eigenvalue λ, the kernel of G is one dimensional and hence, there exists a complex function ζ↦ g(ζ) and a complex vector function ζ↦𝐚(ζ) such that
(G(ζ)-τ k ) = g(ζ)𝐚(ζ)⊗𝐚^T(ζ),
and consequently,
(G_S(z)-) = 1/(τ k)^4𝐐̃_𝐤(G(ζ)-τ k)𝐐̃_𝐤^T
= g(ζ)/(τ k)^4𝐐̃_𝐤 (𝐚(ζ)⊗𝐚^T(ζ))𝐐̃_𝐤^T
=g(ζ)/(τ k)^4 [𝐐̃_𝐤𝐚(ζ)⊗ (𝐐̃_𝐤𝐚(ζ))^T].
From (<ref>) it suffices to know one row or column of (G(ζ)-τ k) to deduce (G_S(ζ)-) completely. Indeed, the last column of (G(ζ)-τ k) can be calculated easily and we set
𝐚(ζ) = (
[ τ k/√(6)(ζ+(ζ^2-1)Z(ζ)); 1/√(6)(1+ kτζ)(ζ+(ζ^2-1)Z(ζ)); 0; 0; -1-k^2τ^2- k τζ -( k τ+ζ + k τζ^2 )Z(ζ) ]).
A lengthy but elementary calculation shows that,
.d/dζ (G(ζ)-τ k)|_ζ=τλ_ shear+1/kτ = [ 0 0 0 0 0; 0 0 0 0 0; 0 0 A 0 0; 0 0 0 A 0; 0 0 0 0 0; ],
for the non-zero complex number
A=-i λ_ shear(k^4 τ^4 +(λ_ shearτ)^4 +(τλ_ shear )^3+λ_ shearτ^3k^2)/6 k,
which gives, again, the two basis vectors (<ref>) for the eigenspace associated with the shear mode.
In conclusion, we see that the approach via Riesz projections is equivalent to the direct calculations performed in the previous section. Indeed, dividing the vector in (<ref>) by its first entry, we recover the form (<ref>) of the basis vectors for the simple eigenvalues - the exact form of the fifth entry will be determined in the following section. Similarly, we have shown that evaluating the complex residue around the two-fold degenerate eigenvalue λ_ shear in (<ref>) produces the same basis vectors as (<ref>). In the following section, we will put these basis vectors together to give an explicit description of the coordinate change from spectral variables to macroscopic variables.
§ HYDRODYNAMIC EQUATIONS FROM SPECTRAL CLOSURE
In this section, we derive the evolution equations for the macroscopic variables (<ref>) explicitly, based on the change of coordinates (<ref>). First, we analyze the spectral temperature in more detail. As a next step, we describe the transport coefficients arising in the hydrodynamic equations qualitatively and show explicitly how they relate to the eigenvalues.
§.§ Properties of the Spectral Temperature
In this subsection, we derive an explicit expression of the spectral temperature and prove some symmetry properties. To this end, we could either evaluate the quotient (<ref>) or just divide (<ref>) by its first entry. Indeed, consistency of the two expressions can be checked easily and we proceed by dividing (<ref>) by τ k/√(6)(ζ+(ζ^2-1)Z(ζ)) to recover the k-aligned form (<ref>) with explicit spectral temperature
θ(λ) = .√(6)(k^2 τ ^2+ζ k τ +Z(ζ ) (ζ +(ζ ^2+1) k τ)+1)/k τ(ζ +(ζ ^2-1) Z(ζ ))|_ζ = τλ+1/τ k
= √(6)((k^2 τ ^2-τλ (τλ +1)) Z( (τλ +1)/k τ)- k τ(k^2 τ ^2-τλ))/(k^2 τ
^2+(τλ +1)^2) Z( (τλ +1)/k τ)- k τ (τλ +1).
Function (<ref>) is an analytic function on the strip ℛ_1/τ, see Figure <ref>.
Using (<ref>) from Appendix <ref>, we find that
Z(τλ+1/kτ)^* =[ .√(π/2) e^-ζ^2/2[(ζ)-(-ζ/√(2))] |_ζ = τλ+1/kτ]^*
= - .√(π/2) e^-ζ^2/2[-(ζ)-(ζ/√(2))] |_ζ = -τλ^*+1/kτ
= -.√(π/2) e^-ζ^2/2[-(-ζ)-(-ζ/√(2))] |_ζ = τλ^*+1/kτ
= - Z(τλ^*+1/kτ),
which implies that
θ(λ)^* = [√(6)((k^2 τ ^2-τλ (τλ +1)) Z( (τλ +1)/k τ)- k τ(k^2 τ ^2-τλ))/(k^2 τ
^2+(τλ +1)^2) Z( (τλ +1)/k τ)- k τ (τλ +1)]^*
= -√(6)((k^2 τ ^2-τλ^* (τλ^* +1)) Z( (τλ^* +1)/k τ)+ k τ(k^2 τ ^2-τλ^* ))/-(k^2 τ
^2+(τλ^* +1)^2) Z( (τλ^* +1)/k τ)+ k τ (τλ^* +1)
= θ(λ^*).
In particular, we conclude that θ|_ℝ⊆ℝ. This symmetry property will be useful in the following subsection, where we evaluate the spectral temperature on the simple eigenvalues to determine the change of coordinates (<ref>) explicitly.
§.§ Hydrodynamics in k-space
Using the k-aligned basis vectors (<ref>) and (<ref>), the change of coordinates from spectral to macroscopic variables (<ref>) takes the form,
𝐇 = 𝐐̃_𝐤[ 1 1 1 0 0; iλ_ diff/k iλ_ ac/k iλ_ ac^*/k 0 0; 0 0 0 1 0; 0 0 0 0 1; θ(λ_ diff) θ(λ_ ac) θ(λ_ ac^*) 0 0 ].
Its determinant is given by
det𝐇 = i/k[( λ_ diff-λ_ ac^*) θ(λ_ ac) + (λ_ ac - λ_ diff)θ(λ_ ac^*) + (λ_ ac^*-λ_ ac) θ(λ_ diff)]
=i/k(-2i Im(λ_ ac)θ(λ_ diff)+2i Im[(λ_ ac - λ_ diff)θ(λ_ ac^*)])
= 2/k(Im(λ_ ac)θ(λ_ diff)-Im[(λ_ ac - λ_ diff)θ(λ_ ac^*)]),
which defines a real-valued function of the wave number. A plot of k↦det𝐇(k) is shown in Figure <ref>, which already indicates that 𝐇 is invertible for all wave numbers 0≤ k≤ k_ crit,min.
Using (<ref>) and the invertibility of 𝐇, the dynamics for the macroscopic variables on the hydrodynamic manifold are then given by
∂𝐡̂_ hydro/∂ t = 𝐇Λ𝐇^-1𝐡̂_ hydro.
We remark that the change of coordinates (<ref>) on the hydrodynamic manifold does not involve any expression depending on the shear mode λ_ shear explicitly (see (<ref>)).
The closure operator, however, will inevitably involve terms that depend on λ_ shear as well. In particular, the hydrodynamics (<ref>) depend on the shear mode through Λ.
We note that the hydrodynamics (<ref>) could be extended beyond the minimal critical wave number by setting λ_N(k) =-1/τ for k>k_ crit,N. Even though, strictly speaking, the eigenvalue does not exist beyond that point, we can, nonetheless, define the hydrodynamic equations by requiring the decay rate of a mode to coincide with the overall minimal decay rate -1/τ.
A cumbersome but elementary calculation shows that
𝐇Λ𝐇^-1 = 𝐐̃_𝐤[ 0 - k 0 0 0; C_1 C_2 0 0 C_3; 0 0 λ_ shear 0 0; 0 0 0 λ_ shear 0; C_4 C_5 0 0 C_6 ]𝐐̃_𝐤^T,
for the following cyclic quantities
C_1 = 1/k^2 det𝐇∑_(λ_1,λ_2,λ_3)∈↻λ_ simpleλ_1λ_3(λ_1-λ_3)θ(λ_2),
C_2 = i/k det𝐇∑_(λ_1,λ_2,λ_3)∈↻λ_ simple (λ_1^2-λ_3^2)θ(λ_2),
C_3 = -1/k^2 det𝐇∏_(λ_1,λ_2,λ_3)∈↻λ_ simple (λ_1-λ_2),
C_4 = i/k det𝐇∑_(λ_1,λ_2,λ_3)∈↻λ_ simpleλ_2 (λ_1-λ_3)θ(λ_1)θ(λ_3),
C_5 = 1/det𝐇∑_(λ_1,λ_2,λ_3)∈↻λ_ simple(λ_3 - λ_1)θ(λ_1)θ(λ_3),
C_6 = -i/k det𝐇∑_(λ_1,λ_2,λ_3)∈↻λ_ simpleλ_1θ(λ_1)(λ_2-λ_3),
where we have used the notation for cyclical permutations outlined in (<ref>).
In Appendix <ref>, an explicit expansion of the quantities in (<ref>) is performed. We can show that C_1, C_3 and C_5 are purely imaginary numbers, whereas C_2, C_4 and C_6 are purely real numbers. Thus we set
C_1 = i c_1, C_2 = c_2, C_3 = i c_3, C_4 = c_4, C_5 = i c_5, C_6 = c_6,
for c_j∈ℝ, 1≤ j≤ 6, and the full hydrodynamics become
∂𝐡̂_ hydro/∂ t =𝐐̃_𝐤[ 0 -i k 0 0 0; i c_1 c_2 0 0 i c_3; 0 0 λ_ shear 0 0; 0 0 0 λ_ shear 0; c_4 i c_5 0 0 c_6 ]𝐐̃_𝐤^T𝐡̂_ hydro.
Figure <ref> depicts the coefficients (<ref>) as functions of the wave number (compared to the Navier–Stokes–Fourier approximation, see Section <ref>).
In summary, the hydrodynamic equations in k-space are explicitly related to the spectral problem for the linear part through the six transport coefficients {c_j}_1≤ j≤ 6.
§.§ Hydrodynamics in real space
Let us transform equation (<ref>) back to physical coordinates. To this end, we note that
𝐐_𝐤diag(C_2,λ_ shear,λ_ shear)𝐐_𝐤^T = λ_ shear_3× 3 + 𝐐_𝐤diag(C_2-λ_ shear,λ_ shear,λ_ shear)𝐐_𝐤^T
=λ_ shear_3× 3 + 1/k^2(C_2-λ_ shear)𝐤⊗𝐤^T,
where we have used the definition of 𝐐_𝐤 in (<ref>). Consequently, in physical space, the right-hand side of equation (<ref>) translates to a linear integral operator, which can be written as
∂/∂ t([ ρ; 𝐮; T ]) = ([ -∇·𝐮; I_1(Δ)∇ρ + I_ shear(Δ)𝐮+I_2(Δ)∇(∇·𝐮) + I_3(Δ)∇ T; I_4(Δ)ρ + I_5(Δ)∇·𝐮 + I_6(Δ) T ]),
where the integral operators {I_j}_1≤ j≤ 6 and I_ shear are related to (<ref>) via Fourier series, multiplication/division by √(3/2) and the rotation matrix (<ref>). The differential operators are to be understood with respect to 𝐱, and Δ=∇·∇ is the Laplacian. We have also used the symmetry properties under k↦ -k, which follow from the explicit form in Appendix <ref> and the symmetry of the eigenvalues.
Let us summarize the derivation of the non-local, exact hydrodynamics (<ref>). The construction begins with the evaluation of the eigenvalues as zeros of the spectral function (<ref>). Since they are given explicitly as solutions of a transcendental equation, we can analyse them and use them in the further analysis. Indeed, up to the minimal critical wave number, there always exists a five-dimensional invariant plane spanned by the eigenvectors of ℒ_𝐤. Using these eigenfunctions, we can construct the spectral closure (<ref>) and define the coordinate change (<ref>) from spectral variables to macroscopic variables. Once this is achieved, we can write down the exact hydrodynamic equations on the hydrodynamic manifold (<ref>), which will attract all generic trajectories exponentially fast. The properties of the transport coefficients (<ref>) can then be analysed in detail.
Since we posed the governing equations (<ref>) on the three-dimensional torus, only finitely many wave numbers contribute in the hydrodynamic equation (<ref>). Consequently, the action of the integral operators {I_j}_1≤ j≤ 6 and I_ shear can be written as the convolution with an integral kernel of the form
K_j(𝐱)= ∑_k=0^k_ crit,minK̂_j(k) e^i𝐤·𝐱,
where 1≤ j≤ 6 or j= shear and for coefficients K̂_j(k)∈ℝ. Because of the symmetry properties of the eigenvalues, K̂_j actually depends on k only through k^2, corresponding to the dependence of I_j on the Laplacian Δ only. An effective approximation of the coefficient functions K̂_j will be discussed in a forthcoming paper.
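The following short sketch (ours; the coefficient function and cut-off are placeholders, not actual BGK transport coefficients) illustrates how a wave-number-dependent coefficient that depends on k only through k^2 defines such a real convolution kernel on a one-dimensional torus.

import numpy as np

L = 2 * np.pi                               # period of the torus (placeholder)
k_max = 8                                   # stand-in for the cut-off wave number
ks = np.arange(0, k_max + 1)
K_hat = 1.0 / (1.0 + ks**2)                 # placeholder coefficient, a function of k^2 only

x = np.linspace(0.0, L, 256, endpoint=False)
K = np.zeros_like(x)
for k, c in zip(ks, K_hat):
    K += c * np.cos(k * x)                  # k and -k contribute equally, so the kernel is real and even

# the corresponding operator acts as the circular convolution f -> (1/L) * (K * f) on the torus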
The quantities c_2,λ_ shear are viscosity terms, while the term c_6 can be regarded as a non-local (wave-number-dependent) version of the thermal diffusivity. We may re-introduce units by scaling according to
𝐱↦ L^-1𝐱, k↦ L k, 𝐯↦ v^-1_ thermal𝐯, τ↦ t_ thermal^-1τ_ relax,
for a specific length scale L, the thermal velocity v_ thermal, the thermal time t_ thermal and the relaxation time τ_ relax. Given the Boltzmann constant k_ B≈ 1.38× 10^-23 m^2 kg s^-2 K^-1, a specific particle mass m and a reference temperature T_0, the thermal quantities are defined as
t_ thermal = L√(m/k_ B T_0), v_ thermal = √(k_ B/m T_0).
We also define the mean-free path length as
l_ mfp = τ L = τ_ relax v_ thermal.
The macroscopic variables re-scale according to
ρ↦ρ_0^-1ρ, 𝐮↦ v_ thermal^-1𝐮, T↦ T_0^-1T,
where ρ_0 is a reference density and T_0 is a reference temperature. We note that the transport coefficients (<ref>) can be written as
c_1(k,τ) = k c̃_1[(τ k)^2], c_2(k,τ) = τ^-1c̃_2[(τ k)^2], c_3(k,τ) = k c̃_3[(τ k)^2],
c_4(k,τ) = τ^-1c̃_4[(τ k)^2], c_5(k,τ) = k c̃_5[(τ k)^2], c_6(k,τ) = τ^-1c̃_6[(τ k)^2],
see also the asymptotic expansions (<ref>) in the next section.
Finally, the hydrodynamic equations (<ref>) can be cast in the form
∂ρ/∂ t = -ρ_0∇·𝐮,
∂𝐮/∂ t = k_ BT_0/mρ_0ℐ_1[l_ mfp^2Δ]∇ρ + 1/τ_ relaxℐ_ shear(l_ mfp^2Δ)𝐮+l_ mfp^2/τ_ relaxℐ_2(l_ mfp^2Δ)∇(∇·𝐮) +k_ B/mτ_ relaxℐ_3(l_ mfp^2 Δ)∇ T ,
∂ T/∂ t = T_0/ρ_0τ_ relaxℐ_4[l_ mfp^2Δ]ρ+ T_0ℐ_5[l_ mfp^2Δ](∇·𝐮)+ 1/τ_ relaxℐ_6[l_ mfp^2Δ] T,
where the integral operators {ℐ_j}_1≤ j≤ 6 and ℐ_ shear are defined through Fourier series and (<ref>).
§ COMPARISON TO EXISTING FLUID MODELS: SMALL WAVE-NUMBER LIMIT
In this section, we compare the exact hydrodynamic system (<ref>) to fluid models derived from the Chapman–Enskog expansion. Because of the coupling between the wave number and the relaxation time through the eigenvalues and the k-aligned spectral basis (<ref>), the terms of the Chapman–Enskog series correspond to an expansion in wave number (<ref>). We write
λ(k) = ∑_n=1^∞λ_n k^n,
for any of the four modal branches, for the Taylor expansion of a mode in terms of wave number. Inserting (<ref>) into the spectral temperature (<ref>) and using the asymptotic expansion (<ref>) (in the limit k→ 0), we can expand
θ(λ(k)) ∼√(6)((k^2 τ ^2-τλ (τλ +1)) Z( (τλ +1)/k τ)- k τ(k^2 τ ^2-τλ))/(k^2 τ
^2+(τλ +1)^2) Z( (τλ +1)/k τ)- k τ(τλ +1)
∼√(6)((k^2 τ ^2-τλ (τλ +1)) (-kτ/(τλ+1)-(kτ)^3/[(τλ+1)]^3+𝒪(k^5))- k τ(k^2 τ ^2-τλ))/(k^2 τ
^2+(τλ +1)^2) (-kτ/(τλ+1)-(kτ)^3/[(τλ+1)]^3+𝒪(k^5))- k τ(τλ +1)
∼ -√(3/2)(λ _1^2+1)-√(3/2) k λ _1(τλ
_1^2+2 λ _2+3 τ)
-√(3/2) k^2 (3 τλ _1^2 (λ _2+τ)+2 λ _1λ _3+3 τλ _2+λ _2^2+3 τ ^2)
+√(3/2) k^3 (3 τ ^3 λ _1^3+3 τλ _1(-2 τλ _2-λ _2^2+τ ^2)-3 τλ _1^2 λ _3-λ _3(2 λ _2+3 τ))
+𝒪(k^4),
for k sufficiently small.
Plugging (<ref>) together with (<ref>) into (<ref>) leads to the following asymptotic expansions for the closure coefficients (<ref>):
C_1 ∼i k (357 k^6 τ ^6+991 k^4 τ ^4-1620 k^2 τ ^2-900)/60 (7 k^2 τ ^2+15) +h.o.t.,
C_2 ∼k^2 τ(203 k^4 τ ^4+520 k^2 τ ^2-600)/30 (7 k^2 τ ^2+15)+h.o.t.,
C_3 ∼ -i k (7 k^2 τ ^2+30)^2/60 (7 k^2 τ ^2+15)+h.o.t.,
C_4 ∼k^4 τ ^3 (4437 k^4 τ ^4-89 k^2 τ ^2-3000)/90 (7 k^2 τ ^2+15) +h.o.t.,
C_5 ∼ -i k (2523 k^6 τ ^6+1670 k^4 τ ^4+360 k^2 τ ^2+450)/45 (7 k^2 τ ^2+15)+h.o.t.,
C_6 ∼ -k^2 τ(203 k^4 τ ^4+1150 k^2 τ ^2+750)/30
(7 k^2 τ ^2+15) +h.o.t.,
for small k, where h.o.t. indicates terms of higher order in k, either polynomial or rational functions of k. Expanding the quotients in (<ref>) in Taylor series around zero, we obtain
C_1 ∼ -i k-4/3iτ^2k^3+𝒪(k^5),
C_2 ∼ -4/3k^2τ +16/9τ^3k^4+𝒪(k^6),
C_3 ∼ -i√(2/3) k+ 𝒪(k^5),
C_4 ∼ -√(2/3)10/3τ^3k^4+𝒪(k^6),
C_5 ∼ -i√(2/3)k-i/3√(2/3)τ^2 k^3+𝒪(k^5),
C_6 ∼ -5/3τ k^2-16/9τ^3k^4+𝒪(k^6),
for small k.
At first order in k, (<ref>) shows that we have recovered the Euler equation,
∂/∂ t([ ρ̂; 𝐮̂; T̂ ])_ Euler=𝐐̃_𝐤[ 0 - k 0 0 0; - k 0 0 0 - k; 0 0 0 0 0; 0 0 0 0 0; 0 -2/3 k 0 0 0 ]𝐐̃_𝐤^T([ ρ̂; 𝐮̂; T̂ ])
while at second order in k, we recover the Navier–Stokes equation in wave space:
∂/∂ t([ ρ̂; 𝐮̂; T̂ ])_ Navier-Stokes =𝐐̃_𝐤[ 0 - k 0 0 0; - k -4/3τ k^2 0 0 - k; 0 0 -τ k^2 0 0; 0 0 0 -τ k^2 0; 0 -2/3 k 0 0 -5/3τ k^2 ]𝐐̃_𝐤^T([ ρ̂; 𝐮̂; T̂ ])
Transforming back according to (<ref>) and summing the Fourier series gives the well-known expressions
∂/∂ t([ ρ; 𝐮; T ])_ Euler = ([ -∇·𝐮; -∇ (ρ+T); -2/3∇·𝐮 ])
as well as
∂/∂ t([ ρ; 𝐮; T ])_ Navier-Stokes = ([ -∇·𝐮; -∇ (ρ+T)+τΔ𝐮+τ/3∇(∇·𝐮); -2/3∇·𝐮+5/3τΔ T ])
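For a quick numerical look at these leading-order branches, the sketch below (ours) assembles the longitudinal (acoustic/thermal) block of the linearized Navier–Stokes–Fourier system above for a single wave number k, with the velocity component taken along 𝐤 (the shear components decouple with rate -τ k^2), and computes its eigenvalues; the values of τ and k are placeholders.

import numpy as np

def ns_longitudinal_matrix(k, tau=1.0):
    # variables (rho, u_parallel, T); from the linearized real-space system above:
    #   d_t rho = -i k u,   d_t u = -i k (rho + T) - (4/3) tau k^2 u,
    #   d_t T   = -(2/3) i k u - (5/3) tau k^2 T
    return np.array([[0.0,      -1j * k,                  0.0],
                     [-1j * k,  -4.0 / 3.0 * tau * k**2,  -1j * k],
                     [0.0,      -2.0 / 3.0 * 1j * k,      -5.0 / 3.0 * tau * k**2]])

for k in (0.1, 0.5, 1.0):
    print(k, np.sort_complex(np.linalg.eigvals(ns_longitudinal_matrix(k))))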
Let us comment on the third-order approximation of (<ref>) in k, the Burnett equation:
∂/∂ t([ ρ̂; 𝐮̂; T̂ ])_ Burnett =𝐐̃_𝐤[ 0 - k 0 0 0; - k -4/3τ^2k^3 -4/3τ k^2 0 - k; 0 0 -τ k^2 0 0; 0 0 0 -τ k^2 0; 0 -2/3k -2/9τ^2k^3 0 0 -5/3τ k^2 ]𝐐̃_𝐤^T([ ρ̂; 𝐮̂; T̂ ])
or, equivalently, in physical space
∂/∂ t([ ρ; 𝐮; T ])_ Burnett = ([ -∇·𝐮; -∇ (ρ + T)+4/3τ^2∇Δρ+τΔ𝐮+τ/3∇(∇·𝐮); -2/3∇·𝐮 - 2/9τ^2Δ∇·𝐮 +5/3τΔ T ])
We compare equation (<ref>) with equation (34) on p. 105 of <cit.>, the non-dimensional linearized Burnett equation for the more general ellipsoidal-statistical BGK (ES-BGK) kinetic model of Holway <cit.>,
∂ρ/∂ t +∇·𝐮=0,
∂𝐮/∂ t + ∇(ρ+T)-Δ𝐮+1/3∇(∇·𝐮)-4/3∇Δρ-3b/3∇Δ T = 0,
3/2∂ T/∂ t +∇·𝐮-5(1-b)/2Δ T-(1-b)(1-5b)/3Δ∇·𝐮 = 0,
where the parameter b is related to the Prandtl number via Pr = 1/(1-b). Since the ES-BGK model reduces to the BGK model for Pr = 1, we find that equation (<ref>) is exactly the same as system (<ref>) for b=0 and τ=1.
As a special feature of the BGK equation, we find that the coefficient k↦ C_3(k) (<ref>) happens to be very close to the Euler approximation, deviating only at order k^5. This explains why there is no contribution of the temperature in the non-classical terms of the Burnett approximation, which also implies that the Burnett approximation is globally stable. Indeed, the cubic terms enter only through C_1 and C_5, which are purely imaginary, and thus only contribute to the higher-order wave motion, not altering the amplitude dynamics. This, of course, is merely a coincidence for the BGK system, since the Shakhov, Maxwell and hard-sphere models are not expected to share this property, which would lead to a Burnett instability. We conjecture that this is due to the temperature coupling back to the velocity dynamics with opposite sign, producing an unstable term.
Since the equations (<ref>) are linear and derived as the invariant dynamics of a globally well-posed system (<ref>) (which is linear itself), the exact hydrodynamics are obviously hyperbolic as a Cauchy problem. The decay rates of solutions on the hydrodynamic manifold, however, will be weaker than the decay rate of a general solution, as the real parts of the hydrodynamic eigenvalues are negative but larger than the overall relaxation rate -1/τ.
§ CONCLUSIONS AND FURTHER PERSPECTIVES
We have given an explicit and complete description of the full, non-local hydrodynamic closure of the BGK equation. Based on an explicit description of the spectrum of the linear BGK operator <cit.>, we obtain an invariant, slow manifold as the space spanned by the hydrodynamic eigenvectors. On this manifold, we are able to explicitly define a closure operator relating the spectral dynamics to the dynamics of the macroscopic variables (density, velocity and temperature) through a linear change of coordinates. The full non-local dynamics are compared to the Euler, the Navier–Stokes–Fourier and the Burnett equations (which may be obtained through the Chapman–Enskog expansion) and full consistency is demonstrated in the small wave-number regime.
The explicit form of the transport coefficients in (<ref>) allows us to derive effective approximations in frequency space through polynomials matching both derivatives of the eigenvalues close to zero and the essential spectrum in combination with cut-off functions in wave number. These effective approximations will be non-local as well (involving the convolution with a Dirichlet-type integral kernel), while considerably simplifying the form of the transport coefficients, thus rendering them an interesting candidate for linear gaseous hydrodynamics across all Knudsen numbers.
§ ACKNOWLEDGEMENT
This work was supported by European Research Council (ERC) Advanced Grant
834763-PonD. Computational resources at the Swiss National Super Computing
Center CSCS were provided under the grant s1066.
§ DECLARATION OF INTEREST
The authors declare that there is no conflict of interest.
§ EXPLICIT FORM OF THE CLOSURE COEFFICIENTS
In this section we expand the cyclical expressions for the closure coefficients (<ref>) explicitly. This allows us to infer the purely imaginary/real nature in (<ref>).
We expand the first relation in (<ref>),
C_1 = 1/k^2𝐇[ λ_ diffλ_ ac^*(λ_ diff-λ_ ac^*)θ(λ_ ac)+λ_ acλ_ diff(λ_ ac-λ_ diff)θ(λ_ ac^*)+λ_ ac^*λ_ ac(λ_ ac^*-λ_ ac)θ(λ_ diff) ]
= 2/k^2𝐇[ λ_ diff[λ_ ac^*(λ_ diff-λ_ ac)θ(λ_ ac)]-|λ_ ac|^2(λ_ ac)θ(λ_ diff) ],
to find that C_1 is purely imaginary.
We expand the second relation in (<ref>),
C_2 = /k𝐇[(λ_ diff^2-(λ_ ac^*)^2)θ(λ_ ac)+(λ_ ac^2-λ_ diff^2)θ(λ_ ac^*)+((λ_ ac^*)^2-λ_ ac^2)θ(λ_ diff)]
= -2/k𝐇[[(λ_ diff^2-(λ_ ac^*)^2)θ(λ_ ac)]-2(λ_ ac)(λ_ ac)θ(λ_ diff)],
to find that C_2 is purely real.
We expand the third relation in (<ref>),
C_3 = -1/k^2𝐇(λ_ diff-λ_ ac)(λ_ ac-λ_ ac^*)(λ_ ac^*-λ_ diff)
= 2/k^2𝐇|λ_ diff-λ_ ac|^2(λ_ ac),
to find that C_3 is purely imaginary.
We expand the fourth relation in (<ref>),
C_4 = /k𝐇[λ_ ac(λ_ diff-λ_ ac^*)θ(λ_ ac^*)θ(λ_ diff)+λ_ diff(λ_ ac^*-λ_ ac)θ(λ_ ac^*)θ(λ_ ac)
+λ_ ac^*(λ_ ac-λ_ diff)θ(λ_ ac)θ(λ_ diff) ]
= 2/k𝐇[[λ_ ac(λ_ diff-λ_ ac)θ(λ_ ac)θ(λ_ diff)]+λ_ diff(λ_ ac)|θ(λ_ ac)|^2 ],
to find that C_4 is purely real.
We expand the fifth relation in (<ref>),
C_5 = 1/𝐇[ (λ_ ac^*-λ_ diff)θ(λ_ diff)θ(λ_ ac^*)+(λ_ ac-λ_ ac^*)θ(λ_ ac)θ(λ_ ac^*)+(λ_ diff-λ_ ac)θ(λ_ diff)θ(λ_ ac) ]
=2/𝐇[θ(λ_ diff)[θ(λ_ac)(λ_ diff-λ_ ac)]+(λ_ ac)|θ(λ_ac)|^2 ],
to find that C_5 is purely imaginary.
We expand the sixth relation in (<ref>),
C_6 = -/k𝐇[λ_ diffθ(λ_ diff)(λ_ ac-λ_ ac^*)+λ_ acθ(λ_ ac)(λ_ ac^*-λ_ diff)+λ_ ac^*θ(λ_ ac^*)(λ_ diff-λ_ ac) ]
= 2/k𝐇[λ_ diffθ(λ_ diff)(λ_ ac)+[λ_ acθ(λ_ ac)(λ_ ac^*-λ_ diff)] ],
to find that C_6 is purely real.
§ PROPERTIES OF THE PLASMA DISPERSION FUNCTION Z
In the following, we collect some properties of the plasma dispersion function Z, defined through the integral expression (<ref>). In our presentation, we will closely follow the calculations performed in <cit.>.
First, let us derive an expression of the integral (<ref>) in terms of less exotic functions. To this end, we rely on the identities in <cit.>. Let
w(ζ)=e^-ζ^2(1-erf(-iζ)), ζ∈ℂ,
which satisfies the functional identity
w(-ζ)=2e^-ζ^2-w(ζ), ζ∈ℂ.
Function (<ref>) is called the Faddeeva function and is frequently encountered in problems related to kinetic equations <cit.>.
We then have that
w(ζ)=i/π∫_ℝe^-s^2/ζ-s ds, Im ζ>0,
and, by relation (<ref>), we have for Im ζ<0:
i/π∫_ℝe^-s^2/ζ-s ds =-i/π∫_ℝe^-s^2/(-ζ)+s ds
=-i/π∫_ℝe^-s^2/(-ζ)-s ds
=-w(-ζ)
=e^-ζ^2[-1-erf(-iζ)].
Consequently, we obtain
∫_ℝ1/s-ζe^-s^2/2 ds =∫_ℝe^-s^2/s-ζ/√(2) ds
=iπi/π∫_ℝe^-s^2/ζ/√(2)-s ds
=iπ e^-ζ^2/2[1-erf(-iζ/√(2))], if Im ζ>0,
iπ e^-ζ^2/2[-1-erf(-iζ/√(2))], if Im ζ<0,
where in the first step, we have re-scaled s↦√(2)s in the integral.
Written more compactly, we arrive at
Z(ζ)=i√(π/2) e^-ζ^2/2[sgn(Im ζ)-erf(-iζ/√(2))], Im ζ≠ 0.
An argument plot together with a modulus-argument plot of Z are shown in Figure <ref>.
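Numerically, Z is conveniently evaluated through the Faddeeva function w, available as scipy.special.wofz: for Im ζ>0 the integral representation used below gives Z(ζ)= i√(π/2) w(ζ/√(2)). The following sketch (ours) cross-checks this identity against direct quadrature of the defining integral and against the differential equation derived later in this appendix.

import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def Z_upper(zeta):
    # Z(zeta) = i * sqrt(pi/2) * w(zeta / sqrt(2)), valid for Im(zeta) > 0
    return 1j * np.sqrt(np.pi / 2) * wofz(zeta / np.sqrt(2))

def Z_quadrature(zeta):
    # direct quadrature of (1/sqrt(2 pi)) * int exp(-v^2/2) / (v - zeta) dv
    f = lambda v: np.exp(-v**2 / 2) / (v - zeta)
    re = quad(lambda v: f(v).real, -np.inf, np.inf)[0]
    im = quad(lambda v: f(v).imag, -np.inf, np.inf)[0]
    return (re + 1j * im) / np.sqrt(2 * np.pi)

zeta = 0.3 + 0.7j
print(Z_upper(zeta), Z_quadrature(zeta))

# finite-difference check of dZ/dzeta = -zeta * Z - 1
eps = 1e-6
print((Z_upper(zeta + eps) - Z_upper(zeta - eps)) / (2 * eps), -zeta * Z_upper(zeta) - 1)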
Clearly, Z is discontinuous across the real line (although Z|_ℝ exists in the sense of principal values as the Hilbert transform of a real Gaussian <cit.>). The properties
|Z(ζ)|≤√(π/2), for ζ∈ℂ∖ℝ,
0< Z(ζ)<π for (ζ)>0,
-π < Z(ζ) <0 for (ζ)<0,
are easy to show and can be read off from the plots (<ref>) directly as well.
We also note that
lim_ζ→ 0,Im ζ>0 Z(ζ) = i√(π/2),
lim_ζ→ 0,Im ζ<0 Z(ζ) = -i√(π/2),
as can be seen from (<ref>).
Function (<ref>) satisfies an ordinary differential equation (in the sense of complex analytic functions) on the upper and on the lower half-plane. Indeed, integrating (<ref>) by parts gives
1 = 1/√(2π)∫_ℝ(v-ζ)e^-v^2/2/v-ζ dv=-ζ Z+1/√(2π)∫_ℝve^-v^2/2/v-ζ dv
=-ζ Z-1/√(2π)∫_ℝe^-v^2/2/(v-ζ)^2 dv=-ζ Z-d/dζZ,
which implies that Z satisfies the differential equation
d/dζZ= -ζ Z-1,
for ζ∈ℂ∖ℝ. Formula (<ref>) can also be used as a recurrence relation for the higher derivatives of Z.
Since we will be interested in function (<ref>) for ζ positive and negative as global functions, we define
Z_+(ζ) = i√(π/2) e^-ζ^2/2[1-erf(-iζ/√(2))],
Z_-(ζ) = i√(π/2) e^-ζ^2/2[-1-erf(-iζ/√(2))],
for all ζ∈ℂ. Both functions can be extended to analytic functions on the whole complex plane via analytic continuation.
Recall that the error function has the properties that
erf(-ζ)=-erf(ζ), erf(ζ^*)=erf(ζ)^*,
for all ζ∈ℂ, which implies that for x∈ℝ,
erf(ix)=-erf(-ix)=-erf(ix)^*,
i.e., the error function maps imaginary numbers to imaginary numbers. Defining the imaginary error function,
erfi(ζ):=-i erf(iζ),
for ζ∈ℂ, which, by (<ref>), satisfies erfi|_ℝ⊂ℝ, it follows that for x∈ℝ:
Z_+(x)= -√(π/2)e^-x^2/2(x/√(2)), Z_+(x)= -√(π/2)e^-x^2/2,
similarly for Z_-(x).
Next, let us prove the following asymptotic expansion of Z_+:
Z_+(ζ) ∼ -∑_n=0^∞(2n-1)!!/ζ^2n+1, for |arg(ζ)|≤π/2-δ, ζ→∞ ,
for any 0<δ≤π/2, see also <cit.>. The proof will be based on a generalized version of Watson's Lemma <cit.>. To this end, let us define the Laplace transform
ℒ[f](ζ) = ∫_0^∞ f(x) e^-ζ x dx, ζ∈ℂ,
of an integrable function f: [0,∞)→ℂ.
[Generalized Watson's Lemma]
Assume that (<ref>) exists for some ζ=ζ_0∈ℂ and assume that f admits an asymptotic expansion of the form
f(x) =∑_n=0^N a_n x^β_n-1 + o(x^β_N-1), x>0, x→ 0,
where a_n∈ℂ and β_n∈ℂ with β_0>0 and β_n>β_n-1 for 1≤ n≤ N.
Then ℒ[f](ζ) admits an asymptotic expansion of the form
ℒ[f](ζ) =∑_n=0^N a_n Γ(β_n)ζ^-β_n +o(ζ^-β_N), |arg(ζ)|≤π/2-δ, ζ→∞,
for any real number 0<δ≤π/2, where Γ is the standard Gamma function.
For a proof of the above Lemma, we refer e.g. to <cit.>. Classically, Lemma (<ref>) is applied to prove that the imaginary error function admits an asymptotic expansion for x∈ℝ of the form
erfi(x)∼e^x^2/√(π)x∑_k=0^∞(2k-1)!!/(2x^2)^k, for x>0, x→∞,
see also <cit.>, based on the classical version of Watson's Lemma, whose assumptions are, however, unnecessarily restrictive <cit.>.
For completeness, we recall the derivation of (<ref>) based on Lemma <ref>. First, let us rewrite erfi as a Laplace transform using the change of variables t=√(1-s) with dt=ds/2√(1-s)
erfi(ζ) =∫_0^1d/dt erfi(tζ) dt=2ζ/√(π)∫_0^1 e^t^2ζ^2 dt = 2ζ/√(π)∫_0^1 e^ζ^2(1-s) ds/2√(1-s)
= ζ e^ζ^2/√(π)∫_0^1 1/√(1-s) e^-sζ^2 ds=ζ e^ζ^2/√(π)∫_0^∞χ_[0,1](s)/√(1-s) e^-sζ^2 ds.
From the Taylor expansion of the Binomial function, we know that
1/√(1-s)=∑_n=0^∞\binom{-1/2}{n} (-s)^n=∑_n=0^∞ 4^-n\binom{2n}{n}s^n,
which allows us to apply Lemma (<ref>) with β_n=n+1 and a_n=4^-n\binom{2n}{n}, thus leading to
erfi(ζ) ∼ζ e^ζ^2/√(π)∑_n=0^∞ 4^-n\binom{2n}{n}Γ(n+1) ζ^-2(n+1)
∼e^ζ^2/√(π)∑_n=0^∞(2n)!/4^nn!ζ^-2n-1
∼e^ζ^2/ζ√(π)∑_n=0^∞(2n-1)!!/(2ζ^2)^n,
for ζ→∞ and |arg(ζ)|≤π/2-δ, 0<δ≤π/2. This is consistent with formula (<ref>) for the limit along the real line. Finally, we arrive at the following asymptotic expansion for Z:
Z_+(ζ)∼ i√(π/2)e^-ζ^2/2-∑_n=0^∞(2n-1)!!/ζ^2n+1, for |arg(ζ)|≤π/2-δ, ζ→∞,
which is, of course, equivalent to
Z_+(ζ) ∼ -∑_n=0^∞(2n-1)!!/ζ^2n+1, for |arg(ζ)|≤π/2-δ, ζ→∞ ,
since |e^-ζ^2|^2=e^-2(x^2-y^2)→ 0 for ζ=x+ iy with x→∞.
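A short numerical check of the truncated asymptotic series (our own illustration, reusing the wofz-based evaluation of Z_+ valid for Im ζ>0):

import numpy as np
from scipy.special import wofz

def Z_plus(zeta):
    # i * sqrt(pi/2) * w(zeta / sqrt(2)); agrees with Z for Im(zeta) > 0
    return 1j * np.sqrt(np.pi / 2) * wofz(zeta / np.sqrt(2))

def Z_series(zeta, N=6):
    total, dfact = 0.0, 1.0          # dfact runs through (2n-1)!! with (-1)!! = 1
    for n in range(N):
        total -= dfact / zeta**(2 * n + 1)
        dfact *= 2 * n + 1
    return total

zeta = 6.0 + 2.0j
print(Z_plus(zeta), Z_series(zeta))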
abbrv
|
http://arxiv.org/abs/2306.03935v1
|
20230606180118
|
Inferring interpretable dynamical generators of local quantum observables from projective measurements through machine learning
|
[
"Giovanni Cemin",
"Francesco Carnazza",
"Sabine Andergassen",
"Georg Martius",
"Federico Carollo",
"Igor Lesanovsky"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.dis-nn",
"cond-mat.quant-gas",
"cond-mat.stat-mech"
] |
Institute for Solid State Physics and Institute of Information Systems Engineering, Vienna University of Technology, 1040 Vienna, Austria
Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076 Tübingen, Germany
Wilhelm Schickard Institut für
Informatik, Maria-von-Linden-Straße 6
72076 Tübingen
School of Physics and Astronomy and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, The University of Nottingham, Nottingham, NG7 2RD, United Kingdom
To characterize the dynamical behavior of many-body quantum systems, one is usually interested in the evolution of so-called order parameters rather than in characterizing the full quantum state.
In many situations, these quantities coincide with the expectation value of local observables, such as the magnetization or the particle density.
In experiment, however, these expectation values can only be obtained with a finite degree of accuracy due to the effects of the projection noise. Here, we utilize a machine-learning approach to infer the dynamical generator governing the evolution of local observables in a many-body system from noisy data. To benchmark our method, we consider a variant of the quantum Ising model and generate synthetic experimental data, containing the results of N projective measurements at M sampling points in time, using the time-evolving block-decimation algorithm. As we show, across a wide range of parameters the dynamical generator of local observables can be approximated by a Markovian quantum master equation. Our method is not only useful for extracting effective dynamical generators from many-body systems, but may also be applied for inferring decoherence mechanisms of quantum simulation and computing platforms.
Inferring interpretable dynamical generators of local quantum observables from projective measurements through machine learning
Igor Lesanovsky
July 31, 2023
===============================================================================================================================
Introduction.—
Reconstructing Hamiltonian operators or dynamical generators from physical properties of a quantum system is a problem of current interest. For instance, inverse methods can be applied to identify quantum Hamiltonians associated with a given ground state <cit.> and interacting many-body theories can be obtained from the knowledge of correlation functions <cit.>. In many settings, one is merely interested in reconstructing effective equations of motion for a subsystem S embedded in a larger “environment" E, as happens for open quantum systems <cit.>. Furthermore, in the framework of quantum simulation <cit.>, it is very important to understand the (effective) dynamical equations under which artificial quantum systems actually evolve and by how much these differ from the desired ones <cit.>. This is relevant for improving state-of-the-art hardware <cit.> and the identification of noise models. Another interesting instance concerns the evolution of order parameters, often constructed from local observables, such as the particle density or the magnetization.
Machine learning (ML) approaches appear to be particularly suited for this task <cit.>. For instance, quantum process tomography with generative adversarial methods <cit.>, neural networks <cit.>, and recurrent neural networks <cit.> have been developed. These approaches are very promising but have two main drawbacks: they require a great number of measurements, and they treat ML algorithms as black boxes, thus lacking in physical interpretation.
Simpler methods are capable of learning Hamiltonians from fewer local measurements <cit.>, yet they typically rely on an a priori ansatz for the functional form of the Hamiltonian or of the dissipation.
A more general approach is to fit an open quantum system (OQS) dynamics by learning the Nakajima-Zwanzig equation <cit.> through transfer tensor techniques <cit.> or by learning convolutionless master equations <cit.>. However, these approaches require a full state tomography at different time steps, which is prohibitive to achieve in experiments.
Ultimately, current methods thus either rely on an ad hoc ansatz, or demand data which is not experimentally accessible, or lack physical interpretability (which is actually becoming highly desirable <cit.>).
In this work, we show how to use ML methods to infer the effective dynamical generator of a subsystem from a finite set of local measurements at randomly selected times, which inevitably produce noisy data due to projection noise. To illustrate our approach, we consider a many-body spin system [cf. Fig. <ref>(a)], which is ubiquitous in the context of experiments with trapped ion or Rydberg atom quantum simulators <cit.>. By using synthetic (experimental) data generated by tensor-network based algorithms we infer a physically consistent Markovian dynamical generator <cit.> governing the evolution of a small subsystem. Our method, which works reliably across a wide range of parameters – even in some instances outside the weak coupling limit – yields interpretable results which may be used to infer noise models on quantum simulators or to study thermalization dynamics in many-body systems.
Setting.—
The system we consider is a 1D quantum spin chain consisting of L spins arranged on a circular lattice, as depicted in Fig. <ref>(a). The chain is partitioned into a subsystem S, here formed by two adjacent spins, and the environment E, that is, the remainder of the spin chain. We assume the whole system to evolve unitarily, through the many-body Hamiltonian
H_S+E = Ω/2∑_i=1^Lσ_i^x + V( ∑_i=1^L-1 n_i n_i+1 + n_L n_1) .
The first term in the equation above describes a transverse “laser" field, while the second one accounts for nearest-neighbor interactions. Here, σ_i^α denotes the α Pauli matrix for the i^th spin and we have defined the projector n = 1+σ^z/2. The above Hamiltonian is of practical interest for experiments with Rydberg atoms <cit.> and essentially encodes an Ising model in the presence of transverse and longitudinal fields. We simulate the time evolution of the whole system by means of the time-evolving block-decimation (TEBD) algorithm [see Fig. <ref>(b)].
In our setting, the information on the state of S is obtained by a finite number, N, of projective measurements, taken at randomly selected times t_1,... t_M [see Fig. <ref>(c)]. From this noisy data we want to infer the open quantum dynamics of the reduced state ρ_S(t) of subsystem S. Formally, this dynamics is obtained as the partial trace of the evolution of the full many-body state, i.e., ρ_S(t)= Tr_E(U_t ρ_S+EU_t^†), where U_t=e^-iHt, ρ_S+E is the initial state of the system and Tr_E denotes the trace over the environment degrees of freedom. In general, such a dynamics is rather involved and may show non-Markovian effects or it may be nonlinear for generic initial states ρ_S+E <cit.>. Here, we restrict ourselves to learning a Markovian dynamics for ρ_S(t), but more general approaches are certainly possible <cit.>. The goal is then to identify the time-independent generator ℒ, yielding the Markovian quantum master equation evolution <cit.>,
ρ̇_S(t) = ℒ[ρ_S(t)]
that optimally describes the dynamics of S. This simple form has the advantage that it is straightforwardly interpretable, i.e., it allows to read off the Hamiltonian and decoherence processes (see further below).
Data generation.— We simulate the time evolution of the system for times t∈[0, Ω T] by means of matrix product states and the time-evolving block-decimation algorithm <cit.> [see Fig. <ref>(b)], which allows us to study systems of up to 50 spins.
We generate 30 trajectories obtained by initializing the system in state ψ=⊗_k=1^L |0⟩, with σ^z|0⟩=-|0⟩ and perturbing the subsystem S through a random two-spin unitary Û_rand distributed with the Haar measure.
As the system evolves in time, after each time-step Ω dt=0.01, we calculate the expectation value of all the 15 independent observables {𝕀_1 ⊗σ_2^x, 𝕀_1 ⊗σ_2^y, ..., σ_1^x ⊗σ_2^y, ..., σ_1^z ⊗σ_2^z }/2 of the subsystem S, which uniquely identify the reduced subsystem state.
With the reduced state at hand, we can emulate experimental measurements of the above local observables. For each trajectory, we select M random times in the time-window [0,Ω T] and, for each of these points in time, we perform N measurements in all the relevant bases in order to produce a noisy estimate of the reduced state.
Such a procedure is sketched in Fig. <ref>(c), in which the histogram depicts the counting of N=10 measurement outcomes for a single time-point, whereas the orange dots represent the experimental expectation value.
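The following short sketch (ours; the numbers are placeholders) illustrates this synthetic-measurement step for a single observable with outcomes ±1: averaging N projective shots reproduces the exact expectation value up to projection noise of order 1/√(N); for the normalized two-spin basis used here the outcomes would be rescaled accordingly.

import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(m_exact, N):
    # sample N projective outcomes in {+1, -1} with P(+1) = (1 + m_exact) / 2
    outcomes = rng.choice([1.0, -1.0], size=N, p=[(1 + m_exact) / 2, (1 - m_exact) / 2])
    return outcomes.mean()

print([round(noisy_expectation(0.4, N), 3) for N in (10, 100, 10000)])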
ML architecture and training.—
The generator ℒ defining the quantum master equation (<ref>) can be parametrized as ℒ=ℋ+𝒟, where
ℋ[·] = -i [H,·] , H= ∑_i=2^d^2θ^H_i F_i ,
𝒟[·] = 1/2∑_i,j=2^d^2 c_ij ([F_i, · F^†_j] + [F_i ·, F^†_j]) .
Here, we have introduced the Hermitian orthonormal basis {F_i}_i=1^16, for the operators of the subsystem S. Note that we choose F_1 to be proportional to the identity operator, specifically F_1 = 𝕀/√(d). Moreover, we write the Hamiltonian in terms of the “fields" θ^H_i, and the dissipative contribution 𝒟 in a non-diagonal form, fully specified by the so-called Kossakowski matrix c_ij. The latter must be positive semi-definite in order for the open quantum dynamics to be completely positive. This constraint can be “hard coded" by setting c = Z^† Z, for a complex matrix Z = θ^X+iθ^Y, with θ^X and θ^Y being real-valued.
In the same spirit, we decompose the reduced state ρ_S on the basis {F_i}_i=1^16 as <cit.>
ρ_S = 𝕀/d + ∑_i=2^16 F_i v_i ,
which defines the coherence vector v_i = Tr(F_i ρ_S). Notice that the condition Tr(ρ_S)=1 implies v_1 = 1/√(d), which we take outside the sum. The coherence vector encodes the full information about the density matrix and provides a representation of it as a vector.
This is quite convenient, from a numerical point of view, as it allows us to write the action of the generator on states as the action of the matrix 𝐋 on coherence vectors. Therefore, in this representation, the quantum master equation (<ref>) becomes
d 𝐯(t)/d t = 𝐋𝐯(t) = ( 𝐇 + 𝐃 ) 𝐯(t) ,
where 𝐇_ij=-Tr(ℋ[F_i]F_j) and 𝐃_ij=Tr(𝒟[F_i]F_j) are real-valued matrices.
As explicitly shown in the Supplemental Material (see Ref. <cit.>), the matrix 𝐇=𝐇(θ^H_i) depends linearly on the parameters θ^H_i, while the matrix 𝐃=𝐃(θ^X_ij,θ^Y_ij) depends quadratically on θ^X_ij and θ^Y_ij.
We build a simple neural network <cit.>, here called Lindblad dynamics approximator (LDA), as the exponential of the matrices 𝐇 and 𝐃,
𝐌(θ, t) = e^t [ 𝐇(θ^H_i) + 𝐃(θ^X_ij,θ^Y_ij) ] ,
which is the structure of the Lindblad time propagator.
We train the LDA to learn the Lindblad representation 𝐋 from (synthetic) experimental data. In the training procedure, we feed the LDA with the initial conditions 𝐯_in = 𝐯(0) and the time of the measurement t, and optimize the parameters θ={θ^H_i,θ^X_ij,θ^Y_ij} such that 𝐯_out≃𝐯(t)=𝐌(θ,t)𝐯_in.
Training over a finite time t is crucial when working with experimental data. Indeed, training the LDA to propagate the coherence vector only over an infinitesimal time-step dt <cit.>, i.e., 𝐯_in = 𝐯(t) and 𝐯_out = 𝐯(t+dt), is bound to fail as soon as the noise is larger than the variation of the coherence vector.
The loss used for the optimization of the LDA parameters is the mean-squared error function (𝔼_D indicates average over the dataset)
MSE(θ) = 𝔼_D[ || M(θ, τ)𝐯(0) - 𝐯(τ) ||^2 ] .
We additionally consider a regularization term on the parameters penalizing nonzero elements of θ. The rationale behind this term is twofold:
first, it yields a more stable training procedure, especially for small training datasets;
second, it keeps the learned generator as simple as possible for enhanced interpretability. The total loss function is thus
Loss(θ) = MSE(θ) + α_11 (|| θ^X ||_1 + || θ^Y||_1) + α_12 ||θ^H||_1 ,
where ||O||_1=∑_i,j |O_ij|.
The NN parameters θ are optimized by means of the Adam optimization algorithm <cit.>, with a scheduled learning rate that decays by a factor of 0.05 at 100 and 50 epochs before the final one.
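A minimal PyTorch sketch of this parametrization and loss is given below. It is our own illustration rather than the authors' released code: it acts on vectorized density matrices instead of the structure-constant representation derived in the Supplemental Material (both encode the same generator 𝐋), the data are placeholders, and it assumes a recent PyTorch version with complex tensors and a differentiable matrix exponential.

import itertools
import torch

# Hermitian orthonormal basis F_i = (sigma_a ⊗ sigma_b)/2 for the two-spin subsystem
s = [torch.eye(2, dtype=torch.cfloat),
     torch.tensor([[0, 1], [1, 0]], dtype=torch.cfloat),
     torch.tensor([[0, -1j], [1j, 0]], dtype=torch.cfloat),
     torch.tensor([[1, 0], [0, -1]], dtype=torch.cfloat)]
F = [torch.kron(a, b) / 2 for a, b in itertools.product(s, s)]   # F[0] is proportional to the identity
I4 = torch.eye(4, dtype=torch.cfloat)

theta_H = torch.zeros(15, requires_grad=True)
theta_X = 0.01 * torch.randn(15, 15); theta_X.requires_grad_()
theta_Y = 0.01 * torch.randn(15, 15); theta_Y.requires_grad_()

def generator(theta_H, theta_X, theta_Y):
    """16x16 Lindblad superoperator acting on row-major vectorized density matrices."""
    H = sum(theta_H[i].to(torch.cfloat) * F[i + 1] for i in range(15))
    Z = theta_X.to(torch.cfloat) + 1j * theta_Y.to(torch.cfloat)
    c = Z.conj().T @ Z                                    # positive semi-definite Kossakowski matrix
    L = -1j * (torch.kron(H, I4) - torch.kron(I4, H.T))   # -i [H, rho]
    for i in range(15):
        for j in range(15):
            Fi, Fj = F[i + 1], F[j + 1]
            FjFi = Fj @ Fi
            L = L + c[i, j] * (torch.kron(Fi, Fj.T)
                               - 0.5 * torch.kron(FjFi, I4)
                               - 0.5 * torch.kron(I4, FjFi.T))
    return L

def predicted_coherence(rho0, t, L):
    rho_t = (torch.matrix_exp(t * L) @ rho0.reshape(-1)).reshape(4, 4)
    return torch.stack([torch.trace(Fi @ rho_t).real for Fi in F])

# placeholder data: initial state, measurement time, and the (noisy) measured coherence vector
rho0 = I4 / 4
t_meas = 1.0
v_meas = torch.zeros(16); v_meas[0] = 0.5                 # Tr(F_0 rho) = 1/2 for a two-spin state

opt = torch.optim.Adam([theta_H, theta_X, theta_Y], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    v_pred = predicted_coherence(rho0, t_meas, generator(theta_H, theta_X, theta_Y))
    loss = ((v_pred - v_meas)**2).mean() \
           + 1e-4 * (theta_X.abs().sum() + theta_Y.abs().sum()) + 1e-4 * theta_H.abs().sum()
    loss.backward()
    opt.step()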
After the training is completed, to test the correctness of the learned generator, we produce r new exact trajectories and compare them with our prediction.
To have a quantitative measure of the performance of the ML algorithm, we compute the following error function
ϵ(N,M) := 1/r∑^r_i=11/T∫^T_0ρ^i_ML(t) - ρ^i_S(t)_2^2/ρ^i_S(t)_2^2 t ,
where ρ^i_ML(t) is the prediction for the state of the subsystem obtained from our ML algorithm, ρ^i_S(t) represents the synthetic data for a given choice of N and M, and ||O||_2^2= Tr(O^† O)[The code for the generation of the artificial data and the training of the ML algorithm is made available at <https://github.com/giovannicemin/lindblad_dynamics_approximator>.
].
Benchmarking the algorithm.—
Before training the algorithm on data for the many-body model in Eq. (<ref>), we benchmark its ability to learn a Lindblad generator within a well-controlled setting.
To this end we consider a two-spin Lindblad generator, which in its diagonal form, is specified by the following Hamiltonian and jump operators [cf. Fig. <ref>(a)]
H = Ω/2 (σ_1^x + σ_2^x ) + V n_1 n_2 ,
J_1 = √(γ)σ_1^- , J_2 = √(γ)σ_2^- ,
J_3 = √(κ) n_1 , J_4 = √(κ) n_2 .
The jump operators effectively describe the effects of the environment on the subsystem. In particular, J_1, J_2 encode decay from |1⟩ to |0⟩ while J_3,J_4 encode dephasing.
The data generation procedure follows the protocol described above, with two exceptions: the initial conditions are given by a random density matrix ρ_rand, and the data for testing have a doubled time-window [ 0, 2Ω T ]. In this way, we can also test the ability of the algorithm to extrapolate to unseen times. Since the network is, in principle, able to perfectly learn a Markovian Lindblad generator, we expect the extrapolation to be accurate.
The results are reported in Fig. <ref> (b-c). The color map [panel (b)] shows the error, ϵ(N,M), averaged over r=10 test trajectories, for 100 different combinations of N and M.
First, we observe the overall trend of the error to decrease as N× M increases. However, the plot is not symmetric with respect to the line M=N. In fact, for higher values of M (and fixed N× M), the ML algorithm exhibits more stable training and hence better performance. This is due to the fact that the choice of the M time-points is random, and a small value of M has a high probability of yielding data which is not representative of the dynamics. On the other hand, higher values of M yield data that is more representative of the trajectory, even for relatively small values of N.
For high values of N and M, as expected, the error becomes small.
The ML algorithm can thus successfully learn a Lindbladian, with a precision that approximately depends on the product N× M.
Many-body setting.—
Having established the capability of the ML algorithm to exactly learn Lindblad generators from experimental data, we now address the many-body scenario described by Eq. (<ref>). In this case, the reduced dynamics of the state ρ_ S(t) can feature non-Markovian effects. The ML algorithm will thus, by construction, learn the “optimal" Markovian description of the system. Whether this description will accurately describe the subsystem or not depends on the relevance of non-Markovian effects.
In Fig. <ref>(a), we show results for weak interactions V=0.1Ω. In this case, the ML algorithm learns a dynamical generator which faithfully reproduces the time evolution of the coherence vector. This suggests that the subsystem dynamics is, in this weak-coupling regime, essentially Markovian. Notably, the Lindblad generator can be inferred even when N and M are small.
As already observed during the benchmarking, the training procedure is less stable for the models in the bottom-right corner of Fig. <ref>(a), compared to the top left, confirming that larger M values are better than larger N values at fixed N× M.
In Fig. <ref>(b) we show instead results for strong interactions, V=2Ω. Quite surprisingly, also in this case the ML algorithm learns an effective Lindbladian which reproduces the subsystem dynamics very well, indicating that the latter again has a Markovian character. A possible explanation for this is that strong interactions lead to a faster decay of time correlations in the environment, which thus renders the subsystem dynamics Markovian.
Due to the faster oscillations in this regime, more sampling points in time are needed than in the weakly interacting case, Fig. <ref>(a), for the same accuracy.
In both cases, for small N × M, while the model cannot recover the exact dynamics, it nonetheless provides an average over the fast oscillations [cf. Figs. <ref>(a)-<ref>(b)]. The data for the whole coherence vector are reported in the supplemental material <cit.>, where we also show additional results on the case V=0.5Ω. In the latter case, the error is higher, due to non-Markovian effects which appear to be non-negligible <cit.> in this intermediate regime.
Conclusions.—
We have presented a simple ML algorithm able to learn a physically consistent and interpretable dynamical generator starting from (synthetic) experimental data. We have shown that it can yield faithful results for both weak and strong interactions. A wider range of systems could be included by relaxing the assumed Markovianity of the learned generator, namely by allowing it to be time-dependent ℒ(t). Some approaches already exist (see, e.g., Refs. <cit.>), but they require an enormous amount of data and lack physical interpretability.
Our ML method is interpretable, hence it can be used to “read out" the underlying dynamical processes (see Supplemental Material <cit.>). In fact, the learned matrix 𝐋 gives direct access to the parameters θ^H_i,θ^X_ij,θ^Y_ij, which represent the Hamiltonian and the jump operators of the subsystem S.
In the future, it would be interesting to understand whether feeding the ML algorithm with a sampling of the full coherence vector is necessary or whether a bona fide dynamics can still be learned when leaving out information about certain observables.
Acknowledgments.— We acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy—EXC-Number 2064/1-Project Number 390727645, under Project No. 449905436 and through the Research Unit FOR 5413/1, Grant No. 465199066. This project has also received funding from the European Union’s Horizon Europe research and innovation program under Grant Agreement No. 101046968 (BRISQ). FC is indebted to the Baden-Württemberg Stiftung for the financial support of this research project by the Eliteprogramme for Postdocs.
SUPPLEMENTAL MATERIAL
Inferring interpretable dynamical generators of local quantum observables from projective measurements through machine learning
Giovanni Cemin,^1 Francesco Carnazza,^1 Sabine Andergassen,^2
Georg Martius,^3,4 Federico Carollo,^1 and Igor Lesanovsky^1,5
^1Institut für Theoretische Physik, Universität Tübingen,
Auf der Morgenstelle 14, 72076 Tübingen, Germany
^2Institute for Solid State Physics and Institute of Information Systems Engineering, Vienna University of Technology, 1040 Vienna, Austria
^3Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076 Tübingen, Germany
^4 Fachbereich Informatik, Universität Tübingen
^5School of Physics and Astronomy and Centre for the Mathematics
and Theoretical Physics of Quantum Non-Equilibrium Systems,
The University of Nottingham, Nottingham, NG7 2RD, United Kingdom
§ I. MATRIX REPRESENTATION OF 𝐋
We here report a brief derivation of the explicit matrix elements appearing in eq. (<ref>). The starting point is the coherence vector v_i = Tr(F_i ρ_S). The time derivative is simply given by
d/dt v_i (t) = Tr( F_i d/dtρ_S (t) ) = Tr( F_i ℒ[ρ_S(t)] ) = Tr( ℒ^*[F_i] ρ_S (t) )
where in the second equality we used eq. (<ref>). In the last equality ℒ^* is the dual map of ℒ, i.e. the one evolving observables instead of states.
By expanding the dual map ℒ^*[F_i] = ∑_j=1^d^2 Tr( ℒ^*[F_i] F_j )F_j, and substituting into eq. (<ref>), one obtains
d/dt v_i (t) = ∑_j=1^d^2 Tr( ℒ^*[F_i] F_j ) Tr(F_j ρ_S ) = [𝐋𝐯(t)]_i ,
where we defined the matrix 𝐋 as 𝐋_ij≡Tr( ℒ^*[F_i] F_j ).
By exploiting the cyclic property of the trace, one can explicitly calculate ℒ^*[F_i] starting from the known action of ℒ[ρ]. Explicitly we obtain
Tr(ℒ^*[F_k]F_l) = Tr( i[H, F_k] F_l + 1/2∑_i,j=2^d^2 c_ij ( F^†_j[F_k,F_i]F_l + [F^†_j,F_k]F_i F_l ) ) .
Comparing this result to eq. (<ref>) we obtain
𝐇_kl = i Tr( [H,F_k]F_l) , 𝐃_kl = 1/2∑_i,j=2^d^2 c_ij Tr( F^†_j[F_k,F_i]F_l + [F^†_j,F_k]F_i F_l ) .
To simplify this expressions we use the structure constants d_ijk and f_ijk, defined from the commutation and anticommutaton relations as
d_ijk = 1/4 Tr( { F_i, F_j} F_k ) ,
f_ijk = -i/4Tr( [F_i,F_j]F_k ) .
By substituting those definitions into eq. (<ref>), we obtain the following expressions for the matrix elements of 𝐇
𝐇_ij = -4 ∑_k=2^d^2 f_ijkθ^H_k , i,j ∈{2,...,d^2}
𝐇_i1 = 𝐇_1i = 0 , i ∈{1,...,d^2}
where θ^H_k provides the expansion of the Hamiltonian over the basis F_i, namely H=∑_i=2^i=d^2 F_i θ^H_i. The matrix elements of 𝐃 reads
𝐃_mn = -8 ∑_i,j,k=2^d^2 f_mik( f_njk (c)_ij + d_njk (c)_ij) , m,n ∈{2,...,d^2}
𝐃_m1 = 2 ∑_i,j=2^d^2 f_imj(c)_ij , 𝐃_1m = 0 ,
m ∈{1,...,d^2}
where c_ij is the so-called Kossakowski matrix, which we parametrized as c=Z^† Z with Z=θ^X+iθ^Y and θ^X,θ^Y real matrices.
Equations (<ref>) and (<ref>) are the matrix representation of the Lindblad operator ℒ in terms of the vector ω_i and the Kossakowski matrix c_ij.
§ II. FURTHER RESULTS
In the main text, we investigate the behavior of the ML framework applied to spin chains having V=0.1Ω, 2Ω. Here we report the case of a spin chain having V = 0.5 Ω. The results are shown in Figure <ref>. The graph exhibits a behavior similar to the cases analyzed in the main text. However, there is a notable distinction: the model saturates at a higher error. This discrepancy arises from the stronger coupling between the system and the bath, rendering non-Markovian effects non-negligible.
§ III. ADDITIONAL PLOTS AND PHYSICAL “READ OUT"
For completeness, we report here the plots of all 15 non-trivial components of the coherence vector v_i. The plots represent the exact time evolution (black solid line) and the dynamics predicted by the ML algorithm (red dashed line).
In Figures <ref> and <ref> we report the performance of the model trained on data that refers to a spin chain with V=0.1Ω. In Fig. <ref> the synthetic experimental data is generated with N=20,M=20. In this case, the ML prediction almost perfectly overlaps with the exact line, indicating the correctness of the learned Lindbladian. In Fig. <ref> the synthetic experimental data is generated with N=2,M=2. In this case, the dynamics is roughly captured, with the exception of some observables, e.g., σ^y_1 σ^y_2, σ^y_1 σ^z_2, σ^z_1 σ^y_2 and σ^z_1 σ^z_2.
In addition to the complete plots, we here report the learned expressions for the Hamiltonian and jump operators. In this case, we take into consideration the model trained over N=20 and M=20. For the Hamiltonian, we round up to two decimal places to improve readability; it reads
H = 0.5(σ^x_1 + σ^x_2) + 0.055(σ^z_1 + σ^z_2) + 0.025 σ^z_1 σ^z_2 - 0.03(σ^y_1 + σ^y_2) + 0.005( -σ^z_1 σ^x_2 - σ^y_1 σ^x_2 + σ^z_1 σ^y_2 - σ^x_1 σ^z_2 + σ^x_1 σ^y_2 ) .
Regarding the dissipation part, there is only one non-zero eigenvalue (rounded to three decimal places) of the Kossakowski matrix γ = 0.013. The jump operator, rounded to two decimal places, reads
J = 0.245 σ^x_1 σ^z_2 + (0.19-0.1i) σ_2^y + (-0.16+0.07i) σ^x_1 σ^y_2 + (0.055+0.145i) σ_1^y σ_2^y + (0.095-0.075i) σ_1^y +
+ (-0.09-0.08i) σ_1^y σ_2^x - (0.05+0.105i) σ_1^x + (0.09-0.065i) σ_1^x σ_2^x + (0.03-0.1i) σ^z_1 σ^x_2 + (-0.09+0.02i) σ_1^z +
+ (0.005+0.08i) σ_1^z σ_2^x + (-0.03+0.06i) σ_1^z σ_2^z +(0.02+0.035i) σ_1^y σ_2^z + 0.005 σ_2^z
In Figures <ref> and <ref> we report the performance of the model trained on data that refers to a spin chain with V=2Ω. In Fig. <ref> the synthetic experimental data is generated with N=20,M=20. In this case, the ML prediction is almost perfectly overlapped with the exact line, indicating the correctness of the learned Lindbladian. In Fig. <ref> the synthetic experimental data is generated with N=3,M=2. In this case, the dynamics is poorly captured.
|
http://arxiv.org/abs/2306.05842v1
|
20230609121435
|
Efficiency of the averaged rank-based estimator for first order Sobol index inference
|
[
"Thierry Klein",
"Paul Rochet"
] |
math.ST
|
[
"math.ST",
"stat.TH"
] |
Efficiency of the averaged rank-based estimator for first order Sobol index inference
Thierry Klein, Paul Rochet
July 31, 2023
=====================================================================================
Among the many estimators of first order Sobol indices that have been proposed in the literature, the so-called rank-based estimator is arguably the simplest to implement. This estimator can be viewed as the empirical auto-correlation of the response variable sample obtained upon re-ordering the data by increasing values of the inputs. This simple idea can be extended to higher lags of auto-correlation, thus providing several competing estimators of the same parameter. We show that these estimators can be combined in a simple manner to achieve the theoretical variance efficiency bound asymptotically.
Keywords: Sensitivity analysis, estimator averaging, asymptotic efficiency
§ INTRODUCTION
Sobol indices are by now a common tool in global sensitivity analysis, which aims at detecting the most influential input variables/parameters in complex computer models. In this framework, the input variables are considered as random elements, and the relative influence of each subset of components on the quantity of interest is classically quantified by the Sobol indices, usually denoted by S (see the book by Saltelli <cit.> for an overview on global sensitivity analysis). These indices, based on the Hoeffding decomposition of the variance <cit.>, were first introduced in <cit.> and later revisited in the framework of sensitivity analysis in <cit.> (see also <cit.>). In a nutshell, a square integrable real-valued random variable Y, referred to as the output, is entirely or partially explained by a collection X of input variables. The relative influence of X on Y is quantified by the Sobol index:
S := Var(𝔼(Y | X) )/Var(Y)∈ [0,1].
In practice, an analytical expression of S is rarely available, making statistical inference on Sobol indices an important question. In the last decades, several approaches were developed in the literature, each falling into one of the four following categories.
* Those based on Monte Carlo, quasi Monte Carlo or nested Monte Carlo designs of experiments (see, e.g., <cit.>).
* Those based on spectral approaches (e.g. Fourier Amplitude Sensitivity Test (FAST) <cit.>, Random Balance Design (RBD) <cit.>, Effective Algorithm for computing global Sensitivity Indices (EASI) <cit.>, and polynomial chaos expansions <cit.>).
* Those based on the so-called Pick freeze estimator in <cit.>.
* Those based on a nearest neighbors approach <cit.> or similar kernel-based methods <cit.>, studied in the particular case of first-order Sobol indices <cit.> and in <cit.> for general Sobol indices.
Theoretical properties for the last two categories are well documented, especially in the case of first order Sobol indices. Consistency and asymptotic normality have been proved for kernel estimators <cit.>, Pick Freeze <cit.>, nearest neighbors estimators <cit.> as well as the rank-based estimator <cit.>. All these methods allow estimating all first-order Sobol indices simultaneously from a single independent and identically distributed sample (two in the case of <cit.>), with the exception of the Pick freeze approach, which requires a specific design of experiment associated to each input.
The kernel based approach developed in <cit.> is shown to be asymptotically optimal in quadratic mean, with its variance approaching the efficiency bound for a regular estimator of the conditional second order moment η = 𝔼( 𝔼 ( Y | X )^2 ). However, the method is particularly tedious to implement and the estimator not easily tractable in practice. On the contrary, the rank-based approach developed by <cit.> has by far the simplest implementation among all consistent methods but is sub-optimal in the sense that its variance does not reach the efficiency bound asymptotically. We show that the asymptotic variance of the rank estimator only differs from the efficiency bound by the additional term 𝔼( Var^2(Y | X) ), which quantifies how far it is from optimality.
We introduce the family of lagged rank estimators η^(ℓ), ℓ≥ 1 that generalizes the method of <cit.>. We show that each lagged rank estimator η^(ℓ) performs similarly in quadratic mean to the original, under some control over the growth of the lag ℓ relative to the sample size n. By calculating the first order asymptotic expansion of the covariance matrix of a collection of lag estimators up to some maximal lag k, we derive an asymptotically optimal combination in the spirit of estimator averaging <cit.>. More importantly, we show how the average estimator can be made to reach the efficiency bound of <cit.> by choosing k growing sufficiently slowly to infinity relative to n.
The article is organised as follows. We set the theoretical framework and the definition of the lagged rank estimators η^(ℓ) in Section <ref>. Their properties are investigated in Section <ref>, with a special focus on their joint second order moments and convergence in quadratic mean, paving the way to proving the efficiency of the averaging method. A numerical analysis to illustrate and validate the various results is presented in Section <ref>. The proofs and technical lemmas are postponed to the Appendix.
§ RANK ESTIMATORS OF SOBOL INDICES
Let (Y, X) be a couple of random variables with Y real-valued and square-integrable. The Sobol index of Y with respect to X, which measures the part of the variance of the output Y that is "explained" by the input X, is given by
S := Var(𝔼(Y | X) )/Var(Y) = ( 𝔼( 𝔼 ( Y | X ) ^2 ) - 𝔼(Y)^2 )/Var(Y).
For inference purposes, because the expectation and variance of Y do not depend on the input, the only real difficulty lies in estimating the second order conditional moment
η := 𝔼( 𝔼 ( Y | X ) ^2 ).
When X is real-valued, in which case S is generally referred to as a first-order Sobol index, a simple estimator of η can be obtained from an iid sample (Y_1, X_1), ..., (Y_n, X_n) following the method developed in <cit.>. Let (Y_(i), X_(i))_i=1,...,n denote the data points sorted by increasing values of the X_i's, i.e. such that X_(1)≤ ... ≤ X_(n),
the rank estimator of η is defined by
η= 1/n-1∑_i = 1^n-1 Y_(i) Y_(i+1) .
This estimator is known to be consistent and asymptotically Gaussian under mild conditions <cit.>. A natural generalization of the idea consists in defining the lagged rank estimator associated to a lag ℓ≥ 1 as
η^(ℓ) = 1/n-ℓ∑_i = 1^n-ℓ Y_(i) Y_(i+ℓ) .
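For illustration, the lagged rank estimator and the resulting plug-in estimate of the first order Sobol index can be computed in a few lines of Python; the sketch below is our own and the function names are hypothetical.

import numpy as np

def eta_lag(X, Y, lag=1):
    # Sort by increasing inputs, then average the lag-l cross-products of the ordered outputs.
    Ys = Y[np.argsort(X)]
    n = len(Ys)
    return np.mean(Ys[: n - lag] * Ys[lag:])

def sobol_first_order(X, Y, lag=1):
    # Plug-in estimate S_hat = (eta_hat - E(Y)^2) / Var(Y).
    return (eta_lag(X, Y, lag) - Y.mean() ** 2) / Y.var()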
In order to investigate the properties of the lagged rank estimator η^(ℓ), let us introduce some technical assumptions related to the regularity of the relation between Y and X. Let Φ, V be measurable functions such that Φ(X) = 𝔼(Y | X) and V(X) = Var(Y | X). We assume that Φ and V are bounded
(H1) ∀ x ∈ℝ , | Φ(x) | ≤ M_Φ and | V(x) | ≤ M_V
and Lipschitz
(H2) ∀ x, x' ∈ℝ , | Φ(x) - Φ(x') | ≤ L_Φ |x - x' | and | V(x) - V(x') | ≤ L_V |x - x' |
for some positive constants M_Φ, M_V, L_Φ, L_V. Remark that under these assumptions, Φ^2 is also bounded
and Lipschitz,
which will be useful in later proofs. These conditions are quite mild compared to the usual assumptions for first order Sobol index inference. For instance, it is extremely common in the literature to assume that the inputs are uniformly distributed on [0,1] or have compact support. In this case, it is typically sufficient to assume that the conditional expectation and variance are continuously differentiable for (<ref>) and (<ref>) to hold.
In the sequel, we shall denote by k = k(n) the total number of lags considered for our purposes. This number is allowed to increase with n but must somehow be constrained by the distribution of the inputs, in particular by their range
Δ_n : = X_(n) - X_(1) .
Typically, we want to be able to consider as many lags ℓ as possible provided that the average distance between two data points X_(i) and X_(i + ℓ) is sufficiently small (roughly speaking, we want X_(i) to be close enough to X_(i+ ℓ) so that Y_(i) and Y_(i+ℓ) are almost identically distributed conditionally to X_(i)). This way, η^(ℓ) should provide an accurate depiction of the second conditional moment of Y. By a telescoping argument, the average distance can be bounded by
1/n-ℓ∑_i=1^n-ℓ( X_(i + ℓ) - X_(i)) ≤ℓ/n-ℓΔ_n ≤k /n-kΔ_n .
Hence, we require this term to vanish fast enough as n →∞, via the following simple assumption
(H3) 𝔼(k^2 Δ_n^2 ) = o ( n).
This condition can be understood as both a regularity assumption on the tail of the input's distribution and a restriction on the maximal number k = k(n) of lags considered. It is nonetheless quite mild and can always be met unless the distribution of the inputs is heavy tailed, leading to extreme behaviors of the inputs' range Δ_n. The minimal requirement, corresponding to the situation where the distribution of the X_i's has compact support (excluding the trivial case Δ_n a.s.= 0), is to take k = o(√(n)).
If the distribution of the inputs decays exponentially fast, the asymptotic behavior Δ_n = O_P(log n) imposes the slightly stronger condition k = o (√(n) / log n ), far from prohibitive in practice. Finally, remark that we do not rule out data-driven values of k, such as e.g. k ∼ n^1/3/Δ_n which automatically satisfies (<ref>) regardless of the distribution of the inputs. Nevertheless, the cautious and simple k = ⌊ n^1/3⌋, which we use in all numerical applications, fulfills all theoretical requirements while providing a good rule of thumb for practical purposes, as discussed in Section <ref>.
§ THEORETICAL RESULTS
We are now in position to investigate some properties of the lagged rank estimators. Because we are ultimately interested in their convergence in quadratic mean, we focus on controlling the bias and variance, both for a finite sample size n and asymptotically as n grows to infinity. Only the main results are presented in this section, the detailed proofs and technical steps can be found in the Appendix.
Under (<ref>), (<ref>), we have for all ℓ = 1,...,k,
| 𝔼( η^(ℓ)) - η| ≤ℓ/n-ℓ( L_Φ M_Φ𝔼( Δ_n ) + 2 M_Φ^2 ).
In particular, if (<ref>) is also met, then 𝔼( η^(ℓ)) = η + o ( n ^-1/2).
This result, which is a direct consequence of Lemma <ref> in the Appendix, illustrates how the bias of η^(ℓ) may strongly depend on the lag ℓ. We observe this phenomenon in some examples of the numerical analysis in Section <ref>, where the bias term is shown to vary strongly as a function of the lag, especially for smaller sample sizes n. Nevertheless, the variance becomes the dominating term asymptotically, as shown in the next proposition.
Under (<ref>), (<ref>) and (<ref>), we have for all ℓ = 1,...,k,
n Var( η^(ℓ)) = 4 𝔼( Φ^2(X) V(X) ) + 𝔼(V^2(X) ) + Var(Φ^2(X) ) + o (1 ) .
Let us compare the limit variance to that of other existing estimators of single input Sobol indices. The main term (up to the convergence rate of 1/n), given by
σ^2_rank = 4 𝔼( Φ^2(X) V(X) ) + 𝔼(V^2(X) ) + Var(Φ^2(X) )
falls short of the theoretical optimal value
σ^2_opt = 4 𝔼( Φ^2(X) V(X) ) + Var(Φ^2(X) )
shown in <cit.> to be the asymptotic lower bound for the variance of an estimator of η. In the same paper, the authors propose a method that achieves the theoretical lower bound for the asymptotic variance, but relies on a preliminary non-parametric estimation of the joint density of (X,Y) along with various tuning parameters, making its construction somewhat tedious. Note that the rank estimator η^(ℓ) is asymptotically optimal if, and only if, V(X) a.s.= 0
in which case the Sobol index is equal to one.
For the sake of comparison, the alternative estimator of η proposed in <cit.> and based on a nearest neighbors estimation of the conditional expectation, achieves an asymptotic theoretical variance of
σ^2_nn = 5 𝔼( Φ^2(X) V(X) ) + 2 𝔼(V^2(X) ) + 2 Var(Φ^2(X) ) .
While the three variances are always comparable,
σ^2_opt≤σ^2_rank≤σ^2_nn,
an important advantage of the nearest neighbors approach over the rank method is that it can handle the estimation of multiple inputs Sobol indices, a problem that notoriously suffers from the curse of dimensionality. More recently, a kernel approach inspired from <cit.> was proposed in <cit.> with an asymptotic theoretical variance of
σ^2_ker = 4 𝔼( Φ^2(X) V(X) ) + 4 Var(Φ^2(X) ) .
This variance is, of course, higher than the theoretical lower bound σ^2_opt but is not comparable to the other two.
From an implementation point of view, the rank estimator η^(ℓ) is by far the easiest to construct, with the ordering of the inputs as its main computational hurdle. Besides its simplicity, a notable advantage of the method is to provide a new estimator for each lag ℓ, with similar properties asymptotically. This feature can be exploited by combining an appropriate number of rank estimators obtained with different lags, in order to improve the estimation. The next result shows how the rank estimators η^(1), ..., η^(k) form a collection of competing estimators with symmetric behaviors asymptotically.
Under (<ref>), (<ref>) and (<ref>), we have for 1 ≤ℓ < m ≤ k,
n Cov( η^(ℓ), η^(m)) = 4 𝔼(Φ^2(X) V(X) ) + Var(Φ^2(X) ) + o (1).
For a fixed k, Propositions <ref> and <ref> give the following first order term in the asymptotic expansion of the covariance matrix Σ := ( Σ_ℓ m)_ℓ, m =1,...,k of η^(1), ..., η^(k):
Σ_ℓ m = lim_n →∞ n Cov( η^(ℓ), η^(m)) = σ^2_opt + 𝔼(V^2(X) ) if ℓ = m, and σ^2_opt if ℓ≠ m.
Remark that Σ is of full rank provided that V(X) is not almost surely zero, which indicates that the η^(ℓ)'s are linearly independent asymptotically. Therefore, it is possible to reduce the asymptotic variance of an estimator of η by considering a linear combination
η^(k)_av = ∑_ℓ = 1^k λ_ℓ η^(ℓ),
where the weights λ_ℓ, ℓ = 1,...,k are constrained to sum up to one. This heuristic is investigated in <cit.> to determine the weights minimizing the asymptotic variance as a function of Σ. Although Σ is unknown in practice, its symmetric form in this case, with identical diagonal values and identical off-diagonal values, suffices to deduce that the solution corresponds to the equal weights λ_ℓ = 1/k. This simple way of combining the rank estimators actually achieves the theoretical efficiency bound of <cit.> under mild assumptions, as shown in the next theorem.
If k = k(n) tends to infinity as n →∞ and the conditions (<ref>), (<ref>) and (<ref>) are met, the average estimator η^(k)_av obtained with equal weights λ_ℓ = 1/k satisfies
lim_n →∞ n Var( η^(k)_ av) = 4 𝔼(Φ^2(X) V(X) ) + Var(Φ^2(X) ) = σ^2_opt.
The fact that the averaged rank estimator achieves the variance efficiency bound σ^2_opt as n →∞ is certainly encouraging, although the result concerns the actual mean square error (MSE) of the estimator and not the variance of the Gaussian limit for a regular estimator, as introduced in <cit.>. While the regularity and asymptotic normality of η^(k)_av are to be expected under the appropriate assumptions, it has not been investigated in this paper as it deviates from the original objective of variance reduction.
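As an illustration (our own sketch, reusing the eta_lag function from the previous snippet), the averaged estimator with equal weights and the rule of thumb k = ⌊ n^1/3⌋ discussed above can be written as follows; the toy model in the usage example is the one considered in the numerical section.

import numpy as np

def eta_averaged(X, Y, k=None):
    # Equal-weight average of the lagged rank estimators; k = floor(n^(1/3)) by default.
    n = len(Y)
    if k is None:
        k = max(1, int(np.floor(n ** (1.0 / 3.0))))
    return np.mean([eta_lag(X, Y, lag) for lag in range(1, k + 1)])

# Toy usage on the model Y = sin(5X) + 2X*eps with uniform inputs:
rng = np.random.default_rng(0)
X = rng.uniform(size=2000)
Y = np.sin(5 * X) + 2 * X * rng.standard_normal(2000)
print(eta_averaged(X, Y))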
§ NUMERICAL ANALYSIS
We investigate the performances of the rank estimators and their averages in different models of the form
Y = Φ(X) + √(V(X)) ϵ
where ϵ is a standard Gaussian random variable independent from X. Each model is simulated N = 10000 times to give a faithful representation of the distributions of the different estimators. We show the boxplots of the rank estimators obtained for all lags from ℓ = 1 to ℓ = 50 for four sample sizes from n=100 to n=2000, which we compare to the boxplots of the averages obtained for k=5 to k=50 with 5 estimators added at each step.
Due to the similarities in the interpretations of the results produced from various models, we choose to discuss only two values of the conditional expectation function, namely Φ(X) = sin(5X) and Φ(X) = X^2 - 3X. For the conditional variance, all examples are generated with V(X) = 4X^2 as other values of V hardly had any noticeable impact on the results. For the distribution of the inputs X_i, we considered the uniform distribution on [0,1] and the standard exponential distribution.
In Figure <ref>, we observe that the bias of the rank estimators is important and varies strongly with the lag for the smaller sample sizes n, but does vanish asymptotically as predicted by the theory. The averaging procedure appears to improve significantly the performances of the rank estimators, as can be expected in this model with a maximal theoretical improvement of around (σ^2_rank - σ^2_opt)/σ^2_rank≈ 49 %. The positive effect of the averaging is mostly visible on the variance (smaller inter-quartile intervals) but cannot compensate for the biases, all of the same sign.
A similar behaviour can be observed in Figure <ref> despite the distribution of the input X not having compact support. The maximal theoretical improvement from the averaging procedure is even higher in this case, being around (σ^2_rank - σ^2_opt)/σ^2_rank≈ 96 %.
The convergence in quadratic mean of the rank and averaged estimators is sensitive to the regularity conditions of the model, as can be seen in Figure <ref>. In the model Y = sin(5X) + 2X ϵ, with uniformly distributed inputs, where the regularity conditions (<ref>) and (<ref>) are satisfied, the MSEs of the various estimators do appear to behave according to the theory as a function of the sample size, rapidly reaching the asymptotic regime. The numerical results are not as convincing in the same model with exponentially distributed inputs, where the various estimators are slower to reach their asymptotic regime. This is especially true for the lagged rank estimator η^(k) with k growing to infinity, although it surprisingly performs better than expected by the theory. Remark that in this case, none of the conditions (<ref>) and (<ref>) hold for the conditional variance V, which is neither bounded nor Lipschitz on the support of the inputs distribution. Nevertheless, the evolution of the MSE of the averaged estimator seems to validate in both cases the theoretical first order expansion
n Var( η^(k)_av) ≈σ^2_opt + 1/k𝔼(V^2(X) )
derived from Equation (<ref>) in the proof of Theorem <ref>. In all these scenarios, the squared bias accounts for less than 1% of the MSE, making it indistinguishable from the variance in the graphical representations.
Figure <ref> illustrates how things can fall apart when the regularity conditions in (<ref>) and (<ref>) are not met for the conditional expectation function Φ. Here, the bias of the rank estimators remains high even for small lags ℓ and large sample sizes n. This is due to the large differences between consecutive extreme values in the inputs X_i, amplified by the behavior of Φ : x ↦ x^2 - 3x (which is neither bounded nor Lipschitz in this case), causing the bias to remain high as n →∞. This example highlights the importance of the regularity conditions for the rank-based method to work and its potentially high bias, even when dealing with a single input.
§ CONCLUSION
The rank-based method proposed in <cit.> provides an easily implementable estimator for first order Sobol indices. Specifically, given real-valued output Y and input X, the second conditional moment η = 𝔼( 𝔼(Y | X)^2 ) is estimated by the lag-one cross-product of the outputs Y_i ordered by increasing values of the inputs :
η^(1) = 1/n-1∑_i=1^n-1 Y_(i) Y_(i+1) .
Under regularity conditions on the expectation and variance of the response conditionally to the input, the estimator is known to be consistent and asymptotically Gaussian. In this paper, we discuss a natural extension of the method which consists in considering rank estimators obtained from higher order lags ℓ≥ 1 :
η^(ℓ) = 1/n-ℓ∑_i=1^n-ℓ Y_(i) Y_(i+ℓ) , ℓ =1,...,k.
We show that these estimators share the same asymptotic properties under technical regularity conditions, provided that the maximal lag k grows sufficiently slowly relative to n. We derive a closed form expression for the asymptotic covariance matrix of the collection (η^(1), ..., η^(k) ), which allows us to study the asymptotic behavior of the average estimator
η^(k)_av = ∑_ℓ = 1^k λ_ℓ η^(ℓ),
for suitable weights λ_ℓ, ℓ = 1,...,k. Based on the symmetry of the covariance matrix, the averaging procedure of <cit.> justifies the equal weights λ_ℓ = 1/k as an asymptotically optimal choice. This is confirmed theoretically, with the variance of the average estimator η^(k)_av reaching the efficiency bound of <cit.> for a regular estimator of η, whenever k grows to infinity sufficiently slowly. In practice, the rule of thumb k = ⌊ n^1/3⌋ provides an entirely satisfactory choice in the various simulated examples, while verifying all the technical conditions for asymptotic efficiency. The theoretical results, as well as the importance of the regularity assumptions, are well validated by the numerical analysis.
§ APPENDIX
Let ℱ_n denote the σ-algebra generated by X_1,...,X_n. The proofs of the results rely essentially on first investigating the distribution of the various estimators conditionally to ℱ_n. In particular, we exploit the fact that the Y_(i)'s remain independent conditionally to X_1,...,X_n despite the sample re-shuffling, since the permutation that orders the inputs increasingly is ℱ_n-measurable.
To ease notation, we shall write ϕ_i = Φ(X_i) = 𝔼(Y_i | X_i) and v_i = V(X_i) = Var(Y_i | X_i) for all i=1,...,n, and similarly for the ordered sample, e.g. ϕ_(i)= Φ(X_(i)), v_(i) = V(X_(i)).
§.§ Technical lemmas
If (<ref>) and (<ref>) hold, then for ℓ = 1,...,k,
| 𝔼(η^(ℓ) | ℱ_n ) - 1/n∑_i=1^n ϕ_i^2 | ≤ℓ/n - ℓ( L_Φ M_ΦΔ_n + 2 M_Φ^2 ).
Proof. Remark that 𝔼(Y_(i) Y_(i+ℓ) | ℱ_n ) = ϕ_(i)ϕ_(i+ℓ) due to Y_(i) and Y_(i+ℓ) being independent conditionally to X_1,...,X_n. It follows
| 𝔼(Y_(i) Y_(i+ℓ) | ℱ_n ) - ϕ_(i)^2 | = | ϕ_(i)( ϕ_(i+ℓ) - ϕ_(i)) | ≤ L_Φ M_Φ( X_(i+ℓ) - X_(i)),
leading to
| 𝔼( η^(ℓ) | ℱ_n ) - 1/n-ℓ∑_i=1^n-ℓϕ_(i)^2 |
≤ L_Φ M_Φ1/n-ℓ∑_i=1^n-ℓ( X_(i+ℓ) - X_(i)) ≤ L_Φ M_Φℓ/n-ℓΔ_n ,
using Equation (<ref>). Finally, summing over n-ℓ terms instead of n deviates by at most
| 1/n-ℓ∑_i=1^n-ℓϕ_(i)^2 - 1/n∑_i=1^nϕ_i^2 | ≤2 ℓ/n-ℓ M_Φ^2
by (<ref>), ending the proof. □
Under Assumptions (<ref>) and (<ref>), we have for ℓ < n,
| (n - ℓ) Var( η^(ℓ) | ℱ_n ) - 1/n∑_i=1^n ( 4 ϕ^2_i v_i + v^2_i ) | ≤ℓ/n - ℓ( C_1 Δ_n + C_2 ) ,
where C_1, C_2 are positive constants that depend only on Φ and V.
Proof.
Let Z_i,j,ℓ = Cov( Y_(i) Y_(i+ℓ), Y_(j) Y_(j + ℓ) | ℱ_n )
and for a given i ∈{ 1,...,n-ℓ}, consider the set S_i,ℓ⊂{1,...,n } of indices j such that i, i+ ℓ, j, j+ℓ are not all distinct :
S_i,ℓ = { i, i- ℓ, i + ℓ}∩{ 1,...,n } .
Since Z_i,j,ℓ = 0 if j ∉ S_i, ℓ by independence conditionally to ℱ_n, we have
Var(η^(ℓ) | ℱ_n )
= 1/(n- ℓ)^2∑_i=1^n-ℓ∑_j ∈ S_i, ℓ Z_i,j,ℓ .
For i=ℓ+1,...,n-2ℓ, the set S_i, ℓ contains exactly three elements that can be dealt with separately:
* If j = i, Z_i,j,ℓ = ϕ_(i)^2 v_(i + ℓ) + ϕ_(i + ℓ)^2 v_(i) + v_(i) v_(i + ℓ), and by (<ref>) and (<ref>),
| Z_i,j,ℓ - v_(i)( 2 ϕ_(i)^2 + v_(i)) | ≤
(M_Φ L_V + M_V L_Φ + M_V L_V) ( X_(i+ℓ) - X_(i)) .
* If j = i- ℓ (and i > ℓ), Z_i,j,ℓ = ϕ_(i-ℓ)ϕ_(i+ ℓ) v_i and
| Z_i,j,ℓ - ϕ_(i)^2 v_(i)| ≤ M_Φ M_V L_Φ( X_(i+ℓ) - X_(i- ℓ)) .
* If j = i+ ℓ (and i ≤ n-2ℓ), Z_i,j,ℓ = ϕ_(i)ϕ_(i+ 2 ℓ) v_(i+ ℓ) and
| Z_i,j,ℓ - ϕ_(i)^2 v_(i)| ≤ M_Φ (M_V L_Φ + M_Φ L_V) ( X_(i+2ℓ) - X_(i)) .
Gathering all three terms, we obtain for all i=ℓ+1,...,n-2ℓ,
| ∑_j ∈ S_i, ℓ Z_i,j,ℓ - ( 4 ϕ_(i)^2 v_(i) + v_(i)^2 ) | ≤C_1/3( X_(i+2ℓ) - X_(i- ℓ))
for some C_1> 0. Moreover, for the 2 ℓ terms corresponding to i ≤ℓ and n - 2 ℓ < i ≤ n- ℓ, we can use the crude bound
| ∑_j ∈ S_i, ℓ Z_i,j,ℓ - ( 4 ϕ_(i)^2 v_(i) + v_(i)^2 ) | ≤ 2(4 M_Φ^2 M_V + M_V^2) .
Using the telescoping argument of Equation (<ref>), we deduce
| ∑_i=1^n-ℓ∑_j ∈ S_i, ℓ Z_i,j,ℓ - ∑_i=1^n-ℓ( 4 ϕ_(i)^2 v_(i) + v_(i)^2 ) | ≤( C_1 Δ_n + 4(4 M_Φ^2 M_V + M_V^2) ) ℓ .
The missing terms for i > n - ℓ can be bounded similarly by
| ∑_i=1^n-ℓ( 4 ϕ_(i)^2 v_(i) + v_(i)^2 ) - ∑_i=1^n( 4 ϕ_(i)^2 v_(i) + v_(i)^2 ) | ≤( 4 M_Φ^2 M_V + M_V^2 ) ℓ
and the result follows easily from here, using the triangular inequality.
□
§.§ Proof of Proposition <ref>
The result follows from Lemmas <ref> and <ref>, using the variance decomposition
Var( η^(ℓ)) = 𝔼( Var( η^(ℓ) | ℱ_n ) ) + Var( 𝔼( η^(ℓ) | ℱ_n ) ).
Using Lemma <ref>, the first term in the variance decomposition is easily shown to verify
(n-ℓ) 𝔼( Var( η^(ℓ) | ℱ_n ) ) = 4 𝔼( Φ^2(X) V(X) ) + 𝔼(V^2(X) ) + o (1) ,
using that 𝔼(ℓΔ_n)/(n-ℓ) ≤𝔼(k Δ_n)/(n-k) ≤√(𝔼(k^2 Δ_n^2))/(n-k) = o (1/√(n)) = o(1) by (<ref>). For the second term, we have
Var( 1/n∑_i=1^n ϕ_i^2 ) = 1/nVar( Φ^2(X) )
so that Lemma <ref> combined with (<ref>) give us directly
Var( 𝔼( η^(ℓ) | ℱ_n ) ) = Var( Φ^2(X) )/n + o ( 1/n).
Hence,
(n- ℓ ) Var( 𝔼( η^(ℓ) | ℱ_n ) ) = n-ℓ/n Var( Φ^2(X) ) + o (1) = Var( Φ^2(X) ) + o (1)
and the result follows. □
§.§ Proof of Proposition <ref>
We follow the same steps as in the proofs of Lemma <ref> and Proposition <ref>, starting with
Cov(η^(ℓ) , η^(m) | ℱ_n ) = 1/(n- ℓ)(n- m)∑_i=1^n-ℓ∑_j ∈ S_i, ℓ, m Z_i,j,ℓ, m
where Z_i,j,ℓ,m = 𝔼( Y_(i) Y_(i+ℓ) Y_(j) Y_(j + m) | ℱ_n ) - ϕ_(i)ϕ_(i+ℓ)ϕ_(j)ϕ_(j + m) and for all i=1,..., n -ℓ,
S_i,ℓ,m = { i, i+ ℓ, i - m , i + ℓ - m }∩{1,...,n } .
In the (at most) four cases for j ∈ S_i,ℓ,m and i ∈{ m+1,..., n - ℓ - m },
| Z_i,j,ℓ,m - ϕ_(i)^2 v_(i)| ≤ C ( X_(i+ℓ+m) - X_(i-m))
for some constant C >0. Using the same arguments, we arrive at
| (n-ℓ) Cov( η^(ℓ), η^(m) | ℱ_n ) - 4/n∑_i=1^n ϕ_i^2 v_i | ≤ℓ/n-ℓ( C_1' Δ_n + C_2' ),
for some constants C_1', C_2' > 0. On the other hand
Cov( 𝔼( η^(ℓ) | ℱ_n ) , 𝔼( η^(m) | ℱ_n ) ) = Var( Φ^2(X) )/n + o ( 1/n)
by Lemma <ref>. We conclude using the decomposition
Cov( η^(ℓ), η^(m)) = 𝔼( Cov(η^(ℓ) , η^(m) | ℱ_n ) ) + Cov( 𝔼( η^(ℓ) | ℱ_n ) , 𝔼( η^(m) | ℱ_n ) ).
□
§.§ Proof of Theorem <ref>
Using Equation (<ref>), we obtain by straightforward calculation
n Var( η^(k)_av) = 1/k^2∑_ℓ, m = 1^k n Cov( η^(ℓ), η^(m)) = σ^2_opt + 1/k𝔼(V^2(X) ) + o(1),
after verification in the proofs that the residual terms o(1) of Propositions <ref> and <ref> are negligible uniformly for all ℓ, m ≤ k. The asymptotic efficiency follows for k growing to infinity. □
|
http://arxiv.org/abs/2306.09062v1
|
20230615114454
|
Theory of gravitational lensing on a curved cosmic string
|
[
"Igor I. Bulygin",
"Mikhail V. Sazhin",
"Olga S. Sazhina"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"gr-qc",
"hep-th"
] |
Theory of gravitational lensing on a curved cosmic string
Igor I. Bulygin^1,2 ([email protected])
Mikhail V. Sazhin^1 ([email protected])
These authors contributed equally to this work.
Olga S. Sazhina^1 ([email protected])
These authors contributed equally to this work.
^1 Sternberg Astronomical Institute of Lomonosov Moscow State University, Universitetsky pr., 13, Moscow, 119234, RF
^2 Astrophysical school “Traektoria”, Moscow, 107078, RF
We discuss in detail a complete mathematical model of gravitational lensing on a single cosmic string (CS) of general shape and position with respect to the line of sight.
CS are one-dimensional extended objects robustly predicted by modern cosmology. The presence of CS changes the global geometry of the Universe, could clarify the properties of the early Universe, including inflation models, and could serve as a unique proof of higher-dimensional theories.
Despite the fact that CS have not yet been reliably detected, there are several strong independent indications of their existence, based on CMB analysis and on searches for gravitational-lens chains with special properties. However, the previously considered models of straight CS represent only a small fraction of the general CS configurations to be observed.
Here we propose a model that could significantly increase the possibilities of an observational search for CS. More realistic models must include the inclinations and bends of the CS. Besides, recent analysis of observational data in the search for gravitational-lens candidates shows a large number of pairs that could be explained by the complex geometry of the CS.
July 31, 2023
=================
§ INTRODUCTION
Astronomers and physicists are closely approaching the search for nontrivial structures in the Universe, from topological defects to the consequences of the hidden multidimensionality of space-time. Such studies are actively supported by modern mathematical theory and by the existing gaps in our understanding of the unified picture of physical interactions and of the hidden sectors of matter: dark matter and dark energy.
Almost 50 years have passed since the prediction of cosmic strings (CS) as cosmic objects by T. W. B. Kibble <cit.>-<cit.>. CS were actively studied in subsequent works by Y. B. Zel'dovich <cit.>, A. Vilenkin and others <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. In particular, the role of CS in the formation of gravitational-lens images was shown in <cit.>, and the mechanism of generation of CMB anisotropy on CS was shown in <cit.>.
The existence of CS does not contradict all currently available cosmological observational data and is widely supported by theory. CS avoid the problems of other possible topological defects (single monopoles and domain walls), thus being the most interesting candidates from the point of view of observations. Variants of hybrid models (“dumbbells”, “beads”, “necklaces” – CS with monopoles at the ends and conglomerates of such structures) that do not contradict the observational data are also considered. They are particularly interesting because, firstly, they are preferable from the point of view of theory <cit.> – <cit.>, and secondly, they open up a wider area of space for their search. Indeed, a short dumbbell-type CS could be much closer to the observer than a CS of the same angular size “piercing” the surface of last scattering. A closer CS allows searching for more gravitationally lensed galaxies. Hybrid models also appear in superstring theory <cit.>.
The search for chains of gravitational lens events that a CS could form seems to be the most promising astrophysical test. Firstly, it is possible to use data from numerous surveys and carry out such a search in automatic mode <cit.>, and secondly, such a search complements the search for the CMB anisotropy. CS candidates, identified independently both in anisotropy data and by the presence of lens chains, are the most convincing.
The search for CS using gravitational lensed pair of distant objects began in the 1980s with the study of several pairs of quasars, <cit.>. Numerous further unsuccessful attempts are described in the works <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. The search for observational manifestations of CS loops has been undertaken repeatedly, but also has not yet led to positive results.
Methods of searching for gravitational-lens images (forming chains, the so-called “New Milky Way”), methods of searching for characteristic structures in the CMB anisotropy, as well as methods of gravitational-wave astronomy are applicable to a very wide class of CS predicted by the theory, being almost universal.
But for simplicity of calculations, it is usually assumed that the CS is located perpendicular to the line of sight; then two images of a distant source are formed in the plane of the CS lens and are not rotated relative to each other.
The main goal of this paper is to demonstrate that in the case of a CS with an inclination or with a bend in the picture plane, the resulting images will be asymmetrical, with different positional angles. This fact significantly expands the search for gravitationally lensed pairs.
The article is organized as follows. In Chapter 1 we provide a brief introduction to gravitational lensing on a single straight CS, based on the recent work <cit.>. In Chapter 2 we present the calculation of gravitational lensing effects due to a CS inclination. In Chapter 3 we present the model of gravitational lensing on a curved CS. Thus, in these last two chapters we consider a CS of general position with respect to the line of sight. We conclude by discussing the importance of considering the general-position CS and the new search strategy for gravitational lens pairs. After the Conclusion (Chapter 4) there are Appendixes. In Appendix A we provide the flat approximation for a CS space-time with a conical singularity. In Appendix B we provide the detailed calculations of the energy-momentum tensor for a curved CS. In Appendix C we describe photon trajectories for small metric perturbations. In Appendix D we give the derivation of a lens equation for a curved CS.
§ BRIEF INTRODUCTION TO GRAVITATIONAL LENSING ON A SINGLE STRAIGHT COSMIC STRING
The metric of the cosmic string in cylindrical coordinates (t, z, r, φ) has the well-known form:
g_μν = diag(1, -1, -1, -r^2(1 - 4 Gμ)^2)
It is a conical metric with a deficit angle Δθ = 8π Gμ in a plane perpendicular to the string (see <ref>). The string lensing model in the flat geometric approximation is the following.
ϕ = -η + 4π Gμ(1 - R_s/R_g)
ψ = η + 4π Gμ(1 - R_s/R_g)
where R_g is the distance from an observer to a source (a galaxy), R_s is the distance from an observer to the string, η is the first coordinate, the angle between the direction to the string and the direction to the source (ξ is the second coordinate, the angular coordinate along the string).
If in (<ref>) |η| < θ_E we have two images of the source, where
θ_E = 8π Gμ(1 - R_s/R_g)
The first image (shifted to the right of the string):
I_1(η, ξ) =
I(η - θ_E / 2, ξ), η > -θ_E
0, η≤ -θ_E
The second image (shifted to the left of the string), (Fig. <ref>):
I_2(η, ξ) =
I(η + θ_E / 2, ξ), η < θ_E
0, η≥θ_E
or
I_1+2(η, ξ) =
I(η + θ_E / 2, ξ), η < -θ_E
I(η + θ_E / 2, ξ) + I(η - θ_E / 2, ξ), |η| ≤θ_E
I(η - θ_E / 2, ξ), η > θ_E
which coincides with the results obtained earlier <cit.>.
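As an illustration of this two-image mapping, the following Python sketch (our own; the Gaussian source and the grid are arbitrary choices made for demonstration) applies the transformation above to a source brightness function I(η, ξ):

import numpy as np

def lensed_image(I, eta, xi, theta_E):
    # Two-image mapping of a straight string applied to a source brightness I(eta, xi).
    out = np.zeros_like(eta, dtype=float)
    left = eta < -theta_E
    mid = np.abs(eta) <= theta_E
    right = eta > theta_E
    out[left] = I(eta[left] + theta_E / 2, xi[left])
    out[mid] = I(eta[mid] + theta_E / 2, xi[mid]) + I(eta[mid] - theta_E / 2, xi[mid])
    out[right] = I(eta[right] - theta_E / 2, xi[right])
    return out

# Example: a circular Gaussian source centred on the string.
source = lambda e, x: np.exp(-(e ** 2 + x ** 2) / (2 * 0.3 ** 2))
eta, xi = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
img = lensed_image(source, eta, xi, theta_E=1.0)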
§ EFFECTS DUE TO A STRING INCLINATION
Let us consider the case when the cosmic string (CS) has an additional parameter, the inclination i > 0 (an inclination of 0^∘ corresponds to a string located perpendicular to the line of sight, an inclination of 90^∘ corresponds to a string parallel to the line of sight). In this case the lensing parameter θ_E will depend on the position of the source, θ_E = θ_E(i, ξ), due to two effects.
* for each ξ the CS will have a different distance to an observer, R_s = R_s(ξ),
* the effective deficit angle Δθ (i, ξ) for i > 0 is less than for i = 0.
Let us discuss the first effect. For the triangle “observer – point on the string at ξ = 0 – point on the string in the direction of the source”:
R_s(ξ)/sin(90^∘ - i) = R_s(ξ = 0)/sin(180^∘ - ξ - (90^∘ - i))
Then
R_s(ξ) = R_s(ξ = 0)/(cosξ + tan i sinξ) ≈ R_s(ξ = 0)/(1 + ξtan i)
θ_E = Δθ (i) ( 1 - R_s(ξ = 0)/(R_g (1 + ξtan i)))
We can define the distance to the string R_s to be R_s(ξ = 0) to shorten the formulae.
The deficit angle depends on the distance to the point of the string that we observe when passing the source. The conical metric is equivalent to the flat metric with cut, i.e. with two “effective observers” spaced by a distance of L:
Δθ· h = L
where h is the length of the perpendicular from observer to the CS:
h = R_s cos i
In the case of an inclined string we can adopt the same geometric framework, but in 3 dimensions. For every ξ the picture will be the same, except that now Δθ(i, ξ) is defined by equation (<ref>). Also, in this equation instead of h we have R_s(ξ). Since L is the same for all such ξ, then
Δθ R_s cos i = Δθ (i, ξ) R_s(ξ)
and
Δθ (i, ξ) = Δθ (cos i + ξsin i)
Finally,
θ_E(i, ξ) = Δθ (cos i + ξsin i) ( 1 - R_s/(R_g (1 + ξtan i)))
All the other steps in constructing a lensing transformation are the same, so we end up with (see Fig. <ref>, <ref>):
I_1+2(η, ξ) =
I(η + θ_E(i, ξ) / 2, ξ), η < -θ_E(i, ξ)
I(η + θ_E(i, ξ) / 2, ξ) + I(η - θ_E(i, ξ) / 2, ξ), |η| ≤θ_E(i, ξ)
I(η - θ_E(i, ξ) / 2, ξ), η > θ_E(i, ξ)
Since ξ≪ 1, it is more useful to make an expansion of θ_E(i, ξ):
θ_E(i, ξ) = θ_E(i, ξ = 0) + ∂θ_E/∂ξ|_ξ = 0·ξ
where
θ_E(i, ξ = 0) = Δθ cos i ( 1 - R_s/R_g)
∂θ_E/∂ξ|_ξ = 0 = Δθsin i
Note that from here the rate of increase of θ_E is limited by the value of Δθ in radians. Given the current limits (Δθ≲ 10^-5), the effect of the tilt on lensing can be neglected in almost all realistic cases.
It should also be borne in mind that such a picture may occur even if only the part of the string near the line of sight has a large slope.
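For reference, the inclination-dependent lensing angle of equation (<ref>) can be evaluated directly; the small Python sketch below is our own, with all angles in radians and delta_theta standing for the deficit angle Δθ = 8π G μ.

import numpy as np

def theta_E_inclined(xi, inc, delta_theta, R_s, R_g):
    # theta_E(i, xi) = dtheta * (cos i + xi sin i) * (1 - R_s / (R_g * (1 + xi tan i)))
    return delta_theta * (np.cos(inc) + xi * np.sin(inc)) * (
        1.0 - R_s / (R_g * (1.0 + xi * np.tan(inc))))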
§ MODEL OF GRAVITATIONAL LENSING ON A CURVED STRING
In the previous paragraph we have discussed the effects of inclination on a picture one can get from an object that lies behind the string. Now the model can be developed further.
If a CS on the line of sight has a bend with an angle θ≠ 0^∘, the metric does not have the form (<ref>). To get the picture of a galaxy behind the CS one should calculate the geodesics using the general relativity framework.
Consider a static CS that consists of 2 straight lines connected by a Bézier curve (see Fig. <ref>). For simplicity, the string is located in the (x^1, x^2) plane, since the effect of inclination (see Chapter <ref>) can be neglected:
X^1(s) = x(s) =
0, s ∈ (-∞, - R)
(R sinθ/4)(1 + s/R)^2, s ∈ [-R, R]
s sinθ, s ∈ (R, +∞)
X^2(s) = y(s) =
s, s ∈ (-∞, - R)
(R cosθ/4)(1 + s/R)^2 - (R/4)(1 - s/R)^2, s ∈ [-R, R]
s cosθ, s ∈ (R, +∞)
X^3(s) = z(s) = R_g - R_s
The angle θ describes how much the string changes its positional angle and R is the characteristic length of the bend. After we have defined the string location, we need to compute the energy-momentum tensor (EMT) of such a string, using the standard formula:
T_μν(𝐱) = ℱ^-1{μ∫_-∞^+∞ dσexp(- i 𝐤𝐗) ( ∂_tX_μ∂_tX_ν - ∂_σX_μ∂_σX_ν)}
where ℱ^-1 is the inverse Fourier transform from the frequency domain 𝐤 to the coordinate domain 𝐱. The complete derivation of the EMT can be seen in Appendix <ref>. If we assume that the bend is small compared to the other parts of the string (R = 0), the EMT can be written as:
T_μν =
μ(
[ δ_↓ + δ_↑ 0 0 0; 0 - δ_↑sin^2θ - δ_↑sinθcosθ 0; 0 - δ_↑sinθcosθ -δ_↓ - δ_↑cos^2θ 0; 0 0 0 0; ])
where:
δ_↓ = δ(z)δ(x)(1 - H(y) )
δ_↑ = δ(z)δ(xcosθ - ysinθ)H(xsinθ + ycosθ)
and H(x) is a unit step Heaviside function.
According to the linearized Einstein equation, nonzero components of the EMT give us the nonzero components of the metric perturbation h_μν. The metric itself is divergent if we assume an infinitely thin CS, but the Christoffel symbols and the geodesic equation for photons in such a space can be derived. The full derivation can be found in Appendixes <ref> and <ref>. The final result of this section is an initial boundary problem (IBP) for the photon trajectory, which should be solved numerically in order to get the lensed picture:
d𝐯/da = -1/2∇_𝐧(h_↑ + h_↓) - [∂ h_↑/∂ a( [ cos^2θ sinθcosθ; sinθcosθ sin^2θ ]) + ∂ h_↓/∂ a( [ 1 0; 0 0 ])] 𝐯
d𝐧/da = 𝐯
𝐧(a = 0) = 𝐧_0
𝐧(a = 1) =0
where 𝐧_0 is the initial direction to the point source. The definitions for a, 𝐧, h_↑, ↓ and the partial derivatives of functions h_↑, ↓ can be seen in the Appendix <ref>.
After getting the solution of the IBP we can calculate 𝐯_f = -𝐯(a = 1), the direction under which the light from the point source 𝐧_0 is seen. Since the IBP can have several solutions, we can numerically scan the area around 𝐧_0 using the shooting method to formally obtain a map -𝐯_f(𝐧_0). After that the lensing picture can also be computed numerically. The result of this computation can be seen in Fig. (<ref>), (<ref>).
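A schematic implementation of this shooting procedure is sketched below (our own illustration, assuming SciPy). The callable grad_h must return the six derivatives of h_↑, ↓ listed in Appendix <ref> and is left as a user-supplied function; only a single root of the boundary value problem is sought here, whereas the full lens map may require scanning several initial guesses.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

def rhs(a, state, grad_h, theta):
    # state = (n_x, n_y, v_x, v_y); grad_h(n, a) returns
    # (dh_up/dn_x, dh_up/dn_y, dh_dn/dn_x, dh_dn/dn_y, dh_up/da, dh_dn/da).
    n, v = state[:2], state[2:]
    hux, huy, hdx, hdy, hua, hda = grad_h(n, a)
    c, s = np.cos(theta), np.sin(theta)
    M = hua * np.array([[c * c, s * c], [s * c, s * s]]) \
        + hda * np.array([[1.0, 0.0], [0.0, 0.0]])
    dv = -0.5 * np.array([hux + hdx, huy + hdy]) - M @ v
    return np.concatenate([v, dv])

def shoot(v0, n0, grad_h, theta):
    # Integrate from a = 0 to a = 1; n(1) must vanish for the true photon path.
    sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([n0, v0]),
                    args=(grad_h, theta), rtol=1e-8)
    return sol.y[:2, -1]

def solve_photon(n0, grad_h, theta):
    # Find v(0) so that the ray emitted at R_g*n0 reaches the observer,
    # then return v_f = -v(a=1), the apparent direction of the source.
    res = root(shoot, x0=-np.asarray(n0, dtype=float), args=(n0, grad_h, theta))
    sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([n0, res.x]),
                    args=(grad_h, theta), rtol=1e-8)
    return -sol.y[2:, -1]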
One can also notice that for a large bend angle the double image cannot be obtained even for an object that lies behind the string (see Fig. <ref>). From numerical simulations the critical angle is ∼ 13^∘. This result can be an argument for the lack of double images of galaxies.
It was also shown in <cit.> and <cit.> that CS can produce more than 2 images of distant sources due to their small-scale structure, which can be the case in our model if one incorporates more than one bend. However, the modern limitations on the deficit angle are of the same magnitude as our angular resolution in visible light. This means that GL events on CS with more than 2 images are difficult to find and analyze.
§ CONCLUSIONS
The model presented in this paper generalizes, for the first time, gravitational lensing on a CS of general position. Gravitational lensing on a CS with an inclination in the plane containing the line of sight is considered, as well as gravitational lensing on a CS curved in the plane perpendicular to the line of sight. In this work, the curvature of the CS is given by a single bend with a given angle. A fundamentally new result is that the bending of the CS crucially affects the number of images: for larger values of the bending angle (approximately more than 13^∘) the second image disappears, which can serve as an argument to explain the absence of a large number of gravitational-lens chains (new “Milky Ways”).
The simulation results are applied by the authors to the analysis of gravitational-lens candidates in the field of the string candidate CSc-1 previously found from CMB analysis <cit.> (in preparation).
For cosmology, astrophysics, and theoretical physics the discovery of CS is without a doubt a huge step in understanding the structure of the Universe, especially its global properties and the earliest stages of its evolution.
The detailed theoretical study presented in this paper opens up fundamentally new ways to search for gravitational-lens events, which, together with the analysis of CMB anisotropy, will allow statistically significant detection of CS.
Data availability
The code for the modeling of lensing by the inclined and bended string can be sent upon request by e-mail.
Acknowledgments
I.I. Bulygin acknowledges the financial support by the <<BASIS>> foundation and expresses its gratitude to the <<Traektoria>> foundation.
§ FLAT APPROXIMATION FOR A CS SPACE-TIME WITH A CONICAL SINGULARITY
Geodesic trajectory has the form:
d^2 x^λ/ds^2 = - Γ^λ_μνd x^μ/dsd x^ν/ds
where
Γ^λ_μν=g^λα/2(∂ g_μα/∂ x^ν + ∂ g_να/∂ x^μ - ∂ g_μν/∂ x^α)
Nonzero derivatives in the metric of the cosmic string only has g_φφ component, so only non-zero Christoffel symbols are:
Γ^φ_φ r = g^φφ/2∂ g_φφ/∂ r = 1/r
Γ^φ_r φ = g^φφ/2∂ g_φφ/∂ r = 1/r
Γ^r_φφ = r (1 - 4 G μ)^2
The first two symbols are the same as for flat space and do not contain string parameters. This means that we can consider the space to be flat. The third symbol appears in combination with the derivatives dφ/ds. If we make the replacement:
φ' = φ (1 - 4 G μ)
Then all the equations will take the form as for a flat conical space with deficit angle Δθ = 8π G μ in the plane perpendicular to the string.
§ ENERGY-MOMENTUM TENSOR FOR A CURVED STRING
To define the EMT of the curved string, we assume the area on which the galaxy is lensed to be small, so only one segment with nonzero curvature is needed for the model. Thus we approximate the string with two straight lines connected by a Bézier curve:
X^1(s) = x(s) =
0, s ∈ (-∞, - R)
(R sinθ/4)(1 + s/R)^2, s ∈ [-R, R]
s sinθ, s ∈ (R, +∞)
X^2(s) = y(s) =
s, s ∈ (-∞, - R)
(R cosθ/4)(1 + s/R)^2 - (R/4)(1 - s/R)^2, s ∈ [-R, R]
s cosθ, s ∈ (R, +∞)
X^3(s) = z(s) = R_g - R_s
In this article we use the convention X^0 = t and assume the static string.
The parameter s in string location 𝐗 is not its natural parameterization σ, which is used in calculation of EMT:
T_μν(𝐱) = ℱ^-1{μ∫_-∞^+∞ dσexp(- i 𝐤𝐗) ( ∂_tX_μ∂_tX_ν - ∂_σX_μ∂_σX_ν)}
The connection between s and σ is:
dσ/ds = ρ(s) = √(x'(s)^2 + y'(s)^2 + z'(s)^2 ) =
1, s ∈ (-∞, - R)
√(cos^2 θ/2 + (s/R)^2 sin^2θ / 2), s ∈ [-R, R]
1, s ∈ (R, +∞)
This connection transforms the EMT into:
T^μν(𝐱) = μ∫_-∞^+∞ρ(s) ds ( ∂_tX^μ∂_tX^ν - ∂_σX^μ∂_σX^ν) δ^(3)(𝐱 - 𝐗(s))
The main step in calculation of EMT is to split the integral into 3 parts:
* ↓ - for s ∈ (-∞, - R), where the positional angle is 0;
* ↑ - for s ∈ (R, +∞), where the positional angle is θ;
* B - for s ∈ [-R, R], where the bend is located.
and then find integrals that describe all components of the EMT in some linear combination, for example:
T^00(𝐱) = μ∫_-∞^-R ds δ^(3)(𝐱 - 𝐗(s)) +
+ μ∫_-R^R ds √(cos^2 θ/2 + (s/R)^2 sin^2θ / 2)×δ^(3)(𝐱 - 𝐗(s)) +
+ μ∫_R^+∞ ds δ^(3)(𝐱 - 𝐗(s)) =
= μ (δ_↓ + Δ_00^B + δ_↑)
where the calculated integrals are respectfully:
δ_↓ = δ(z)δ(x)(1 - H(y + R) )
Δ_00^B = ∫_-R^R ds √(cos^2 θ/2 + (s/R)^2 sin^2θ / 2)×δ^(3)(𝐱 - 𝐗(s))
δ_↑ = δ(z)δ(xcosθ - ysinθ)H(xsinθ + ycosθ - R)
Some components are trivially zero, such as:
T^0i = T^i0 = T^3μ = T^μ 3 = 0
The other ones include ∂_σ = ρ(s)^-1∂_s:
T^11 (𝐱) = 0 -
- μ/4∫_-R^R ds sin^2θ( 1 + s/R)^2/√(cos^2 θ/2 + (s/R)^2 sin^2θ / 2)×δ^(3)(𝐱 - 𝐗(s)) -
- μsin^2θ∫_R^+∞ ds δ^(3)(𝐱 - 𝐗(s)) =
= - μ ( Δ_11^B + δ_↑sin^2θ )
T^12 (𝐱) = 0 -
- μ/4∫_-R^R ds [ (1 + s/R) cosθ + (1 - s/R)](1 + s/R) sinθ/√(cos^2 θ/2 + (s/R)^2 sin^2θ / 2)×δ^(3)(𝐱 - 𝐗(s)) -
- μcosθsinθ∫_R^+∞ ds δ^(3)(𝐱 - 𝐗(s))
= - μ ( Δ_12^B + δ_↑sinθcosθ )
T^22 (𝐱) = -μ∫_-∞^-R ds δ^(3)(𝐱 - 𝐗(s)) -
- μ/4∫_-R^R ds [ (1 + s/R) cosθ + (1 - s/R)]^2/√(cos^2 θ/2 + (s/R)^2 sin^2θ / 2)×δ^(3)(𝐱 - 𝐗(s)) -
- μcos^2θ∫_R^+∞ ds δ^(3)(𝐱 - 𝐗(s))
= -μ (δ_↓ + Δ_22^B + δ_↑cos^2θ )
All the integrals named Δ_ij^B are of the form:
δ(z) ∫_-R^R f(s) δ(x_0 - x(s)) δ(y_0 - y(s)) ds =
f(s_0)δ(z)δ(y_0 - y(s_0)) / |x'(s_0)| |_x_0 = x(s_0), if y_0 = y(x_0)
0, if y_0 ≠ y(x_0)
Thus the exact solutions for Δ_ij^B are:
Δ_00^B = δ(z) δ(y + x tan(θ/2) - 2√(xR/sinθ) + R ) √( (cos^2(θ/2) + sin^2(θ/2)(2√(x/(Rsinθ)) - 1)^2) / (xsinθ/R) )×
× (1 - H(xsinθ + ycosθ - R)) H(y + R)
Δ_11^B = δ(z) δ(y + x tan(θ/2) - 2√(xR/sinθ) + R ) √( (xsinθ/R) / (cos^2(θ/2) + sin^2(θ/2)(2√(x/(Rsinθ)) - 1)^2) )×
× (1 - H(xsinθ + ycosθ - R)) H(y + R)
Δ_12^B = δ(z) δ(y + x tan(θ/2) - 2√(xR/sinθ) + R ) (1 - √(x/(Rsinθ)) (1 - cosθ))/√(cos^2(θ/2) + sin^2(θ/2)(2√(x/(Rsinθ)) - 1)^2)×
× (1 - H(xsinθ + ycosθ - R)) H(y + R)
Δ_22^B = δ(z) δ(y + x tan(θ/2) - 2√(xR/sinθ) + R ) √(R/(x sinθ)) (1 - √(x/(Rsinθ)) (1 - cosθ))^2/√(cos^2(θ/2) + sin^2(θ/2)(2√(x/(Rsinθ)) - 1)^2)×
× (1 - H(xsinθ + ycosθ - R)) H(y + R)
So the full answer is:
T_μν =
μ(
[ δ_↓ + Δ_00^B + δ_↑ 0 0 0; 0 -Δ_11^B - δ_↑sin^2θ -Δ_12^B - δ_↑sinθcosθ 0; 0 -Δ_12^B - δ_↑sinθcosθ -δ_↓ - Δ_22^B - δ_↑cos^2θ 0; 0 0 0 0; ])
To solve the linearized Einstein's equations we also need the source function. In the approximation of small bend (R ≪ R_s Δθ or simply R = 0):
S_μν =
μ(
[ 0 0 0 0; 0 -δ_↓ -δ_↑cos^2θ - δ_↑sinθcosθ 0; 0 - δ_↑sinθcosθ -δ_↑sin^2θ 0; 0 0 0 δ_↓ + δ_↑; ])
§ PHOTON TRAJECTORIES FOR SMALL METRIC PERTURBATION
Let x^1, x^2 be the axes parallel to the picture plane and x^3 the axis parallel to the line of sight. The source (galaxy) will be at x^3 = 0, the observer will be located at x^3 = R_g. The metric perturbation (in this article, the curved cosmic string) between the observer and the source will be placed at x^3 = R_g - R_s and it will be of the form:
h_μν =
(
[ 0 0 0 0; 0 h_11 h_12 0; 0 h_12 h_22 0; 0 0 0 h_33; ])
Since photon travel along the null geodesics, it is convenient to choose time x^0 = t as a parameter:
d^2 x^i/dt^2 = - ( Γ^i_μν - Γ^0_μνd x^i/dt) d x^μ/dtd x^ν/dt, i = 1, 2, 3
In the weak field approximation one can write:
Γ^λ_μν≈1/2(∂ h_μ^. λ/∂ x^ν + ∂ h_ν^. λ/∂ x^μ - ∂ h_μν/∂ x_λ)
It is easy to see that for this metric perturbation:
Γ^0_μν = 1/2(∂ h_μ^. 0/∂ x^ν + ∂ h_ν^. 0/∂ x^μ - ∂ h_μν/∂ x_0) = 1/2(0 + 0 - 0 ) = 0
thus the equation for photon trajectory is simple:
d^2 x^i/dt^2 = - Γ^i_μνd x^μ/dtd x^ν/dt, i = 1, 2, 3
The complete list of all Christoffel symbols is shown in the tables <ref>, <ref>, <ref>:
Recall v_i = dx^i / dt and the equations are:
d v^1/dt = 1/2 ∂ h_11/∂ x^1 ( v^1 )^2 + (∂ h_21/∂ x^2 - 1/2 ∂ h_22/∂ x^1) ( v^2 )^2 - 1/2 ∂ h_33/∂ x^1( v^3 )^2 +
+ ∂ h_11/∂ x^2 v^1 v^2 + ∂ h_11/∂ x^3 v^1 v^3 + ∂ h_12/∂ x^3 v^2 v^3
d v^2/dt = 1/2 ∂ h_22/∂ x^2 ( v^2 )^2 + (∂ h_21/∂ x^1 - 1/2 ∂ h_11/∂ x^2) ( v^1 )^2 - 1/2 ∂ h_33/∂ x^2( v^3 )^2 +
+ ∂ h_22/∂ x^1 v^1 v^2 + ∂ h_12/∂ x^3 v^1 v^3 + ∂ h_22/∂ x^3 v^2 v^3
d v^3/dt = - 1/2 ∂ h_33/∂ x^3( v^3 )^2 - 1/2 ∂ h_11/∂ x^3( v^1 )^2 - 1/2 ∂ h_22/∂ x^3( v^2 )^2 -
- 1/2 ∂ h_33/∂ x^1 v^1 v^3
- 1/2 ∂ h_33/∂ x^2 v^2 v^3
- 1/2 ∂ h_12/∂ x^3 v^1 v^2
The lensing effect is small, so we can just adopt the first order expansion in μ for v^1 and v^2. We can use an approximation v^3 = 1 since it will be the second order correction at most in the first two equations:
d v^1/dt = - 1/2 ∂ h_33/∂ x^1 + ∂ h_11/∂ x^3 v^1 + ∂ h_12/∂ x^3 v^2
d v^2/dt = - 1/2 ∂ h_33/∂ x^2 + ∂ h_12/∂ x^3 v^1 + ∂ h_22/∂ x^3 v^2
v^3 = 1
Using the third equation in (<ref>), we can replace the d/dt by d/dz and this is the final result before the derivation of lens equation.
§ SOLVING A LENS EQUATION FOR A CURVED CS
Before the derivation we need to solve the linearized Einstein's equation for a static bended string:
h_μν(𝐱) = 4G ∫_ℝ^3 d𝐱' S_μν/|𝐱 - 𝐱'|
For the purpose of simplicity we will consider the case R ≪ R_s Δθ so the field from bend s∈ [-R, R] can be neglected. Recalling:
h_↑,↓ = Δθ/2π∫_ℝ^3 d𝐱' δ_↑, ↓/|𝐱 - 𝐱'|
we can rewrite geodesic equations (<ref>):
d 𝐯/dz = -1/2( [ ∂_x; ∂_y ]) (h_↑ + h_↓) ≡ -𝐀(x, y, z) - [∂ h_↑/∂ z( [ sin^2θ sinθcosθ; sinθcosθ cos^2θ ]) + ∂ h_↓/∂ z( [ 0 0; 0 1 ])] 𝐯 ≡ -𝐀(x, y, z) - 𝐁̂(x, y, z) 𝐯
where 𝐯 = (v^1, v^2)^T. This equation is particularly important since in linear approximation 𝐯 is an angle between the photon path and the z-axis. Suppose that without the string lensing a point is seen from direction 𝐧, 𝐯_i is the initial direction of photon's path and -𝐯_f is the final direction under which the point is seen with a lens. Thus, the lens equation is:
𝐧 = -𝐯_f - (𝐯_f - 𝐯_i)(R_g - R_s)/R_g
If we know the initial image without the lens, then we know 𝐧. Our task is to find 𝐯_f. The connection between 𝐯_i and 𝐯_f is given by the solution of DE (<ref>) with the initial condition 𝐯(R_g 𝐧, z = 0) = -𝐯_i. This approximation is valid only in the case when all the lensing happens at z = R_g - R_s, near the string. But the string is not a compact object, so we propose another scheme.
Suppose the radiation is emitted from the point 𝐫(z = 0) = R_g 𝐧_0. 𝐫 = (x(z), y(z))^T is a vector in the picture plane with a fixed value of z. The ray should have the initial conditions 𝐯_0 such that it travels to the observer, so 𝐫(z = R_g) =0. Thus the boundary value problem can be formulated:
d𝐯/dz = -𝐀(𝐫, z) - 𝐁̂(𝐫, z)𝐯
d𝐫/dz = 𝐯
𝐫(z = 0) = R_g 𝐧_0
𝐫(z = R_g) =0
Once this problem is solved, we can calculate 𝐯(z = R_g) = -𝐯_f and create a map 𝐯_f(𝐧_0), which is our lens equation.
To solve this equation numerically, we can treat 𝐯_f as a parameter in the shooting method for the IBP (<ref>). To further simplify the process of numerical integration, we rescale the spatial variables:
a = z / R_g ∈ [0, 1]
𝐧 = 𝐫 / R_g
and the IBP reads:
d𝐯/da = -1/2∇_𝐧(h_↑ + h_↓) - [∂ h_↑/∂ a( [ cos^2θ sinθcosθ; sinθcosθ sin^2θ ]) + ∂ h_↓/∂ a( [ 1 0; 0 0 ])] 𝐯
d𝐧/da = 𝐯
𝐧(a = 0) = 𝐧_0
𝐧(a = 1) =0
We also need all the derivatives of string metric: ∇_𝐧 h_↑, ↓, ∂ h_↑, ↓ / ∂ a. They are (in terms of scaled variables, r = R_s / R_g):
∂ h_↑/∂ a = Δθ/2π · (a - 1 + r)/((a - 1 + r)^2 + (n_x cosθ - n_y sinθ)^2) · ( 1 + (n_xsinθ + n_y cosθ)/√(𝐧^2 + (a - 1 + r)^2))
∂ h_↓/∂ a = Δθ/2π · (a - 1 + r)/((a - 1 + r)^2 + n_x^2) · ( 1 - n_y/√(𝐧^2 + (a - 1 + r)^2))
∂ h_↑/∂ n_x = Δθ/2π [ -sinθ/√(𝐧^2 + (a - 1 + r)^2) + cosθ · (n_x cosθ - n_y sinθ)/((a - 1 + r)^2 + (n_x cosθ - n_y sinθ)^2) · ( 1 + (n_xsinθ + n_y cosθ)/√(𝐧^2 + (a - 1 + r)^2)) ]
∂ h_↓/∂ n_x = Δθ/2π · n_x/((a - 1 + r)^2 + n_x^2) · ( 1 - n_y/√(𝐧^2 + (a - 1 + r)^2))
∂ h_↑/∂ n_y = Δθ/2π [ -cosθ/√(𝐧^2 + (a - 1 + r)^2) - sinθ · (n_x cosθ - n_y sinθ)/((a - 1 + r)^2 + (n_x cosθ - n_y sinθ)^2) · ( 1 + (n_xsinθ + n_y cosθ)/√(𝐧^2 + (a - 1 + r)^2)) ]
∂ h_↓/∂ n_y = Δθ/2π · 1/√(𝐧^2 + (a - 1 + r)^2)
|
http://arxiv.org/abs/2306.04001v1
|
20230606202837
|
One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers
|
[
"Sriram Ravula",
"Varun Gorti",
"Bo Deng",
"Swagato Chakraborty",
"James Pingenot",
"Bhyrav Mutnury",
"Doug Wallace",
"Doug Winterberg",
"Adam Klivans",
"Alexandros G. Dimakis"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"eess.SP"
] |
One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers
Sriram Ravula1,
Varun Gorti1,
Bo Deng1,
Swagato Chakraborty2,
James Pingenot2,
Bhyrav Mutnury3,
Doug Wallace3,
Doug Winterberg3,
Adam Klivans1, and
Alexandros G. Dimakis1
1University of Texas at Austin,
2Siemens,
3Dell
Email: {sriram.ravula, vgorti, bodeng}@utexas.edu,
{swagato.chakraborty, james.pingenot}@siemens.com,
{bhyrav.mutnury, doug.wallace, doug.winterberg}@dell.com,
[email protected], [email protected]
July 31, 2023
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
A key problem when modeling signal integrity for passive filters and interconnects in IC packages is the need for multiple S-parameter measurements within a desired frequency band to obtain adequate resolution. These samples are often computationally expensive to obtain using electromagnetic (EM) field solvers. Therefore, a common approach is to select a small subset of the necessary samples and use an appropriate fitting mechanism to recreate a densely-sampled broadband representation. We present the first deep generative model-based approach to fit S-parameters from EM solvers using one-dimensional Deep Image Prior (DIP). DIP is a technique that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements. We design a custom architecture and propose a novel regularization inspired by smoothing splines that penalizes discontinuous jumps. We experimentally compare DIP to publicly available and proprietary industrial implementations of Vector Fitting (VF), the industry-standard tool for fitting S-parameters. Relative to publicly available implementations of VF, our method shows superior performance on nearly all test examples using only 5-15% of the frequency samples.
Our method is also competitive to proprietary VF tools and often outperforms them for challenging input instances.
s-parameter, vector fitting, deep image prior
Code available at <https://github.com/Sriram-Ravula/Curvefitting-DIP>.
§ INTRODUCTION
Modern electronic system design requires accurate electromagnetic (EM) characterization of interconnects to ensure acceptable signal integrity.
The interconnects are often designed without solid referencing, due to competitive pricing targets for high volume products and in some cases due to flex interconnects for consumer electronic products with small form factors. As a result, computationally cheaper EM characterization methods such as transmission line analysis fail to deliver sufficient accuracy at high frequencies and 3D EM simulation is required. Despite recent advances in 3D computational
electromagnetics and parallel solvers <cit.>,
accurate 3D characterization of complex interconnects remains computationally prohibitive. For frequency domain broadband characterization of digital interconnects this expensive computation needs to be carried out at multiple frequencies to provide enough resolution and length of time domain simulation.
A common practice is to use some fitting techniques where the 3D EM field solver does not run at all of the desired frequency points, and the final frequency sweep output is obtained by fitting a subset of frequencies. Traditionally this interpolation is carried out by model order reduction techniques such as AWE expansion with Pade approximation <cit.> or Vector Fitting (VF) <cit.>. The advantage of the latter is that it allows the core EM field solver to be treated like a black box, and is often preferred for commercial tools, enabling modularity for the frequency domain EM engine. VF is often used in an adaptive loop to predict the next best sampling frequency with a convergence target. Vector Fitting methods represent the incoming samples in a pole-residue form and use a least squares method to obtain the coefficients of the pole-residue expansion.
One limitation of Vector Fitting is its sensitivity to the number of pole pairs used in the expansion. Too few poles result in under-fitting and losing important details of the true frequency domain response, while too many poles result in over-fitting and non-physical signatures leading to non-passive behavior. In some cases, it is possible to run a parameter search to pick the best number of pole pairs – however this does not guarantee a relief from over-fitting and is expensive, especially for interconnect models with a large number of ports and complex frequency domain signatures.
We present the first deep generative model-based approach to fit S-parameters from EM solvers.
We develop a novel interpolation technique using the framework of Deep Image Prior (DIP) <cit.>, a powerful deep learning method that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements.
§.§ Our Contributions
* We construct the first deep generative prior for S-parameters. Our design uses a custom architecture and a regularization inspired by smoothing splines that penalizes discontinuous jumps.
* Our method requires no pre-training, and can interpolate curves using only 5–15% of the required output frequency samples.
* We introduce a novel sparsity constraint on third derivatives, resulting in improved approximations in PSNR.
* We outperform public Vector Fitting implementations by an order of magnitude, and give improvements to proprietary methods on electrically-long channels (Figure <ref> and Table <ref>).
* Our method can produce multiple samples from the posterior distribution, allowing us to obtain uncertainty estimates that are well-correlated with the real error (Figure <ref>).
§ BACKGROUND
§.§ Curve Fitting S-parameters
In our problem setting, there is a generating function f(·): ℝ→ℂ^p × p that takes frequency values as input and produces complex-valued matrices as output. Here, p is the number of ports in the EM system. For a given frequency ω, we denote the resulting matrix as f(ω) ≡𝐒_ω, with the individual entries 𝐒_ω[i, j] ∈ℂ, i, j ∈{0, …, p-1} termed S-parameters.
It is generally impossible to create an analytical model of a channel's frequency response. Fortunately, for most applications it suffices to have a set of densely-sampled points from the underlying function. For a given vector of query frequencies Ω = [ω_0, …, ω_f-1], we wish to obtain the samples {𝐒_ω_0, …, 𝐒_ω_f-1}. We denote by 𝐒∈ℂ^p × p × f the concatenation of the sample matrices along a new frequency dimension, where 𝐒[i, j, k] = 𝐒_ω_k[i, j], for i, j ∈{0, …, p-1} and k ∈{0, …, f-1}. Therefore, our ultimate goal is to find 𝐒.
Given the physical specifications of a system and a query frequency ω, an EM field solver performs simulations to output 𝐒_ω at the requested frequency. However, field solvers are extremely computationally expensive, with simulations for the most complex systems taking up to several hours per frequency point on modern, high-performance workstations with a large number of compute cores.
The computational requirement is typically reduced by querying a vector of sub-sampled frequencies Ω_y = [ω_y_0, …, ω_y_f'-1], where the new index set is a subset of the original query index set: {y_0, …, y_f'-1}⊆{0, …, f-1}, and f' ≤ f is the number of sub-sampled frequency points. We again concatenate the resultant sub-sampled outputs {𝐒_ω_y_0, …, 𝐒_ω_y_f'-1} along a new frequency dimension and call the resulting complex-valued tensor 𝐘∈ℂ^p × p × f' the measurements or observations. The goal of curve-fitting is to recover the densely-sampled frequency series 𝐒 given the sub-sampled measurements 𝐘.
§.§ Vector Fitting
The most widely-used method for curve fitting S-parameters is Vector Fitting <cit.>, which directly parameterizes a functional approximation of the underlying generating function f(·). This method relies on the observation that the frequency response of a linear system can be modeled as a complex-valued rational function of the form
h(s) = 𝐝 + s𝐞 + ∑_k=0^K-1𝐜_k/(s - p_k),
where the functional model is h(·): ℂ→ℂ^p × p, the argument s = σ + jω is defined in the Laplace domain, and the goal is that the fitted model matches the measurements at every observed frequency point: h(s = jω_t) = 𝐒_ω_t, ∀ t ∈{y_0, …, y_f'-1}. Here, 𝐝∈ℂ^p × p is the DC bias at ω = 0, 𝐞∈ℂ^p × p is a scale factor for frequency-dependent bias, 𝐜∈ℂ^K × p × p is a tensor of complex residues with matrix-valued entries 𝐜_k ∈ℂ^p × p for k ∈{0, … K-1}, and 𝐩∈ℂ^K is a vector of complex poles with entries p_k ∈ℂ. The number of poles, K, is a hyperparameter chosen using heuristics or cross-validation. The parameters 𝐝, 𝐞, 𝐜, and 𝐩 are optimized iteratively, either for a predetermined number of steps or until the model fits the measurements to a specific tolerance. Subsequent additions to the original Vector Fitting method have included improvements to the iterative algorithm <cit.> and enforcement of physical constraints on the underlying system <cit.>. While fast relative to 3D field solvers, VF operations with a large number of poles can be computationally expensive for systems with high numbers of ports and frequency samples <cit.>.
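As a rough numpy sketch (our own illustration, not code from any VF package), the pole-residue model above can be evaluated on a frequency grid as follows; the function and argument names are assumptions of this sketch.

import numpy as np

def eval_pole_residue(omega, d, e, c, poles):
    """Evaluate h(s) = d + s*e + sum_k c_k / (s - p_k) at s = j*omega.

    omega : (f,) real frequencies
    d, e  : (p, p) complex DC bias and frequency-proportional term
    c     : (K, p, p) complex residues
    poles : (K,) complex poles
    Returns the model response with shape (f, p, p).
    """
    s = 1j * omega  # Laplace variable evaluated on the imaginary axis (sigma = 0)
    terms = c[None] / (s[:, None, None, None] - poles[None, :, None, None])
    return d[None] + s[:, None, None] * e[None] + terms.sum(axis=1)

A least-squares VF iteration would alternate between relocating the poles and re-solving for 𝐝, 𝐞 and 𝐜 so that this model matches the observed samples.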
§.§ Deep Image Prior
Deep Image Prior is an unsupervised learning framework that uses the convolutional architecture of an untrained neural network as a prior for solving image inverse problems <cit.>. Suppose we are given y = A(x^*) + η, where y ∈ℝ^m are noisy linear measurements, A(·): ℝ^n→ℝ^m is a known, differentiable forward operator, x^* ∈ℝ^n is an unknown image, and η∈ℝ^m is additive white noise. Usually the system is underdetermined, i.e. m < n, and has infinitely many possible solutions. Because of this, we need some form of regularization or prior to constrain the set of solutions.
DIP finds a solution x̂, defined as
x̂ = G_θ^*(z), θ^* = argmin_θ‖ y - A(G_θ(z))‖_2^2,
where G_θ is an untrained convolutional neural network (CNN), parameterized by randomly-initialized weights θ∈ℝ^d, with latent input vector z ∈ℝ^k (which can be optimized along with θ). Even though the network has not been trained and only gets to see noisy measurements of a single image, DIP achieves high-quality reconstructions for various problems such as denoising, super-resolution, and inpainting <cit.>. The central idea is that the deep convolutional generator structure allows it to fit natural signals faster than noise, which explains its ability to denoise and interpolate.
Numerous works have developed the DIP framework in various directions. The authors of One-dimensional DIP demonstrate that deep networks with one-dimensional convolutions are a good prior for time series inverse problems <cit.>. Other works take a Bayesian viewpoint of DIP for uncertainty quantification <cit.>. Bayes' Rule gives the relationship between the posterior p(θ|y) that we want to characterize, likelihood p(y|θ) we get from the measurements, and prior p(θ) imposed by our model as p(θ|y) ∝ p(y|θ) p(θ). The authors of <cit.> analyze DIP from a Bayesian perspective and use Stochastic Gradient Langevin Dynamics (SGLD) as an alternative to gradient descent for updating DIP weights. By introducing a regularization term to the objective in Eq. (<ref>) and adding noise to the weights after each update step, SGLD performs posterior sampling (assuming some other suitable conditions are met). The SGLD update for one iteration is given by
Δθ = α/2[ ∇_θlog p(y | θ) + ∇_θlog p(θ) ] + 𝐧,
𝐧 ∼𝒩(0, α),
where α is the step size, ∇_θlog p(y | θ) is the gradient of the measurement loss in Eq. (<ref>) (assuming Gaussian noise and with additional scale factors), and ∇_θlog p(θ) is the gradient of a regularization function such as l_2 weight decay. SGLD eliminates the need for early stopping during DIP optimization, which was used to prevent the network from overfitting to the measurements. In addition, SGLD has the benefit of allowing us to sample multiple reconstructions x̂_i = G_θ_i(z), θ_i ∼ p(θ|y) instead of producing a point estimate.
§.§ Enforcing Causality
S-parameters from linear time-invariant (LTI) systems are causal. For a time-domain signal h(t), causality is defined as the property that the signal cannot produce a response before being given an input, otherwise defined as
h(t) = 0, ∀ t < 0.
An equivalent frequency-domain definition is given by the Kramers-Kronig relations <cit.> which state that the real and imaginary portions of the signal H(ω) = ℱ(h(t)), where ℱ is the Fourier transform, are related by the Hilbert transform ℋ as
Im(H(ω)) = - ℋ(Re(H(ω))), ∀ω.
Enforcing causality is not as simple as predicting the imaginary part from the real part (or vice-versa) using Eq. (<ref>) due to errors in the simulated data caused by truncation and discretization. Truncation error results from the fact that the Hilbert transform in Eq. (<ref>) is calculated as an integral over frequency values from -∞ to ∞, while the simulated data available to us are band-limited. Discretization error arises from the quantization of the frequency response to fit in floating-point representation.
Torun et. al propose a causality enforcement layer (CEL) at the output of a neural network to mitigate truncation and discretization errors while enforcing causal outputs <cit.>. The CEL is parameter-free and applies a series of transformations to a real-only input to create the complex output. The input to the CEL is the real part of the reconstructed frequency response, extrapolated by a factor n_e to reduce truncation error. To create a symmetrical curve around DC, the initial extrapolated response is mirrored, where the positive frequency values are reflected onto the negative frequency spectrum. The Fourier transform of the mirrored spectrum is taken, and the non-DC frequencies are scaled by a factor of 2 before the signal is zero-padded to increase its length by a factor of n_k. This padding serves as an interpolation in the frequency spectrum and reduces discretization error. The Inverse Fourier transform of the scaled and padded signal is taken and the resulting signal is truncated to an interpolation factor of n_k (down from the total n_e · n_k factor). The final output is a complex signal whose real and imaginary portions are those of the truncated, transformed signal, scaled by n_k and -n_k respectively. The CEL is applied concurrently on all channels of the input, treating them as independent frequency-domain signals.
§ METHOD
We pose the problem of curve fitting sub-sampled S-parameter measurements as a linear inverse problem and propose to solve it using a novel method based on Deep Image Prior. We consider measurements y = A(x) + ℰ, where y ∈ℝ^r × f' and x ∈ℝ^r × f are flattened, real-valued versions of 𝐘 and 𝐒, A(·): ℝ^r × f→ℝ^r × f' is a sub-sampling operator that discards the frequency samples of its argument except for those present in the measurements, and ℰ∈ℝ^r × f' is additive white noise. The value r = 2 p^2 represents the flattening of each of the p^2 entries of the matrices 𝐒_ω into a single dimension, with the factor of 2 due to converting the real and imaginary components into 2 real values. This type of problem is known as inpainting for images and imputation for time series.
We consider the signals x and y as multi-variate or multi-channel frequency series. We parameterize a one-dimensional convolutional generator network G_θ(·): ℝ^r × f→ℝ^r × f and pose our initial problem as:
x̂ = G_θ^*(z), θ^* = argmin_θ‖ y - A(G_θ(z))‖_2^2,
where z ∈ℝ^r × f is a latent input vector. In the following sections, we first detail the architecture of our generator and how the problem domain informs our choices. Inspired by smoothing splines, we also introduce a regularization term to the problem in Eq. (<ref>) that encourages a form of smoothness in the network output. We finally describe our optimization algorithm, SGLD, along with our choice of z.
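To make the measurement model concrete, a minimal numpy sketch of the sub-sampling operator A, its adjoint, and the data-fitting term could look as follows (our own illustration; the released implementation may organize this differently).

import numpy as np

def subsample(x, keep_idx):
    """Forward operator A: keep only the frequency samples indexed by keep_idx.

    x        : (r, f) real-valued flattened S-parameters
    keep_idx : indices {y_0, ..., y_{f'-1}} of observed frequencies
    Returns an (r, f') array of measurements.
    """
    return x[:, keep_idx]

def adjoint(y, keep_idx, f):
    """Adjoint A^†: place measurements back on the dense grid, zeros elsewhere."""
    x = np.zeros((y.shape[0], f), dtype=y.dtype)
    x[:, keep_idx] = y
    return x

def data_loss(x_hat, y, keep_idx):
    """Squared-error data-fitting term ||y - A(x_hat)||_2^2 from the DIP objective."""
    return np.sum((y - subsample(x_hat, keep_idx)) ** 2)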
§.§ Architecture
The choice of architecture is crucial to the success of DIP, as the method relies on the implicit prior provided by the network structure. We must be careful to pick an architecture that is sufficiently expressive to model realistic S-parameters while also providing good enough regularization to reject noise. We use the base architecture from One-dimensional DIP <cit.>, which is based on the U-Net family of autoencoders <cit.>. We show a block diagram of the architecture in Figure <ref> and each network component in Figure <ref>.
The network is comprised of an encoder pathway with successively smaller frequency resolution in each layer, followed by a decoder pathway with increasing spatial frequency resolution. Encoder and decoder layers with the same frequency resolution are linked by skip connections, which concatenate the intermediate representations along the channel dimension. As reported by Ulyanov et. al for the case of images, the skip connections aid our network in generating multi-channel S-parameter signals with features from varying frequency scales <cit.>. We fix the size of the convolutions to 3, which, combined with upsampling and downsampling operations, we find to provide sufficient coupling between adjacent frequency values.
We use batch normalization, LeakyReLU nonlinearity, average pooling for downsampling, and linear interpolation for upsampling as in the original One-dimensional DIP implementation. We alter the final layer to add a causality enforcement layer at the output. The CEL is preceded by an upsampling layer that increases the frequency resolution of the second-to-last layer from f frequency points to 2f points to meet the extrapolation requirement of the CEL (and therefore we set the extrapolation factor n_e = 2). This is followed by a decoder layer that reduces the channel width of the intermediate representation from c_D[1] to r/2 to isolate only the real components of the inputs to the CEL. We set the interpolation factor n_k = 1, meaning that we do not interpolate the signal.
§.§ Smoothing Regularization
We propose a novel regularization method that penalizes non-natural frequency domain signals to augment the structural prior provided by DIP. We notice that the frequency response of broadband linear systems is composed of many nearly-parabolic regions. To investigate a solution, we first turn to classical methods for fitting curves with polynomials. For an unknown scalar-valued function f(x) with known input-output samples {x_i, y_i}_i=0^N-1, smoothing splines fit a model function h(·) as
h^* = argmin_h[ ∑_i=0^N-1 (y_i - h(x_i))^2 + λ∫_x ( d^2h(x)/dx^2)^2 dx ],
where λ is a non-negative smoothing hyperparameter <cit.>. The first term is a data-fitting loss similar to the loss in Eq. (<ref>). The second term is a smoothness penalty that penalizes the sum of the squares of the second derivative of the model h(·). This penalty encourages functions to have uniformly small coefficients on their second derivatives, preferring flat quadratic regions interspersed with linear or constant regions. The hyperparameter λ controls the intensity of the regularization, with λ = 0 admitting any function that fits the data and λ = ∞ allowing only functions with second derivative uniformly equal to zero.
We propose the following regularization penalty:
R(f(·), λ) = λ∑_i=0^p-1∑_j=0^p-1∫_ω|d^3f(ω)[i,j]/dω^3| dω.
In other words, we treat each of the p^2 S-parameters in a system as separate frequency-domain curves and penalize the absolute value of their third derivatives. By penalizing the third derivatives instead of second, we allow for parabolic regions with arbitrary magnitude. Also, by penalizing the sums of absolute values, we allow for sparse cubic regions linking the parabolic regions instead of cubic regions with uniformly small magnitude as would be the case if we use sums of squares. In practice, we use discrete sums and third-order differences to calculate the integral and third derivative in Eq. (<ref>), respectively.
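In an autograd framework the discrete penalty reduces to a sum of absolute third-order differences per channel; the following PyTorch-style transcription is our own sketch of the discretization, not code taken from the released implementation.

import torch

def third_derivative_penalty(x_hat, lam=0.1):
    """Discrete form of R(f, lambda): lam * sum over channels of |third-order differences|.

    x_hat : (r, f) tensor; each row is one real or imaginary S-parameter trace
    lam   : non-negative smoothing weight
    """
    d3 = x_hat[:, 3:] - 3 * x_hat[:, 2:-1] + 3 * x_hat[:, 1:-2] - x_hat[:, :-3]
    return lam * d3.abs().sum()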
§.§ Training
Our initial setup given in Eq. (<ref>) is a minimization problem, which as we detail in Section <ref> poses the risk of overfitting to the measurements. We therefore perform SGLD as our update algorithm. Our update rule is given by
Δθ = α/2[ ∇_θ‖ y - A(G_θ(z))‖_2^2 + ∇_θ R(G_θ(z), λ) ] + 𝐧,
𝐧 ∼𝒩(0, α),
where α is the step size and the first (data-fitting) term represents the likelihood p(y|θ) while the second (regularization) term is the prior p(θ). SGLD allows us to quantify the uncertainty of our method by calculating a sample-wise mean and variance for reconstructions x̂_1, …, x̂_n drawn i.i.d. from the posterior.
Following prior works, we do not optimize the input to the network along with the weights. In order to take advantage of the U-Net structure of the network, we initialize the latent as z = A^†(y) ∈ℝ^r × f, where A^† is the adjoint of the forward operator A. This transformation essentially fills the unknown frequency points in y with zeros, “lifting” it back into the same dimensions as x. Also following prior work, we add white Gaussian noise to z, where for iteration t the actual input to the network is given by z_t and
z_t = z + 𝐧_t, 𝐧_t ∼𝒩(0, σ^2_t ),
with σ^2_t decreasing geometrically as t increases.
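Putting the pieces together, one possible rendering of the SGLD loop with the geometric input-noise schedule is sketched below in PyTorch, reusing the third_derivative_penalty helper from the previous subsection; the generator G, the plain SGD step, and the weight-noise injection are simplifying assumptions rather than the exact released implementation.

import torch

def adjoint_embed(y, keep_idx, f):
    """A^†: place the measurements on the dense (r, f) grid, zeros elsewhere."""
    z = torch.zeros(y.shape[0], f, dtype=y.dtype)
    z[:, keep_idx] = y
    return z

def fit_dip_sgld(G, y, keep_idx, f, n_iters=20_000, lr=2e-4, lam=0.1,
                 var0=1e-2, var_final=1e-6, burn_in=15_000, thin=100):
    """Fit DIP weights with SGLD and collect posterior samples after burn-in."""
    z = adjoint_embed(y, keep_idx, f)              # fixed latent z = A^†(y)
    opt = torch.optim.SGD(G.parameters(), lr=lr)
    decay = (var_final / var0) ** (1.0 / n_iters)  # geometric variance schedule
    samples = []
    for t in range(n_iters):
        z_t = z + (var0 * decay ** t) ** 0.5 * torch.randn_like(z)
        x_hat = G(z_t)
        loss = ((y - x_hat[:, keep_idx]) ** 2).sum() + third_derivative_penalty(x_hat, lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                      # Langevin noise on the weights
            for w in G.parameters():
                w.add_((lr ** 0.5) * torch.randn_like(w))
        if t >= burn_in and t % thin == 0:
            samples.append(G(z).detach())          # one posterior sample x_hat_i
    return samples

The sample-wise mean and standard deviation of the returned reconstructions provide the point estimate and the uncertainty map discussed later.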
§ EXPERIMENTS
We experiment on simulated S-parameter data with very long electrical channels, which present a more challenging curve fitting problem than shorter channels due to their closely-packed resonances. We create a dataset of 15 simulated frequency curves representing EM solver-generated S-parameters for electrically long interconnects, with up to 16 ports and 9,999 query frequencies. We present the specifications of the examples in our dataset in Table <ref>.
As baselines, we compare to the public Scikit-RF implementation of VF <cit.> as well as a proprietary VF implementation from Siemens, used in current industry workflows <cit.>. We compare to the public implementation for curve fitting given a fixed, pre-determined number of sub-sampled frequencies. The proprietary VF performs curve fitting using active learning by iteratively fitting available samples, then requesting more frequency samples from the simulator, repeating the process until a stopping criterion is reached. We compare DIP and the public VF to the proprietary VF by selecting an equally-spaced set of measurements with the same sub-sampling ratio as the final set of measurements used by proprietary VF. Although the exact measurements used by DIP and public VF may differ from those used by proprietary VF, keeping the sub-sampling ratio the same means that we do not unfairly advantage or disadvantage any of the methods.
For all of our experiments, we perform SGLD for 20,000 iterations using a learning rate of α = 2×10^-4 and regularization hyperparameter λ = 0.1. For the additive input noise 𝐧_t, we set the starting variance σ_0^2 = 1×10^-2 and the final variance σ_20,000^2 = 1×10^-6. We set the number of convolution filters in all layers of the network, except for the decoder in the output layer, equal to round(25√(r)), where round(·) denotes a rounding operation. In addition, we set the network depth N = ceil(log_2 f) - 4, where ceil(·) rounds up to the next highest integer. We set the number of filters to r/2 in the final decoder in the output layer.
§.§ Quantitative Results
In the first experiment, we create equally-spaced measurements of each of the examples in our dataset using between 5 and 15% of the available samples. We perform curve-fitting on the measurements using our method and the public implementation of VF, and report the peak signal-to-noise ratio (PSNR) with respect to the fully-sampled reference example. We show a plot of the results in Figure <ref>. Our method outperforms public VF in terms of mean PSNR for every sub-sampling rate we test, achieving an improvement of between 12 and 15 dB over VF depending on the rate. The mean PSNR achieved by our method for 5% of samples observed is higher than that of VF for 15% by over 6 dB, demonstrating that our method can exceed the reconstruction ability of publicly-available curve fitting tools using less than 1/3 of the samples. Our method also displays less variance than VF in its reconstruction PSNR between examples.
Next, we compare our method and the public VF to proprietary VF using the method we describe in Section <ref>. We present our method and public VF with a different number of equally-spaced measurements for each example, depending on the number of measurements used by proprietary VF at the end of its active learning. We present the PSNR results for each method in Table <ref>. Our method achieves the best PSNR in 4 out of 15 cases, and second best in 7 of the 11 cases where proprietary VF performs better. These results demonstrate that our method can perform similarly to, and in some difficult cases, better than handcrafted curve-fitting priors that have been developed and improved for decades.
§.§ Qualitative Results
We display visual curve fitting results from each method for example 0 in Figure <ref>. Example 0 presents a particularly challenging curve-fitting problem for VF, having a rapidly-changing magnitude spectrum across all frequency bands. As indicated in Table <ref>, however, our method is able to achieve good results on this example, outperforming the next best method by over 12 dB. The reconstructed curves in Figure <ref> tell a similar story. While proprietary VF captures the general shape of the magnitude response, it often produces artifacts such as the extraneous lobes at the high frequencies in the bottom left plot of Figure <ref>. Our method, on the other hand, tracks the shape of the S-parameters well without producing noticeable artifacts. Public VF displays poor performance on this example.
As a simpler, 2-port case we plot the visual results for example 2 in Figure <ref>. Proprietary VF outperforms our method on this example in terms of PSNR, but only by about 2 dB. This result is impressive, given that proprietary VF is able to choose its samples using active learning and our method is given pre-selected measurements that may not be optimal. As seen in Figure <ref>, our method achieves similar qualitative results to proprietary VF.
§.§ Uncertainty Quantification
One of the benefits of SGLD is that we can draw multiple samples from the posterior and quantify the uncertainty using the sample-wise standard deviation. In this experiment, we draw multiple posterior samples for curve fitting on example 4 with 5% of the frequency samples observed as measurements. Following the authors of <cit.>, we use a “burn-in” period for SGLD, where we only start tracking the sampled SGLD outputs after 15,000 iterations, and track the DIP output every 100 iterations during the post burn-in period for a total of 50 SGLD samples. We plot the mean and standard deviation outputs from our method, along with the reference fully-sampled image and the magnitude of the error in Figure <ref>.
The standard deviation plot shows spikes around 0-5 GHz, 20-25 GHz, and 35-40 GHz, all of which are reflected by corresponding spikes in the error plot at the same frequency ranges. While the magnitude of the standard deviation is around 10× smaller than that of the true error, the modes in the sample-wise standard deviation predict the modes of the absolute error reasonably well. Despite never being trained, our method is able to provide a useful estimate of uncertainty in its predictions.
§.§ Ablation Study
Finally, we test the incremental benefits of our design choices by performing an ablation study. We start with a base DIP model that does not use a CEL at the output and simply optimizes the objective in Eq. (<ref>) using gradient descent with a fixed input z. One at a time, we re-introduce the regularization term from Eq. (<ref>), the additive input noise schedule 𝐧_t, SGLD in place of gradient descent, and the CEL, in that order. We perform curve fitting on example 0 with varying sub-sampling rates using each of these incremental models and plot the results in terms of PSNR in Figure <ref>.
Adding the regularization term significantly improves results, especially for sub-sampling rates higher than 5%. The additions of input noise and SGLD offer small improvements, while further adding the CEL on top offers another large leap in performance. Overall, our design choices, and particularly our novel regularization, demonstrate a definitive improvement over naïve application of DIP.
§ CONCLUSION
We presented a novel method to extrapolate a detailed frequency response from a limited sampling of the required data using the framework of Deep Image Prior. Our method significantly outperforms a publicly available Vector Fitting implementation, and shows promise versus a proprietary commercial implementation.
Our method uses a novel smoothing regularization term that penalizes non-natural frequency responses and also enforces causality, both incorporated into a DIP optimization framework.
An important future direction for our work is to incorporate active learning into our DIP framework. Active learning methods select which frequency response points to observe in a sequential manner.
This would obviate the burden of selecting equally-spaced measurements a priori to fit with DIP by allowing the active learning algorithm to query the simulator in an online fashion. Since SGLD allows us to draw multiple samples from the posterior distribution, we could use the sample-wise variance at each predicted frequency point to inform our active measurement selection process.
|
http://arxiv.org/abs/2306.02414v1
|
20230604172946
|
Functional Directed Acyclical Graphs for Scattering Amplitudes in Perturbation Theory
|
[
"Thorsten Ohl"
] |
hep-ph
|
[
"hep-ph"
] |
Functional Directed Acyclical Graphs for
Scattering Amplitudes in Perturbation Theory
[email protected]
University of Würzburg,
Institute of Theoretical Physics and Astrophysics,
Emil-Hilb-Weg 22,
97074
Würzburg,
Germany
I describe a mathematical framework for the efficient processing of
the very large sets of Feynman diagrams contributing to the
scattering of many particles. I reexpress the
established numerical methods for the recursive construction of
scattering elements as operations on compact abstract data
types. This allows efficient perturbative
computations in arbitrary models, as long as they can be described
by an effective, not necessarily local, Lagrangian.
Thorsten Ohl 0000-0002-7526-2975
July 31, 2023
====================================
§ INTRODUCTION
The efficient and reliable computation of scattering amplitudes for many
particles in a large class of models,
both on tree level and including higher order corrections,
is a central element of all efforts for analyzing the physics at LHC
and possible future colliders.
Since the first release of Madgraph <cit.> about 30
years ago, there has been tremendous progress in the capabilities of the
tools that can compute such scattering amplitudes numerically. Replacing sums of
Feynman diagrams by recursive numerical evaluation opened the
realm of many-legged amplitudes, including loop corrections. In
fact, the treatment of QCD corrections has matured so much that tools
like Madgraph5 <cit.> are now employed regularly by
end users for LHC physics. Electroweak radiative corrections are
starting to become available in user friendly tools and recursive
techniques are being applied to loop calculations. At the same time,
de facto standard formats like
UFO <cit.> allow the specification of
almost any physics model that might be of interest in the near and not
so near future.
In this paper, I will elaborate a common mathematical structure behind
the recursive calculations. The focus is not on the immediate
numerical evaluation, but on the elucidation of an algebraic
structure that will later be translated into numerical code. This
simplifies supporting more general interactions, because purely numerical
codes have to make assumptions that can turn out to be hard to relax
later. In addition, algebraic expressions can be used to generate more
comprehensive tests of models and implementations. They also simplify
the automatic generation of the additional expressions needed for
subtractions schemes <cit.>.
Finally, at a time when functional programming and strong type systems
are moving more and more from academia into the mainstream, it is a
useful exercise to reconstruct the mathematical structures in a way
that can easily be translated into efficient programs making use of
these paradigms.
The mathematical structures presented here have not been developed
in a vacuum, but are a distillation of commonalities observed in the
concrete data structures implemented for the matrix element generator
O'Mega <cit.> that is part of the
Whizard event
generator <cit.>.
Nothing in the following discussion will be specific to
leading order, tree level
matrix elements. Exactly the same structures appear when implementing loops
using additional
legs <cit.>
or when adding higher order contributions as terms in an effective
action using a skeleton expansion. The translation of the algebraic
expressions into robust numeric code calling sophisticated external
libraries for loop integrals <cit.> is much more
challenging, of course. However, also here the algebraic step offers
more options than a purely numerical approach.
The outline of the paper is as follows:
in section <ref>, I briefly review the recursive
techniques used for computing scattering amplitudes for processes with many
external particles. This section also serves the purpose of establishing
the terminology and notation used in the remaining sections. In
section <ref>, I introduce Directed Acyclical Graphs (DAGs),
bundles and their relationships.
In section <ref>, I present an algorithm
for efficiently constructing the DAGs representing scattering
amplitudes. In section <ref>, I briefly describe how to
generate efficient numerical code from DAGs constructed according to
the algorithm presented in the previous sections.
In appendix <ref>, I sketch the implementation of DAGs and
bundles in O'Mega <cit.>.
§ SCATTERING AMPLITUDES
It has long been recognized that the textbook representation of
scattering amplitudes as a sum of Feynman diagrams becomes very inefficient
as the number of external particles rises. Indeed, even though
general estimates are hard to derive for realistic models with
conserved quantum numbers, analytic formulae for toy models and
explicit calculations for specific processes confirm the expectation
that the number of tree level Feynman diagrams grows factorially with
the number of external particles. If Feynman diagrams with loops are
represented by tree
diagrams <cit.>,
each loop adds two more external particles.
In addition to requiring prohibitive computational resources, the
destructive interferences inherent in gauge theories lead to a loss of
precision if too many terms are added. Starting with 2→6 processes
at tree level, the need for a more efficient representation became
evident.
In order to simplify the notation in this section,
I will cross all scattering amplitudes from n_in→
n_out to n=n_in+n_out→0. Except for
the momentum, I will also suppress all quantum numbers in this
introductory section. The treatment of general quantum numbers (spin,
flavor, color, etc.) will be the focus of the following sections.
§.§ Recursion
The appropriate building blocks to replace Feynman diagrams turned out
to be k-particle matrix elements of fields
ϕ({i_1,i_2,…,i_k}) =
⟨0|Φ|p_i_1,p_i_2,…,p_i_k⟩
which will be referred to as wavefunctions or
of their associated currents
j({i_1,i_2,…,i_k}) =
⟨0|J|p_i_1,p_i_2,…,p_i_k⟩,
as pioneered by Berends and Giele <cit.>.
The set of indices I={i_1,i_2,…,i_k} is a subset of the indices
enumerating the external particles or open loop momenta.
Since I∈ 2^{1,2,…,n}, the number of possible different
wavefunctions or currents grows only as an exponential 2^n instead of a
factorial n!∼ n^n. Furthermore, both can be computed recursively
ϕ(I) = ∑_I_1∪ I_2=I P_I V_I,I_1,I_2ϕ(I_1)ϕ(I_2)
j(I) = ∑_I_1∪ I_2=I V_I,I_1,I_2 P_I_1j(I_1)P_I_2j(I_2) ,
without expanding them into Feynman diagrams, which would
reintroduce factorial growth. In (<ref>), P_I denotes
a propagator and V_I,I_1,I_2 a vertex factor for three legs. The
generalization to models containing vertices with more than three legs is obvious.
Note that ϕ is just j multiplied by a momentum space
propagator. Thus the choice between the two is only a matter of
convenience. The rest of the paper will mostly refer to
wavefunctions (<ref>), but
all constructions can be repeated trivially for the currents (<ref>).
§.§ Topologies
There are many ways in which a scattering amplitude ℳ can
be constructed from (<ref>). The first approach
observes that the j
in (<ref>) is already amputated. It therefore suffices to set
the momentum
p_1 = - ∑_i=2^n p_i
on the mass shell of particle 1 to obtain the scattering amplitude
using the LSZ prescription
ℳ({1,2,…,n}) = j({2,3,…,n}) .
This is implemented numerically in
Helac <cit.> and
Recola <cit.>.
The second approach
glues the ϕ from (<ref>) at vertices to
obtain the scattering amplitude in the form
ℳ({1,2,…,n}) =
∑_I_1∪ I_2∪ I_3={1,…,n} K_I_1,I_2,I_3ϕ(I_1)ϕ(I_2)ϕ(I_3)
with obvious generalizations to models containing vertices with more
than three legs. The partitions (I_1, I_2, I_3) of the external
particles must be chosen carefully to avoid double
counting <cit.>
and the keystones K correspond to vertex factors.
This approach was pioneered for numerical calculations in the
standard model by
Alpha <cit.>
and is implemented as an algebraic algorithm for arbitrary models in
O'Mega <cit.>.
The third approach combines
the DAGs at propagators
ℳ({1,2,…,n}) =
∑_I∪ I'={1,…,n} j(I) P_I,I' j(I')
instead of vertices as in (<ref>). It was pioneered by
Comix <cit.> and
OpenLoops <cit.>.
Algebraically, all expressions (<ref>) will give
the same final result, but the number of nodes that need to be evaluated
can vary slightly and numerical results will differ due to
the different order of evaluation. O'Mega <cit.>
allows to compute the amplitude both as (<ref>) and
as (<ref>) and confirms these expectations.
While it is impossible to give general estimates for the number of
wavefunctions that need to be evaluated in realistic models, one can count
them for some examples using O'Mega. In the standard model, it appears
that (<ref>) requires some 10% fewer evaluations
than (<ref>) in an optimal implementation.
One advantage of (<ref>) and (<ref>) is that at
most n/2 of the external momenta appear in the ϕ compared
to n-1 for (<ref>). Therefore fewer steps with
accumulating floating point errors are required in
the recursive evaluation of ϕ(I).
While this could in principle be a significant advantage in
amplitudes with strong gauge cancellations, the difference appears to
be small in practice.
The algorithm adding quantum numbers to (<ref>),
(<ref>) and (<ref>) described in the following
sections is equally applicable for all three variants
in (<ref>).
§.§ Evaluation
In the case of a fixed physics model with a moderate number of fields
and couplings, such as the standard model, the recursive
evaluation (<ref>) can be expressed as an iteration of
matrix multiplications <cit.>.
This approach has the advantage that all scattering amplitudes in the
supported model can be computed using the same executable, without the
need for recompilation.
However, extending this approach to more complicated models,
in particular to models that can be specified by endusers
in formats like UFO <cit.>, is far from
trivial <cit.>. Instead, it is beneficial to represent
the recursion relations (<ref>) abstractly by a data
structure from which dedicated code can be generated and compiled subsequently,
following the pioneering treatment of Feynman diagrams in
Madgraph <cit.>. This approach has been
implemented for the recursive evaluation
in <cit.>.
This motivates the search for a data structure that represents the
recursion relations (<ref>) concisely and can be
constructed efficiently from the Feynman rules of a model. The
obvious candidate is a finite Directed Acyclical Graph (DAG), that
corresponds to the evaluation of an arithmetical expression in which
common subexpressions are evaluated only once and later recalled from
memory when needed again.
Additional benefits of algebraic manipulations are that it is easier
to prune the computation of wavefunctions that are not needed in the
final result, that one can target special hardware or dedicated
virtual machines <cit.> that avoid the need for
compilation. Form factors can be restricted to lightlike momenta at
compile time <cit.>.
Finally, one can optionally instrument the code with
numerical checks of Ward and Slavnov-Taylor identities for gauge boson
wavefunctions, in order to test matrix element
generator, numerical libraries and model descriptions.
§ DAGS AND BUNDLES
In this section, I will focus on universal mathematical
constructions, not practical algorithms. The discussion of
the latter will follow in section <ref>.
Given a set N of nodes, a set E of edges and a
set C(N) ⊆ 2^N of children
which is typically the set of subsets of nodes with a limited number
of elements,
any map from N to the powerset of E × C(N)
Δ : N → 2^E × C(N)
defines a Directed Graph 𝐆=(N,E,Δ) in the sense
described below.
The function Δ can be specified completely by the
set {(n,Δ(n))|n∈ N} of ordered pairs. This equivalence
will be used below to define transformations on DAGs as set
theoretical operations that can be implemented efficiently in
computer programs. I will often
employ the more intuitive notation {n↦Δ(n)|n∈ N} or
the abbreviated form {δ_n|n∈ N}.
In order to avoid excessive nested superscripts, I will sometimes use the
notation A→ B for set B^A of all functions from the set A to the
set B.
With wavefunctions as nodes and vertex factors as edges, this
definition captures the recursion relations (<ref>)
exactly. Note that the map Δ (<ref>) is well defined
iff the combination of momenta and other quantum numbers identifies
the wavefunctions or currents uniquely.
There are cases where physical quantum numbers are not
sufficient to distinguish wavefunctions. For example, if the
scattering amplitude is to be expanded in the powers of some coupling
constants, these powers can contribute at different levels of the
recursive expansion. Therefore a wavefunction can appear more than
once with the same momentum and physical quantum numbers. This forces
us to add the powers of these coupling constants as unphysical labels
that will be combined in the final step (<ref>). Since such
labels are later required anyway to disambiguate
variable names in the generated numerical code, this adds no
additional burden. Such a counting of coupling constants is of course
crucial for adding a consistent number of counterterms in calculations
involving loops and when adding precomputed loops using a skeleton
expansion or effective actions.
The nodes in the preimage of ∅ under Δ
L = Δ^-1(∅)
= {n ∈ N | Δ(n)=∅}
are called leaf nodes and correspond to the external states in
scattering amplitudes.
Since the elements of C are sets of
elements of N, we can derive from Δ two mutually recursive
expansion functions
Δ̂: E× C(N) → E× C(2^E× C(N))
(e,{n_i|i∈ I}) ↦ (e,{Δ^*(n_i)|i∈ I})
element-by-element and
Δ^*(n) = {n} for Δ(n)=∅
Δ̂(Δ(n)) for Δ(n)≠∅ .
If 𝐃=(N,E,Δ) represents an acyclical graph, i.e. a DAG,
with a finite number of nodes |N|,
the functions Δ^* and Δ̂ will reach a fixed point
after a finite number of steps. This fixed point
consists exclusively of mutually nested sets of leaves.
If the image of Δ consists only of singleton sets
and ∅, the fixed point reached from any starting node n
corresponds to a tree diagram. Otherwise it corresponds to a forest
of tree diagrams, if the elements of the sets are distributed recursively.
As an illustration, consider the DAG 𝐃 with the sets
N = {1,2,…,8}
E = ∅
C = {{n,n'} | n≠ n'∈ N}
and the map
Δ = { 1↦∅, 2↦∅,
3↦∅, 4↦∅,
5↦{{1,2}}, 6↦{{5,3}}, 7↦{{5,4}},
8↦{{6,4}, {7,3}}} ,
where I have not spelled out the unlabeled edges.
A quick calculation gives
Δ^*(8) =
{{{{{{1,2}}, 3}}, 4},
{{{{{1,2}}, 4}}, 3}} .
This corresponds to the forest consisting of the trees
{{{1,2},3},4}
{{{1,2},4},3} .
This DAG encodes a stripped down version of the Feynman diagrams for the
process e^+e^-qq̅→ g,
that ignores both the details of the couplings and
the contributions of Z and Higgs bosons. Note that the common
subdiagram e^+e^-→γ appears only once
in the DAG as 5↦{{1,2}}, but twice in the forest.
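The example translates directly into code. In the Python sketch below (a loose transliteration in which nested tuples stand in for nested sets and the unlabeled edges are dropped), the map Δ is a dictionary and a recursive expansion in the spirit of Δ^* recovers the forest.

delta = {
    1: [], 2: [], 3: [], 4: [],   # leaf nodes: external particles
    5: [(1, 2)],
    6: [(5, 3)],
    7: [(5, 4)],
    8: [(6, 4), (7, 3)],          # two alternatives: the two trees of the forest
}

def expand(n):
    """Delta*: a leaf expands to itself, an internal node to the
    collection of its expanded child tuples."""
    if not delta[n]:
        return n
    return tuple(tuple(expand(c) for c in children) for children in delta[n])

# expand(8) yields one nested tuple per tree, i.e. the forest
# {{{1,2},3},4} and {{{1,2},4},3}, while the shared subdiagram 5 -> {1,2}
# is stored only once in `delta`.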
A general directed graph can contain cycles and
the functions Δ^* and Δ̂ will not reach a fixed point
even if |N|<∞. As described in section <ref>, it
will however always
be possible to equip N with a natural order
so that the application of Δ acts strictly decreasing with
respect to this order.
There can obviously be no cycles and the graph is guaranteed to
be a DAG in this case.
If the same node n appears many times in the children,
a DAG provides a very efficient encoding of large
sets of graphs. The storage and computing time required by typical
sets of tree diagrams
grows factorially with the number of leaves |L|. In contrast, the
space and time required for implementing the DAG scales linearly with |N|,
which only grows exponentially in |L|.
Using persistent functional data
structures <cit.>
instead of mutable arrays to
implement the function Δ simplifies the algorithm described
below significantly. The additional space and time
requirements replace |N| by |N|ln|N| and turn out not to be
important for large |N|.
§.§ Constructing DAGs
Using DAGs as a compact representation has only a marginal benefit if
their construction requires the generation of all tree diagrams in
intermediate steps or if the applications require a full expansion.
Fortunately, the sum of Feynman diagrams encoded in the DAG can be
evaluated either using the DAG directly or by generating
a dedicated numerical code that
evaluates each node n∈ N only once. As explained in
section <ref>, it turns out that the DAGs
representing perturbative scattering amplitudes can be constructed
without requiring the construction of the corresponding
forest.
For this purpose, I introduce the empty DAG
ϵ = (∅, ∅, ∅)
where Δ=∅ is the function with empty domain and
codomain. I also define a function
ω : (N → E × 2^C(N)) ×𝒟 →𝒟
(n↦ (e, c),x) ↦ω_n↦ (e, c)(𝐃)
with the function ω_n↦ (e, c) that
adds a node n together with the mapping n↦(e,c)
ω_n↦ (e, c) (N, E, Δ) =
( N ∪{n}, E ∪ e, Δ∪{n↦(e,c)}) ,
where e and (e,c) are shorthands for the sets {e_i|i∈ I}
and {(e_i,c_i)|i∈ I} with |I| elements. In particular, they
may be empty to allow inserting a leaf node. In order to avoid
ambiguities in the definition of ω, I will
require that n∉N and n'_i∈ N
in ω_n↦{(e,n'_i)|i∈ I}.
With these definitions, the DAG in (<ref>) is
ω_8↦{{6,4},{7,3}}ω_7↦{{5,4}}ω_6↦{{5,3}}
ω_5↦{{1,2}}ω_4↦∅ω_3↦∅ω_2↦∅ω_1↦∅ϵ ,
where the function applications associate to the right, of course. It
is obvious that any finite DAG can be
constructed by repeated applications of ω.
For the finite DAGs that are the subject of this paper,
the function ω can be implemented easily in
programming languages that have efficient support for persistent sets
and maps (also known as dictionaries) that can grow without a lot of
reallocation. Functional programming languages with
garbage collection make such implementations particularly
straightforward.
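A dictionary-based Python sketch of ω and of the construction (<ref>), written in the same spirit although without the structural guarantees of a typed functional implementation, could look as follows; all names are illustrative.

EMPTY_DAG = {"nodes": frozenset(), "edges": frozenset(), "delta": {}}

def omega(node, targets, dag):
    """Add `node` with the mapping node -> targets; targets is a set of
    (edge, children) pairs and is empty for a leaf node.  A fresh DAG is
    returned, emulating a persistent update by copying."""
    targets = frozenset(targets)
    return {
        "nodes": dag["nodes"] | {node},
        "edges": dag["edges"] | {e for e, _ in targets},
        "delta": {**dag["delta"], node: targets},
    }

dag = EMPTY_DAG
for leaf in (1, 2, 3, 4):
    dag = omega(leaf, [], dag)
dag = omega(5, [("v", (1, 2))], dag)   # "v" stands for the (here unlabeled) edge
dag = omega(6, [("v", (5, 3))], dag)
dag = omega(7, [("v", (5, 4))], dag)
dag = omega(8, [("v", (6, 4)), ("v", (7, 3))], dag)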
The domain and codomain of functions like ω (<ref>)
are highly structured sets and static type systems allow to verify
already at compile time
that only matching functions are being composed. Beyond
preventing errors, a strict type discipline helps to uncover
mathematical structures, such as the ones described in this section.
This paper is based on the implementation
in the matrix element generator
O'Mega <cit.> using
OCaml <cit.>, as
described in appendix <ref>.
§.§ Lattices of DAGs
For our purposes, DAGs representing scattering amplitudes for the same
external states, categories of DAGs that share the same leaf nodes
𝒟_L =
{𝐃 = (N, E, Δ) | Δ^-1(∅) = L}
are the most interesting.
Since we describe a DAG as a tuple of sets, there is a natural notion of
inclusion for pairs of DAGs in 𝒟_L
𝐃' = (N',E',Δ') ⊆𝐃 = (N,E,Δ) ⇔
N' ⊆ N E' ⊆ E
( ∀ n ∈ N': Δ'(n) ⊆Δ(n) ) .
It is obvious that this notion of inclusion corresponds to the inclusion
of the sets of tree diagrams encoded by the DAGs.
In the same fashion, we can define union and intersection for the
DAGs 𝐃_i = (N_i,E_i,Δ_i)
𝐃_1 ∪𝐃_2 = (N_1 ∪ N_2, E_1 ∪ E_2, Δ_1 ∪Δ_2)
𝐃_1 ∩𝐃_2 = (N_1 ∩ N_2, E_1 ∩ E_2, Δ_1 ∩Δ_2)
where
Δ_1 ∪Δ_2 =
{n ↦Δ_1(n) ∪Δ_2(n) | n∈ N_1∩ N_2}
∪{n ↦Δ_1(n) | n∈ N_1∖ N_2}
∪{n ↦Δ_2(n) | n∈ N_2∖ N_1}
and in
Δ_1 ∩Δ_2 =
{ n ↦Δ_1(n) ∩Δ_2(n) | n∈ N_1∩ N_2
( Δ_1(n) ∩Δ_2(n) ≠∅ n ∈ L ) }
I am careful to avoid adding new leaf nodes to the intersection.
From these definitions, it is obvious that ⊆
turns 𝒟_L into a partially ordered set and ∪
and ∩ turn it into a lattice.
From this point of view,
𝐃_1∪𝐃_2 is the least common upper bound
of 𝐃_1 and 𝐃_2,
while 𝐃_1∩𝐃_2 is their greatest common lower
bound. Finally 𝒟_L is bounded from below, with
⊥_L = (L, E, {n↦∅ | n∈ L})
as the bottom element.
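On the same dictionary representation, the lattice operations are plain set manipulations. The sketch below is purely illustrative; the leaf set is passed explicitly so that the intersection does not create new leaf nodes.

def dag_union(d1, d2):
    """Least upper bound: union of nodes, edges and of the Delta value sets."""
    nodes = d1["nodes"] | d2["nodes"]
    delta = {n: d1["delta"].get(n, frozenset()) | d2["delta"].get(n, frozenset())
             for n in nodes}
    return {"nodes": nodes, "edges": d1["edges"] | d2["edges"], "delta": delta}

def dag_intersection(d1, d2, leaves):
    """Greatest lower bound: nodes whose Delta values have an empty
    intersection are dropped unless they are leaves."""
    delta = {}
    for n in d1["nodes"] & d2["nodes"]:
        common = d1["delta"][n] & d2["delta"][n]
        if common or n in leaves:
            delta[n] = common
    return {"nodes": frozenset(delta), "edges": d1["edges"] & d2["edges"], "delta": delta}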
§.§ Mapping and Folding DAGs
The most important functions for manipulating DAGs and extracting the
information encoded by them are
folds that perform a nested application of a suitable function
for all nodes to a starting value x
Φ_f ((N, E, Δ), x) = f_δ_|N|⋯ f_δ_2f_δ_1x ,
where the elements
of Δ={δ_n_1,δ_n_2,…,δ_n_|N|} are
arranged in the partial order of the nodes that guarantees acyclicity of the DAG.
The only constraint on the function
f : (N → E × 2^C(N)) × X → X
(δ,x) ↦ f_δ(x)
is that the domain and codomain of f_δ:X→ X must be identical.
The computational cost scales with
the size of the DAG and not with the size of the forest of tree
diagrams described by it.
Used with the constructor ω (<ref>)
on the empty DAG, the fold
performs a complete copy of any DAG 𝐃
Φ_ω(𝐃,ϵ) = 𝐃 .
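Since the children of a node always precede it in the partial order, the fold is a single pass over a topologically sorted Δ. A Python sketch using the standard-library sorter (an illustration, not the O'Mega code) reads:

from graphlib import TopologicalSorter

def fold_dag(f, dag, x):
    """Apply f to every entry (n, Delta(n)) of the DAG, children first,
    threading the accumulator x through all applications."""
    deps = {n: {c for _, children in targets for c in children}
            for n, targets in dag["delta"].items()}
    for n in TopologicalSorter(deps).static_order():
        x = f((n, dag["delta"][n]), x)
    return x

# Folding the constructor omega over a DAG built with omega reproduces it:
#   fold_dag(lambda d, acc: omega(d[0], d[1], acc), dag, EMPTY_DAG)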
Precomposing the first argument of ω in (<ref>) with
a function
f : (N→ 2^E× C) → (N→ 2^E× C)
using the notation
(ω∘ f)_δ = ω_f(δ)
maps a DAG 𝐃 to a new DAG 𝐃'
Φ_ω∘ f(𝐃,ϵ)
= 𝐃'
which can encode a different set of tree graphs.
The precomposition (<ref>) can naturally be
extended to functions mapping nodes to sets of nodes
f : (N→ 2^E× C) → 2^(N→ 2^E× C)
δ ↦{f_1(δ),…,f_k(δ)}
as
ω_f(δ) = ω_f_k(δ)…ω_f_1(δ)
with the identity
ω_∅𝐃 = 𝐃
iff the result of f is the empty set ∅.
Finally, I define a function
H : (S, 𝐃) ↦𝐃' ⊆𝐃
that takes a set S⊆ N of nodes and a DAG and returns the minimal DAG that
contains all the nodes in the set such that the mutually recursive
evaluation of the functions Δ^*
and Δ̂ from (<ref>) is well defined for the
nodes in this set. Intuitively, this corresponds to following all
chains of arrows in n→Δ(n)|n∈ N from 𝐃 that start in S.
§.§ Bundles
I am interested in maps between DAGs that respect certain structures. In
order to describe these concisely, I borrow the notion of bundles
from topology and differential geometry.
A bundle 𝐁 = (X, B, π) is a triple consisting of
a set X, called the total set, a set B, called the base, and a
projection π:X→ B.
The preimages π^-1(b)⊆ X are
called fibers. The notation π^-1:B→ 2^X must of course
not be misunderstood as the inverse of π. The fibers are pairwise
disjoint and their union
X = ⋃_b∈ Bπ^-1(b)
reproduces the set X. A section is a map s:B→ X for
which π∘ s: B→ B is the identity. It corresponds to
choosing one and only one element from each fiber.
This definition generalizes the trivial bundle
𝐁_trivial = (B× F, B, π)
with
π(b, x) = b
π^-1(b) = (b, F)
where all fibers are trivially isomorphic to F and a section is the
parameterized graph s:B→ B× F of a function B→ F.
Bundles formalize equivalence relations on the set X,
with the base B as the set of all equivalence classes and π the
canonical projection of an element of X to its equivalence class.
The composition π^-1∘π:X→2^X maps each element to the
set of the members of its equivalence class. Sections
correspond to choosing one element from each equivalence class. An
illustrative example is equivalence of nodes up to color quantum
numbers, where π corresponds to ignoring color. Flavor,
coupling constant and loop expansion order can be treated in the same
way.
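For finite sets, the bundle bookkeeping amounts to keeping the preimage map up to date while elements are added. A small Python sketch of this idea (illustrative only, not the O'Mega interface):

from collections import defaultdict
from itertools import product

class Bundle:
    """A finite bundle (X, B, pi); base and fibers are kept up to date
    as elements are added, so the bundle can grow together with a DAG."""

    def __init__(self, pi):
        self.pi = pi                    # projection, e.g. "forget the color quantum numbers"
        self.base = set()
        self.fibers = defaultdict(set)  # b -> preimage pi^{-1}(b)

    def add(self, x):
        b = self.pi(x)
        self.base.add(b)
        self.fibers[b].add(x)

    def sections(self):
        """All sections: one element chosen from each fiber."""
        bases = list(self.base)
        for choice in product(*(self.fibers[b] for b in bases)):
            yield dict(zip(bases, choice))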
Bundles can be arranged in a sequence
B_0 ⟵^π_1 B_1 ⟵^π_2 B_2 ⟵^π_3 ⋯ .
However, since the preimage π_i^-1 is not the inverse of the
projection π_i, the preimage of a composition of projections is not the
composition of the individual preimages, but
(π_i∘π_i+1)^-1(b) = ∪_x∈π_i^-1(b)π_i+1^-1(x)
instead.
As in the case of DAGs, such structures and the operations on them
can be implemented for finite sets X
straightforwardly in functional programming languages with static type
systems and garbage collection (cf. appendix <ref>).
In particular, it is efficient to add
elements to the set X and update the base B and maps π
and π^-1 immediately. This allows to grow a bundle
simultaneously while
building a new DAG in order to maintain the relationships to be introduced
in section <ref>.
§.§ Projections and Preimages of DAGs
Given a DAG 𝐃=(N,E,Δ), where the set of
nodes N is also the total set in a bundle 𝐁=(N,B,π), it is
natural to ask if there is a canonical
DAG 𝐃'=(B,E',Δ')
with the base of 𝐁 as its set of nodes.
First, we observe that every section s of 𝐁 and
map f:E→ E' defines a projected
DAG 𝐃_s,f=(B,E',Δ_s,f) with
Δ_s,f : B → 2^E' × C(B)
b ↦π̂_f(Δ(s(b)))
where π̂_f is the distribution of π over the nodes together
with the application of f to the edges
π̂_f(e,n_i|i∈ I) = (f(e),π(n_i)|i∈ I) .
The formula (<ref>) has to be augmented by the
prescription that a b for which s(b) is a leaf node
in 𝐃 and therefore Δ_s,f(b)=∅
is not added as a leaf node to 𝐃_s,f, similar to
the definition (<ref>) of the intersection of two DAGs.
In most
cases f:E→ E' will be a simple projection that in our applications
will be determined
straightforwardly by the two sets of Feynman rules governing the
construction of the two DAGs. Therefore we can
write 𝐃_s instead of the more
explicit 𝐃_s,f.
The dependence of this projection on the section s is not
satisfactory. However, the DAG
Π(𝐃) = ⋃_s∈ S(𝐁)𝐃_s ,
where S(𝐁) denotes the set of all sections of the bundle 𝐁,
is well defined and will be shown to suit our needs.
Observe that the union is
the correct universal construction for our applications, because the
additional quantum numbers in N lead to more selection rules.
These selection rules are the reason for the dependency of 𝐃_s on s.
The DAG corresponding
to the more basic set of nodes B should therefore be the combination of all
possibilities.
As an example consider the scattering of two scalars without and with
flavor. Without flavor, there will be s-, t- and u-channel
diagrams. With a conserved flavor, only one of them will remain.
Note however, that this construction does not guarantee
that the set of nodes of the DAG Π(𝐃) is actually the
full base B
of the bundle 𝐁. We must therefore demand in addition compatibility of
DAG and bundle, by requiring that the diagram
B ⟵^π N
↑ν      ↑ν
Π(𝐃) ⟵^Π 𝐃
commutes. The function ν in the commuting square (<ref>) just
extracts the set of nodes from a DAG
ν(N,E,Δ) = N .
The objects in the commuting square (<ref>) can
be understood as a combination of a pair of DAGs and a bundle, which I
will call a fibered DAG. In programs, nodes can be added
to the DAG 𝐃 and the bundle in concert such that the
relationship (<ref>) is maintained.
An immediate benefit of such an universal construction of the projection is
that it provides a corresponding preimage Π^-1 which maps DAGs with
the base B as nodes to all DAGs with the set N as nodes.
The maps in the preimage can be written
Δ^s,f : N → 2^E × C(N)
n ↦ŝ^f(Δ(π(n)))
where ŝ^f is to be understood as the distribution of s over
the nodes together with the application of f to the edges. Unfortunately,
in contrast to (<ref>), there will not be a single
function f:E→ E'. Instead, we must allow that ŝ^f maps
into the powerset 2^E'× C(N) instead of E'× C(N). In
addition, the image of f will depend, via the Feynman rules, on the
nodes appearing as children.
Since the resulting notation would be unnecessarily cumbersome, I will
refrain from making the nature of f in (<ref>) explicit as
a function by specifying its domain and writing out all of its
arguments. Nevertheless, the discussion of the example in
section <ref> will demonstrate how a set of Feynman
rules defines the maps Δ^s,f unambiguously.
In this picture, the application of Feynman rules amounts to choosing
a particular element of the preimage Π^-1. It would however be extremely
wasteful to construct the preimage first and to throw away all but one
of its elements later. In section <ref>, I will describe an
algorithm that can be used to construct the desired element directly.
So far, I have assumed that the DAGs are selected by Feynman
rules that are local to each vertex in the case of Feynman diagrams or
to each element δ_i of the map Δ in our DAGs
individually. There are however important exceptions. The most
important is provided by loop expansions. There it is required for
consistency that counterterms are inserted a fixed number of times in
Feynman diagrams. Such conditions on complete Feynman diagrams do
not translate immediately to the DAGs, whose components can enter the
scattering amplitudes (<ref>) more than once. Fortunately, this
problem can be solved by introducing additional unphysical labels
representing loop orders to the physical labels of the nodes and to
select the required combinations of wavefunctions in (<ref>) at
the end, as will be described in section <ref>.
The same applies to selecting fixed orders in the perturbative
expansions, as required for comparing to many results from the literature.
We call two DAGs 𝐃_1=(N_1,E_1,Δ_1)
and 𝐃_2=(N_2,E_2,Δ_2)
equivalent with respect to a
pair of bundles 𝐁_1=(N_1,B,π_1)
and 𝐁_2=(N_2,B,π_2) with the same base B iff there is a
common projected DAG 𝐃
Π_1(𝐃_1) = 𝐃 = Π_2(𝐃_2) .
In this case 𝐃_1 and 𝐃_2 can be viewed as
refinements of the same basic DAG 𝐃.
This notion of equivalence generalizes the notion of topological
equivalence for diagrams, where two diagrams are considered
equivalent if they agree after stripping off all quantum numbers.
With the new notion of equivalence, we can say that the sets of
Feynman diagrams encoded in a DAG are equivalent up to flavor or up to
color.
Using the basic commuting square (<ref>), we can
immediately extend the bundle complex (<ref>) to include the
corresponding DAGs
B_0 ⟵^π_1 B_1 ⟵^π_2 B_2 ⟵^π_3 ⋯
↑ν       ↑ν       ↑ν
𝐃_0 ⟵^Π_1 𝐃_1 ⟵^Π_2 𝐃_2 ⟵^Π_3 ⋯ .
In our applications, this complex does not continue further to the left,
because for each number of leaf nodes there is a natural leftmost nontrivial
DAG 𝐃_P, described in section <ref>
below.
In the following section <ref> I will describe how to use
Feynman rules to walk the lower row of (<ref>) to
construct a DAG for a scattering amplitude efficiently in stages.
§ DAGS FROM FEYNMAN RULES
In principle, it is possible to construct the DAG encoding all Feynman
diagrams in a single step.
First one adds leaf nodes for external states,
labeled by all quantum numbers (momentum, spin/polarization, flavor,
color, …). Which states are to be included here depends on the
choice of algorithm, as has been discussed in
section <ref>.
Then one uses the Feynman rules of the model to add all nodes where the
node and its children correspond to an allowed vertex. This proceeds
iteratively: in the first step all subsets of the leaf nodes appear as
children. In the following steps subsets of all nodes, including the
leaf nodes appear as children subject to the constraint that no leaf
node appears twice if the DAG is expanded recursively with the
functions Δ^* and Δ̂ from (<ref>).
This iteration will
terminate after a finite number of steps when all leaf nodes have been
combined in all possible ways.
While this algorithm inserts nodes that will not appear in
the scattering amplitude the function H (<ref>) can
be used to harvest the minimal DAG.
This is a workable approach, but it is neither the most efficient
nor particularly maintainable in actual code. Since the nodes are labeled by all
quantum numbers, handling them all at once requires the construction
of many nodes that will not appear in the final result. Adding
quantum numbers in several stages instead allows us to use the constraints from
earlier simpler stages to avoid in later stages the construction of
many more complicated nodes that will never be used. While not
relevant for the final numerical code, experience with early versions
of O'Mega <cit.> revealed that the
latter approach requires noticeably less time and memory for constructing the code.
Breaking up the
construction of the DAG into several stages also simplifies the implementation of
each stage and allows separate testing and swapping of different
implementations. Finally, applications often need access to projected
DAGs as described in section <ref> anyway. A
prominent example is the construction of phase space parameterizations
that only refer to kinematical information, such as propagators and masses.
Some of the stages described in the following subsections must be
performed in a particular order, while others can be
interchanged easily.
§.§ Momenta
An element of the set N_P of nodes in the first
DAG 𝐃_P=(N_P,∅,Δ_P) to be constructed
is uniquely labeled by
an element of the power set 2^{1,2,…,n} of labels for the
external momenta, and the edges are unlabeled. The leaf
nodes are the elements n({i}) of N_P and the action of the
map Δ_P is given by
n(I) ↦ { (∅, (n(I_i))_{1≤ i≤ k}) | 2 ≤ k ≤ l-1, ⋃_{i=1}^{k} I_i = I, I_i ≠ ∅ }
where l is the maximum number of legs of the vertices in the model.
Obviously, we can order the nodes n(I) according to the number of
elements of I to prove that there are no cycles in 𝐃_P.
In the case of (<ref>), we only need the elements
of 2^{2,…,n} as labels. In the cases (<ref>)
and (<ref>), only labels with at most n/2 elements are needed.
Finally, the function H (<ref>) is applied to construct the
minimal DAG required for evaluating one of the expressions (<ref>).
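To make the construction of 𝐃_P concrete, the following sketch builds the momentum DAG in Python rather than in O'Mega's OCaml; it is a minimal illustration that assumes a model with only cubic vertices (l = 3), so every non-leaf node has exactly the binary splits of its label set as children, and it does not yet apply the harvesting function H. All names are illustrative.

```python
from itertools import chain, combinations

def binary_splits(I):
    """All unordered splits of the label set I into two non-empty parts."""
    members = sorted(I)
    pivot, rest = members[0], members[1:]          # fixing the pivot avoids double counting
    for r in range(len(rest)):
        for picked in combinations(rest, r):
            left = frozenset((pivot,) + picked)
            right = frozenset(I) - left
            yield frozenset({left, right})

def momentum_dag(n):
    """D_P for a model with only cubic vertices: every node is a subset of the
    external momentum labels {1,...,n}; a node's children are the unordered
    binary splits of its label set, so the singletons are the leaves."""
    labels = range(1, n + 1)
    subsets = chain.from_iterable(combinations(labels, k) for k in range(1, n + 1))
    delta = {}
    for I in subsets:                               # visited in order of growing label sets
        I = frozenset(I)
        delta[I] = set() if len(I) == 1 else set(binary_splits(I))
    return delta

if __name__ == "__main__":
    dag = momentum_dag(4)
    for node, kids in sorted(dag.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
        print(sorted(node), "->", [[sorted(part) for part in split] for split in kids])
```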
§.§ Flavors and Lorentz Structures
In the next stage, the momenta of the leaf nodes of 𝐃_P are
combined with the flavor quantum numbers of the corresponding
external state. The resulting leaf nodes form
the starting point of a new
DAG 𝐃_F=(N_F,V_F,Δ_F) and
bundle 𝐁_F=(N_F, N_P, π_F).
The edges V_F are vertex factors consisting of coupling constants,
Lorentz tensors and Dirac matrices.
Folding 𝐃_P with Φ, using the constructor ω of 𝐃_F
together with the precomposition (<ref>) that
maintains the fibration (<ref>), ensures that the nodes
of 𝐃_P are visited in the correct order of growing label
sets. The function f that is precomposed to ω
in (<ref>) acts on each element
n(I) ↦ (∅, (n(I_i))_{1≤ i≤ k})
of the map Δ_P as follows:
since the n(I_i)∈ N_P have been
processed, they are elements of the base of the growing
bundle 𝐁_F. Therefore, the fibers π_F^-1(n(I_i)) are
already complete and we can compute their cartesian product
Γ = π_F^-1(n(I_1)) ×π_F^-1(n(I_2)) ×⋯ .
We then use the Feynman rules to select all elements of Γ that can be
combined with another flavor to obtain a valid vertex. This
defines a function Γ → 2^{V_F}.
For each of the resulting
flavors, a new node labeled by I and this flavor is added
to 𝐃_F and 𝐁_F
together with the corresponding vertex factors and
elements of Γ as edges and children, maintaining the
fibration (<ref>).
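The following Python fragment is a toy illustration of this stage, not the O'Mega implementation: the dictionary VERTICES stands in for the Feynman rules of the model (QED-like three-point couplings only, with vertex factors, Lorentz structures and couplings omitted), and flavor_fusions enumerates the cartesian product Γ of the fibers and keeps the combinations that match a vertex.

```python
from itertools import product

# Toy vertex table: a sorted tuple of child flavors maps to the parent flavors
# that the Feynman rules allow them to fuse into.
VERTICES = {
    ("A", "e-"): ["e-"],
    ("A", "e+"): ["e+"],
    ("e+", "e-"): ["A"],
}

def flavor_fusions(fibers):
    """Given the fibers pi_F^{-1}(n(I_i)) -- the flavored nodes already built
    above each momentum child -- enumerate the cartesian product Gamma and keep
    the combinations matching a vertex, returning (parent_flavor, children) pairs."""
    fusions = []
    for combo in product(*fibers):                    # the cartesian product Gamma
        key = tuple(sorted(flavor for flavor, _ in combo))
        for parent in VERTICES.get(key, []):
            fusions.append((parent, combo))           # vertex factor would be attached here
    return fusions

# usage: two momentum children, each carrying a single flavored node
fibers = [[("e-", frozenset({1}))], [("A", frozenset({2}))]]
print(flavor_fusions(fibers))
```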
This algorithm has been implemented in
O'Mega <cit.>
and is completely independent of the kind of Feynman
rules. It can accommodate both hardcoded rules and rules derived from a
UFO file <cit.>. The only potential performance
bottleneck is the efficient matching of vertices to the elements
of a Γ representing a large number of children. For vertices with few legs, this is not a
practical issue, but care has to be taken for vertices with many legs
where the factorial growth of the number of permutations might be felt.
Once the flavors have been assigned, it is known which fermion lines
contribute to the computation of each node. This information must
also be
added to the node in order to be able to assign the correct sign to
interfering contributions in (<ref>) later. Special care must be taken if the model
contains Majorana
fermions <cit.>.
By construction, after the fold is complete, the new
DAG 𝐃_F encodes all the information needed to
compute the scattering amplitude for the
leaf nodes in a theory without color, using one of the
formulae (<ref>).
Some nodes in 𝐃_F might not be needed due
to conserved quantum numbers. Therefore the
function H (<ref>) from 𝐃_F is applied again to construct the
minimal DAG required to evaluate one of the expressions (<ref>).
§.§ Colors
Since the color representation depends on the flavor, the assignment
of color quantum numbers in the construction of the
DAG 𝐃_C=(N_C,V_C,Δ_C)
naturally comes after the construction of 𝐃_F.
We can now follow the steps of the previous stage, as
described in section <ref>, word for word, only
replacing the subscripts (F,P) by (C,F).
The implementation in O'Mega uses the realization of the color flow
basis described in <cit.>, but, except for the labeling of
the nodes in N_C, the form of the vertices in V_C
and the Feynman rules to be used, the algorithm is
completely independent of the representation of the color algebra.
Having the color information available algebraically makes it possible to compute
color factors and color correlators <cit.> analytically.
§.§ Coupling Orders
As already mentioned in section <ref>, there are cases
where it is important that the Feynman diagrams encoded by the DAG
contain certain coupling constants with fixed powers. The most
important examples are the counterterms and the terms of an effective
action in a loop expansion. Moreover, the construction of a DAG that
includes self-energy-type terms will not terminate unless a finite
maximum expansion order is prescribed.
For practical purposes it is sometimes also important to compute only
a part of a scattering amplitude corresponding to fixed powers of
couplings. Such results are often available in the literature from Feynman
diagram based calculations and a comparison for the purpose of
validation is only possible if the DAG based calculation can select
exactly the same contributions.
A priori, this conflicts with the representations (<ref>) of
scattering amplitudes as DAGs, since the wavefunctions or currents
will have accumulated different powers of couplings that will be mixed
by (<ref>).
Fortunately, there is a simple solution. For example, in the case
of (<ref>) we can write
ℳ_o({1,2,…,n}) =
∑_{I_1∪ I_2∪ I_3={1,…,n}, o_1+o_2+o_3=o}
K_{I_1,I_2,I_3} ϕ_{o_1}(I_1) ϕ_{o_2}(I_2) ϕ_{o_3}(I_3)
to compute the scattering amplitude at the coupling order o. The
only change required is that the wavefunctions have to keep track of
the coupling orders accumulated in their recursive computation. Since
the powers of the couplings are additive, we never have to add the
wavefunctions or currents that exceed the requested order to the DAG.
This necessitates augmenting the set of labels of the nodes by unphysical
“quantum numbers” corresponding to the coupling orders. It can be
implemented easily, as long as the number of coupling orders to be tracked
remains moderate.
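A minimal sketch of this bookkeeping, with illustrative names, is the following: coupling orders are carried as an extra additive label, and any fusion whose accumulated order exceeds the requested one is discarded before a node is created.

```python
def fused_order(child_orders, vertex_order, max_order):
    """Coupling orders are additive: a wavefunction built from children with
    the given orders through a vertex carrying vertex_order powers of the
    coupling has their sum; anything above max_order is never added to the DAG."""
    total = sum(child_orders) + vertex_order
    return total if total <= max_order else None

# e.g. requesting the amplitude at fixed coupling order o = 4
print(fused_order((1, 2), 1, max_order=4))   # 4    -> keep this wavefunction
print(fused_order((2, 2), 1, max_order=4))   # None -> pruned before construction
```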
§.§ Skeleton Expansion
If we are using DAGs to efficiently implement a skeleton expansion,
the remarks in section <ref> apply word for word by
replacing “coupling order” by “loop order”.
§.§ Multiple Amplitudes
In practical applications <cit.>, it is usually necessary
to compute scattering amplitudes for the same external
momenta, but more than one combination
of flavors and colors at the same time. These flavor and color
combinations often overlap pairwise and the sets of
leaf nodes will also overlap, i.e. L_1∩ L_2≠∅. In this
case, it is efficient to combine the corresponding
DAGs 𝐃_L_1 and 𝐃_L_2 into a single DAG and to compute the
scattering amplitudes from this DAG in order to reuse nodes
from the part of the DAG built on L_1∩ L_2. For this purpose,
we can generalize the
union defined in (<ref>) to a map
∪: 𝒟_L_1×𝒟_L_2→𝒟_L_1∪ L_2
in an obvious way.
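With DAGs represented as maps from nodes to sets of children specifications, this union is a plain dictionary merge; the sketch below is illustrative only and reuses the representation from the earlier momentum-DAG example.

```python
def dag_union(dag1, dag2):
    """Union of two DAGs given as {node: set of children-specs} maps.
    Nodes built on the overlap L_1 ∩ L_2 of the leaf sets appear only once
    in the merged map and are therefore evaluated only once."""
    merged = {node: set(kids) for node, kids in dag1.items()}
    for node, kids in dag2.items():
        merged.setdefault(node, set()).update(kids)
    return merged

# usage with the earlier momentum-DAG sketch, whose subsets of {1,...,4} overlap:
# merged = dag_union(momentum_dag(4), momentum_dag(5))
```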
§ CODE GENERATION
The example (<ref>) can be translated directly into, e.g., Fortran,
with variables denoting four-momenta and wavefunctions, functions
computing external wavefunctions and propagators, and coupling
constants entering as factors. Using overloaded operators makes it
possible to write similarly concise and readable code for realistic
models with standard model quantum numbers.
In the case of
more general models, functions implementing the vertex factors can
be generated from UFO files <cit.>.
Identically structured code can be
emitted as bytecode for a virtual machine that realizes the operators
as basic instructions <cit.>. The improving memory
bandwidth of graphics processing units even makes it feasible to start
targeting GPUs for interesting examples.
As already mentioned in the introduction, the generation of robust
numerical code is much more challenging if the DAG encodes diagrams
that contain loops. The problem has been solved for the standard
model <cit.>.
The structures described in the paper will help with the task of
extending this approach to general models.
§ CONCLUSIONS
I have described the algebraic structures that organize recursive
calculations in perturbative quantum field theory without the need to
expand intermediate expressions into Feynman diagrams. In functional
programming languages, these algebraic structures translate
directly into data structures. In a second step, these data
structures are translated to efficient numerical code for any
programming language or hardware target required.
This algebraic approach adds flexibility over purely numeric
implementations tied to specific models and computing targets.
It allows for more extensive
consistency checks and paves the way for more challenging
applications.
Acknowledgments
I thank Wolfgang Kilian, Jürgen Reuter and the other members of the Whizard
team for the decades long productive collaboration.
This work is supported by the German Federal Ministry for Education and
Research (BMBF) under contract no. 05H21WWCAA.
§ IMPLEMENTATION
§.§ DAGs
Here is the relevant subset of the
signature <cit.> of the module in
O'Mega <cit.>, implementing the functions from
sections <ref> and <ref>. For flexibility,
this module is implemented as a functor application on the
types , and , corresponding
to N, E and C(N) respectively
Here declares an abstract data type and
declares values and functions, the latter just being values in a
functional programming language. The type is
polymorphic. The actual signature in O'Mega contains additional
convenience functions that can be build from the functions presented
here.
Note that this implementation breaks the function ω (<ref>)
into products of functions ω^0 and ω^1, with
ω_{n↦∅} = ω^0_n ,
ω_{n↦{(e_1,c_1),…,(e_k,c_k)}} = ∏_{i=1}^{k} ω^1_{n↦(e_i,c_i)} .
The function ω^0 can be used to construct an element of 𝒟_L
from ϵ∈𝒟_∅, while the action
of ω^1 does not leave
the category 𝒟_L. This provides a better interface
for programming, but the ω used in the main part of the paper
allowed a more concise writeup of the mathematical structures in
section <ref>.
Correspondingly, the fold Φ from (<ref>) is broken into
processing all nodes and processing
all N→ E× C mappings element-by-element. The
equivalent of (<ref>) is then
Note that this gives up some generality, because the Φ
from (<ref>) could process the sets of E× C as a whole
and not only element-by-element. However, this interface is more
straightforward and is better tailored to our applications.
The harvesting function implements H (<ref>). In particular,
it finds the subset of the DAG that is reachable
from a given node and adds it to a target DAG.
This way, applications
can compute a minimal DAG for further processing.
Since the construction of the DAG 𝐃_P
(cf. section <ref>) is very simple, it had been combined
with the construction of 𝐃_F
(cf. section <ref>) in O'Mega <cit.>
before the structures described in this paper were elaborated.
However, the separation of the remaining stages described in
section <ref> forms the backbone of the current version of
O'Mega.
§.§ Bundles
Here is the signature of the bundle
module in O'Mega <cit.>. Again, a functor is applied to
two types and a function corresponding to X, B and π, respectively.
The semantics of the functions is evident from the discussion of
bundles in section <ref>. Note that π is universal
for all bundles with this type, while π^-1 depends on the
elements added to the bundle previously.
entry_id: http://arxiv.org/abs/2306.09020v1
published: 20230615102806
title: Distributionally Robust Stratified Sampling for Stochastic Simulations with Multiple Uncertain Input Models
authors: Seung Min Baik, Eunshin Byon, Young Myoung Ko
primary_category: math.OC
categories: math.OC, cs.PF
Distributionally Robust Stratified Sampling for Stochastic Simulations with Multiple Uncertain Input Models
Seung Min Baik
Pohang University of Science and Technology, [email protected]
Eunshin Byon
University of Michigan, [email protected], https://ebyon.engin.umich.edu/
Young Myoung Ko
Pohang University of Science and Technology, [email protected], https://www.lstlab.org/
This paper presents a robust version of the stratified sampling method when multiple uncertain input models are considered for stochastic simulation. Various variance reduction techniques have demonstrated their superior performance in accelerating simulation processes. Nevertheless, they often use a single input model and further assume that the input model is exactly known and fixed. We consider more general cases in which it is necessary to assess a simulation's response to a variety of input models, such as when evaluating the reliability of wind turbines under nonstationary wind conditions or the operation of a service system when the distribution of customer inter-arrival time is heterogeneous at different times. Moreover, the estimation variance may be considerably impacted by uncertainty in input models. To address such nonstationary and uncertain input models, we offer a distributionally robust (DR) stratified sampling approach with the goal of minimizing the maximum of worst-case estimator variances among plausible but uncertain input models. Specifically, we devise a bi-level optimization framework for formulating DR stochastic problems with different ambiguity set designs, based on the L_2-norm, 1-Wasserstein distance, parametric family of distributions, and distribution moments.
In order to cope with the non-convexity of objective function, we present a solution approach that uses Bayesian optimization.
Numerical experiments and the wind turbine case study demonstrate the robustness of the proposed approach.
input uncertainty; Monte Carlo sampling; reliability analysis; simulation budget allocation; variance reduction
§ INTRODUCTION
This paper devises a new input sampling strategy for stochastic simulation to estimate outputs of interest under multiple uncertain input models. To acquire system outputs, stochastic simulation typically samples random input parameters and runs a computer model repeatedly. We focus on stochastic computer models that produce noisy outputs despite identical input parameters <cit.>. Stochastic simulation with stochastic computer models involves two sources of randomness: the input's probability distribution and the output's inherent stochasticity.
This study is motivated by reliability analysis for wind turbines using stochastic simulation <cit.>. The National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy has created aeroelastic computer models, such as TurbSim <cit.> and FAST <cit.>, to aid in the design of reliable wind turbines. To analyze the failure probability that the load response exceeds a certain threshold level, variance reduction techniques have been proposed to enhance computing efficiency over crude Monte Carlo sampling <cit.>.
Variance reduction studies often employ a single input model. Yet, some situations require handling several input models, such as when the input characteristics change over time or across dispersed locations. Consider a multi-turbine wind farm. Each turbine experiences a different wind condition because upstream turbines' operations add to the turbulence, which changes the free-flow wind condition <cit.>, referred to as wake effects <cit.>. As a result, downstream turbines experience heterogeneous wind conditions. Furthermore, even at a fixed location, the wind patterns change throughout the year <cit.>. Calculating the failure probabilities by running simulations with various input models will require extensive computing power. On the other hand, the optimal sampling budget allocation for a specific input model may result in significant inefficiency for other input models.
Furthermore, conventional variance reduction approaches assume that the true input distribution is known. But occasionally, a fitted or empirical distribution that is estimated with limited observations is used as its surrogate. When measurement data is unavailable, a physics-based numerical model is employed to approximate the true distribution <cit.>. The estimation errors in the input model may result in poor estimation quality of the simulation response. Though many studies have been carried out recently to take input uncertainty into account, the majority are still limited to a single input model.
Among several variance reduction techniques, this study is concerned with stratified sampling. We devise a new variance reduction technique, referred to as distributionally robust stratification (DR-strat for short), for determining a robust input sampling strategy. Our approach involves allocating the limited simulation budget when estimating performance measures under different uncertain input models. Hinging upon the fundamentals of distributionally robust optimization (DRO), we minimize the worst-case estimator variance over a set of plausible distributions. Specifically, we formulate a bi-level optimization problem where the outer problem minimizes the maximum variance using a sampling vector across strata as a decision vector, while the inner problem finds a plausible (uncertain) input model with the largest variance. We employ Bayesian optimization (BO) to search the solution space probabilistically.
Below we summarize the contribution of our study.
* We propose a new variance reduction technique to determine a robust input sampling strategy under multiple input models' uncertainties. To the best of our knowledge, this is the first study to take the input model uncertainty into account in variance reduction techniques for stochastic simulation.
* We provide a framework for formulating an optimization problem to derive a robust input sampling strategy. In contrast to most existing DRO studies, which deal with a single input model, we consider multiple input models in formulating the bi-level DR stochastic problem and suggest a solution procedure by adopting BO.
* We construct four types of ambiguity sets of plausible distributions that represent potential candidates for true input models, based on L_2-norm, 1-Wasserstein distance, parametric family, and distribution moments. We also investigate how various set design approaches affect the estimation result. While we demonstrate four types of ambiguity sets, the proposed bi-level optimization methodology is easily extensible to other types of sets as well.
* Our numerical experiments and a case study involving wind turbine reliability demonstrate that the proposed method successfully derives an estimator that robustly reacts to various input model uncertainties. As a result, our approach enables the efficient reuse of simulation results for performance measure estimation under multiple uncertain input models, which is crucial in the circumstances with limited computational budgets.
The remainder of the paper is organized as follows. Section <ref> reviews previous studies.
Section <ref> summarizes the conventional stratification method and provides the overall framework for deciding a robust input sampling strategy. Section <ref> discusses the DR-stratified sampling method with mathematical details. Section <ref> conducts numerical experiments. Section <ref> concludes and suggests future research directions.
§ LITERATURE REVIEW
Overall, this study is closely related to the two broad areas of research: stochastic simulation under input uncertainty and DRO. First, studies on input uncertainty in stochastic simulation include multiple research streams, including input uncertainty quantification <cit.>, sensitivity analysis on additional input data collection, and the simulation optimization under input uncertainty <cit.>. <cit.> and <cit.> provide a comprehensive review of related research studies.
This work is more closely related to the third of these streams. In particular, our approach is similar to the computational budget allocation problem in ranking and selection (R&S) studies, concerning input uncertainty to pursue a robust optimal sampling method. <cit.> investigate the impact of input uncertainty on simulation output with a mixed-effect model and adjust indifference-zone (IZ) procedures to guarantee the average probability of correct selection (PCS). <cit.> follow a robust approach for optimal computing budget allocation and solve approximate optimization problems to maximize PCS. <cit.> study a robust selection of the best problem with the IZ approach based on the concept of an ambiguity set.
When evaluating performance, these approaches consider both the alternative and the input model to search for the best among a set of alternatives. Our focus is slightly different, as we are particularly interested in the performance (i.e., estimator variance) solely impacted by the input model. While R&S studies typically require separate simulations for different alternatives under the same input model, which is both effective and necessary for their purposes, we simultaneously assess the influence of the sampling strategy across multiple input models.
Next, studies in the DRO literature treat uncertain input models with the concept of ambiguity set. <cit.> conduct a study on modeling the DRO problem with a moment-constrained ambiguity set and developing a tractable solution procedure for solving it. <cit.> adopt the empirical likelihood method to interpret the conventional DRO approach and investigate the confidence interval for the target performance to address a potential loss of coverage accuracy. <cit.> estimate the tail-related quantity of interest and investigate the characteristics of the worst-case objective. <cit.> review related studies comprehensively.
Similar to these DRO studies that generate an ambiguity set, our approach makes use of ambiguity sets. However, we consider several ambiguity sets, one corresponding to each input model, unlike most previous DRO studies that only analyze a single input model.
§ PROBLEM DESCRIPTION
Consider a black box computer model that generates an output Y ∈ℝ given an input X ∈ℝ^P following a distribution F. Given X, the computer model produces either a stochastic (or noisy) or deterministic output. In this study, we focus on the stochastic computer model, mirroring the stochasticity of NREL simulators employed in our motivating wind turbine application. However, our approach can be easily adopted in deterministic computer models.
Let Y(X) denote the simulation output at the input X. For the reliability analysis to estimate a failure probability ℙ(Y(X)>l), representing the probability of the simulation output being larger than a threshold l, we use g(x) = 1(Y(x)>l) where 1(·) is an indicator function. With the stochastic computer model, g(x) is random even at fixed x. We are interested in estimating the mean of g(X) (i.e., μ ≜ 𝔼[ g(X)] = 𝔼_X [ 𝔼_Y [g(X) | X ] ]). With g(x) = 1(Y(x) > l), we have μ = 𝔼_X [ 𝔼_Y [1( Y(X)>l) | X ] ] = 𝔼_X [ ℙ( Y(X)>l | X) ] = ℙ(Y(X)>l). Our objective is to design an estimator μ̂ that effectively estimates the target performance measure μ. Proper allocation of the simulation efforts is crucial under a fixed computational budget N_T when the computational cost for evaluating g(X) is expensive.
This study considers a discrete input vector X, as a starting point of research that addresses input uncertainty in variance reduction techniques, for computational purposes. Most existing DRO studies have primarily focused on ensuring tractability when constructing ambiguity sets and formulating optimization problems <cit.>. Considering a continuous input has often led to situations where the optimization problem becomes computationally intractable, except for special cases with inherent structural features. Therefore, many practical problems have assumed a discrete input <cit.>, as it allows for a feasible solution procedure for the DRO problem. Similarly, we also use a discrete input to ensure that our DR-strat problem can be solved under all four types of ambiguity sets.
Still, we would like to note that the proposed methodology is practically applicable to situations where discretization of continuous inputs can be employed. One of the most commonly used methods in the literature on wind energy reliability is the so-called binning method <cit.>. It partitions the wind speed range into multiple intervals and runs a computer model at each interval (or bin). The strata in the stratified sampling can be formed by these intervals, and their representative values can be set to be the domain of X. Further, when the input vector is continuous (e.g., wind speed), we can discretize it into multiple bins, as demonstrated in our case study in Section <ref>.
§.§ Recap: Stratified Sampling for Single Input Model
The crude Monte Carlo sampling is the most basic approach that provides an unbiased estimator for a single input model with the distribution F. It estimates the performance measure μ by μ̂^MC = ∑_n=1^N_T g(X_n)/N_T, where {X_n}_n = 1^N_T are independent and identically distributed (i.i.d.) samples drawn from F. When the event of interest occurs rarely, such as the exceedance event {Y(x)>l} with large l, a significant number of simulation runs may be required to obtain an accurate output estimate. Alternatively, stratified sampling, one of the popular variance reduction techniques, provides a more effective way of drawing input samples to reduce the estimator variance Var[μ̂].
Let us review conventional stratified sampling for a single input model. Suppose that the sampling domain Ω of the input vector X can be divided into mutually exclusive and exhaustive strata {S_k}_k= 1^K. Conditional output mean m_k = 𝔼[g(X)|X∈ S_k] of the kth stratum can be estimated by averaging the simulation outputs at n_k conditional inputs as ∑_j=1^n_k g(X_j|k)/n_k, where {X_j|k}_j = 1^n_k are i.i.d. samples drawn from a conditional distribution of F given that an input belongs to the kth stratum (i.e., {X∈ S_k}). We call n = (n_1, n_2, …, n_K) a sampling vector. A probability of the kth stratum is ω_k = ℙ(X ∈ S_k). We assume ω_k > 0, ∀ k, to avoid any trivial issues. With strata probabilities ω = (ω_1, ω_2, …, ω_K), we define the stratified sampling estimator by aggregating the conditional estimates from each of the strata as follows:
μ̂^Str(n) = ∑_k=1^K ω_k ∑_j=1^n_k g(X_j|k)/n_k.
The stratification estimator is always unbiased (i.e., 𝔼 [μ̂^Str(n)]=μ) regardless of n. However, the sampling vector n affects the stratified sampling estimator variance. Please refer to Online Supplement A.1 for details. Suppose that the computational cost of drawing an input, as well as evaluating an output, is the same across all strata. Given a total simulation budget N_T, the following sampling vector is known to be optimal (i.e., it minimizes Var[μ̂^Str(n)]) <cit.>:
n^Str = (n_1^Str, n_2^Str, …, n_K^Str), n_k^Str = N_T ω_k σ_k / ∑_{k'=1}^{K} ω_{k'} σ_{k'}, ∀ k = 1, 2, …, K.
In practice, n_k^Str's are rounded to integers by allowing small non-proportionalities.
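As a small illustration of this recap, the following Python sketch computes the optimal allocation and the stratified estimate; the variable names are ours, and the rounding step introduces exactly the small non-proportionalities mentioned above.

```python
import numpy as np

def neyman_allocation(weights, sigmas, n_total):
    """Variance-minimizing allocation n_k ∝ ω_k σ_k for a single input model;
    rounding to integers allows small non-proportionalities."""
    raw = n_total * weights * sigmas / np.sum(weights * sigmas)
    return np.maximum(1, np.round(raw)).astype(int)   # keep every stratum sampled

def stratified_estimate(weights, g_draws_per_stratum):
    """μ̂^Str = Σ_k ω_k × (mean of the g-evaluations drawn in stratum k)."""
    return sum(w * np.mean(g) for w, g in zip(weights, g_draws_per_stratum))

# toy usage with K = 3 strata
weights = np.array([0.5, 0.3, 0.2])
sigmas = np.array([0.1, 0.4, 0.8])
print(neyman_allocation(weights, sigmas, n_total=100))   # e.g. [15 36 48]
```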
§.§ Multiple Input Models
As discussed earlier in Section <ref>, multiple uncertain input models need to be taken into account in several circumstances. Considering M input models, we are interested in estimating M performance measures μ_m^c ≜ 𝔼[ g(X_m^c) ] for m=1, 2, …, M, where the input random vector or variable (R.V.) X_m^c follows the mth input distribution F_m^c. Here, the superscript c is used to denote the correct (or true) information. We suppose that these true distributions are unknown, and they are inferred using empirical data in practice. Our goal is to design an estimator that performs well for all M input models in terms of reducing variance while also being robust to the uncertainties in input distributions.
Specifically, we minimize the maximum of M estimator variances, max_{1 ≤ m ≤ M} Var[μ̂_m (n) ], where each variance corresponds to the estimator for a different input model with the same sampling vector n. However, the precise maximum value is impossible to calculate because the input distributions are not known. To tackle this, we consider an ambiguity set, denoted by ℱ_m, to represent the set of probable distributions of the mth input model for 1 ≤ m ≤ M. A set ℱ_m is constructed to include distributions close to a nominal distribution (e.g., a predicted or fitted input distribution), such as those within a certain Wasserstein distance. Consequently, to achieve robustness against this uncertainty, we adopt a DRO approach, whereby we consider the worst-case estimator variance over a set of distributions.
We aim to allocate computational budgets across strata to minimize the maximum variance among multiple uncertain input models. Let μ̂^DR-Str(n;F_m) denote the estimator for the mth input model under our robust stratified sampling approach (the mathematical definition of μ̂^DR-Str(n;F_m) will be provided in Section <ref>). Then the problem boils down to finding a robust input sampling vector n^DR-Str, that is, how many samples to draw from the predetermined strata, by solving the following problem.
min_n max_{1 ≤ m ≤ M} max_{F_m ∈ ℱ_m} Var[μ̂^DR-Str(n;F_m) ].
By minimizing the maximum of worst-case estimator variances, we prevent the estimator variance from growing too large even with the poor estimation of uncertain input models.
§ METHODOLOGY: DR-STRATIFIED SAMPLING
The conventional stratification estimator, discussed in Section <ref>, is determined based on the characteristics of input distribution F and output function g. Thus, the optimal simulation budget allocation, or the sampling vector n^Str, changes as F varies.
This section proposes a distributionally robust stratification method designed to robustly respond to uncertainties within multiple input models.
§.§ Formulation of DR-Strat Problem
This section presents the detailed formulation of the DR-strat problem for determining the DR-strat sampling vector in (<ref>). We first define the new estimator design that is suitable to handle multiple uncertain input models. Then we formulate a bi-level optimization problem where the inner problem finds the worst-case estimator variance among the plausible input models and the outer finds the optimal sampling vector.
§.§.§ DR-Stratfication Estimator.
To estimate outputs of interest under several input models, our strategy is to run simulations under a reference distribution (a single common distribution used to draw inputs for all models), instead of running simulations under each input model separately (each with its own sampling distribution). We then reuse the obtained simulation outcomes for each input model. This procedure enables us to significantly reduce simulation efforts. The problem is how to allocate simulation efforts.
Let us consider the mth input model. The new estimator considers both the reference distribution F_ref and the plausible distribution F_m, a candidate for characterizing the input model. Here, F_m is an element of ℱ_m constructed upon the nominal (or base) distribution F̅_m of the mth input model. This nominal distribution serves as a basis (such as the center point) for creating the ambiguity set. For the input sampling domain Ω = {x_i}_i=1^|Ω|, we denote the probability mass function (pmf) values of F_ref as p_ref = ( p_ref,1, …, p_ref, |Ω|) and F_m as p_m = ( p_m,1, …, p_m,|Ω|). Further, we use the notations X_ref and X_m to denote input R.V.s following F_ref and F_m, respectively. Thus, we have ℙ( X_ref = x_i ) = p_ref, i and ℙ(X_m = x_i ) = p_m, i for 1 ≤ i ≤ |Ω|. Suppose that we divide Ω into K strata for K ≤ |Ω|. The probabilities that X_ref and X_m belong to the kth stratum, S_k, become ω_ref,k = ℙ( X_ref∈ S_k ) = ∑_i ∈{i| x_i ∈ S_k} p_ref,i and ω_m,k = ℙ( X_m ∈ S_k ) = ∑_i ∈{i| x_i ∈ S_k} p_m,i for k=1,2,…,K. Similar to the conventional stratified sampling, we assume these strata probabilities are strictly positive to avoid trivial issues.
We additionally define notations X_ref,k and X_m,k to denote the conditional R.V.s, given that X_ref and X_m belong to the kth stratum, respectively (i.e., X_ref,k =_d X_ref | {X_ref ∈ S_k} and X_m,k =_d X_m | {X_m ∈ S_k}, where =_d denotes equality in distribution). So, the conditional probabilities of input x_i given that {x_i ∈ S_k} are ℙ( X_ref,k = x_i ) = p_ref,i / ω_ref,k and ℙ( X_m,k = x_i ) = p_m,i / ω_m,k.
Suppose we draw n_k i.i.d. samples, denoted by {X_j|k}_j=1^n_k, from the conditional reference distribution of F_ref given { X_ref∈ S_k } for 1 ≤ k ≤ K. Then, the new DR-strat estimator for estimating E[g(X_m)] under the input distribution F_m can be defined as follows:
μ̂^DR-Str (n; F_m) = ∑_k=1^K ω_m,k/n_k∑_j=1^n_k g(X_j|k) ℙ(X_m,k = X_j|k)/ℙ(X_ref,k = X_j|k) .
Here, please note that μ̂^DR-Str (·) has an additional argument F_m (for the evaluation), unlike μ^Str(·) in (<ref>) that does not. This indicates that the estimation is performed for the mth input model with distribution F_m. Further, the strata probability ω_k in (<ref>) is substituted with ω_m,k in order to consider F_m in the left-hand side of the equation.
We would like to highlight that there is another important difference between the estimators μ̂^Str and μ̂^DR-Str. In estimating the measure of interest when the same distribution is used for both input sampling and evaluation, μ̂^Str in (<ref>) provides unbiased estimation for 𝔼[g(X)]. On the contrary, DR-strat samples inputs from the reference distribution F_ref but estimates the output under another distribution F_m. Thus, we need to use the likelihood ratio ℙ(X_m,k = X_j|k) / ℙ(X_ref,k = X_j|k) in μ̂^DR-Str in (<ref>) to correct the bias.
Proposition <ref> shows that the DR-strat estimator is unbiased (i.e., the estimator mean becomes the same as the true output mean when the input distribution is F_m).
For a random vector X_m following a distribution F_m,
𝔼[μ̂^DR-Str (n;F_m) ] = 𝔼[ g(X_m) ].
Next, Proposition <ref> derives the variance of the DR-strat estimator.
For a random vector X_m following a distribution F_m,
Var[ μ̂^DR-Str (n;F_m) ] = ∑_{k=1}^{K} (1/n_k) ( ω_{ref,k} ∑_{i: x_i ∈ S_k} 𝔼[ g(x_i) ] ℙ(X_m = x_i )^2 / ℙ(X_ref = x_i ) − ( ∑_{i: x_i ∈ S_k} 𝔼[ g(x_i) ] ℙ(X_m = x_i ) )^2 ).
Online Supplement A.2 and A.3 provide the detailed proofs for the above propositions.
As μ̂^DR-Str (n;F_m) is an unbiased estimator for 𝔼[ g(X_m) ] as shown in (<ref>), we want to minimize its variance Var[ μ̂^DR-Str (n;F_m) ] in (<ref>) by allocating simulation budgets adequately. In the subsequent discussion, we will present a new formulation to robustly allocate budgets across multiple strata in order to handle multiple uncertain input distributions.
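For concreteness, the following sketch evaluates the DR-strat estimator of (<ref>) and the variance expression of Proposition <ref> on a discrete domain Ω; it assumes that g is an indicator (so 𝔼[g²|x] = 𝔼[g|x]) and that the conditional means 𝔼[g(x_i)] are available, e.g., from pilot runs. All function and variable names are ours.

```python
import numpy as np

def dr_strat_estimate(strata, p_ref, p_m, draws):
    """μ̂^DR-Str: draws[k] is a list of (index i into Ω, observed g value) pairs
    sampled from the conditional reference distribution in stratum k; the
    likelihood ratio corrects for evaluating under p_m rather than p_ref."""
    est = 0.0
    for k, idx in enumerate(strata):
        w_ref, w_m = p_ref[idx].sum(), p_m[idx].sum()
        terms = [g * (p_m[i] / w_m) / (p_ref[i] / w_ref) for i, g in draws[k]]
        est += w_m * np.mean(terms)
    return est

def dr_strat_variance(n, strata, p_ref, p_m, g_mean):
    """Estimator variance of Proposition 2, with g an indicator so that
    E[g^2 | x] = E[g | x]; g_mean[i] holds E[g(x_i)] for each point of Ω."""
    var = 0.0
    for k, idx in enumerate(strata):
        w_ref = p_ref[idx].sum()
        first = w_ref * np.sum(g_mean[idx] * p_m[idx] ** 2 / p_ref[idx])
        second = np.sum(g_mean[idx] * p_m[idx]) ** 2
        var += (first - second) / n[k]
    return var
```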
§.§.§ DR-Strat Problem.
We start constructing the DR-strat problem by formulating the inner maximization problem first, which aims to find the maximum value of worst-case variances among multiple sets of plausible input models. In this stage, the sampling vector n (the decision vector of the outer minimization problem) is given. Using the new estimator design in (<ref>), the inner maximization problem becomes
max_{1 ≤ m ≤ M} max_{F_m ∈ ℱ_m} Var[μ̂^DR-Str(n;F_m) ],
where the plausible distributions F_m's and input model index m are the decision variables.
We define an index set of input values at the kth stratum as I_k = {i | x_i ∈ S_k} for 1 ≤ k ≤ K. Noting that X_m is a discrete R.V., the distribution F_m ∈ℱ_m has the equivalent meaning with p_m ∈𝒫_m with 𝒫_m being the ambiguity set expressed in terms of pmfs. Using the estimator variance in (<ref>), the inner problem can be reformulated as follows:
max_{1 ≤ m ≤ M} max_{p_m ∈ 𝒫_m} ∑_{k=1}^{K} (1/n_k) ( ω_{ref,k} ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p_{m,i}^2 / p_{ref,i} − ( ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p_{m,i} )^2 ).
Here, 𝔼[g(x_i)] in (<ref>) are supposed to be estimated from the pilot stage simulation (e.g., by fitting meta-models to data).
Next, to find the optimal sampling strategy that minimizes the maximum value of worst-case estimator variances, we formulate the DR-strat problem as follows:
(DR-Str)   min_n  max_{1 ≤ m ≤ M} max_{p_m ∈ 𝒫_m}  ∑_{k=1}^{K} (1/n_k) ( ω_{ref,k} ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p_{m,i}^2 / p_{ref,i} − ( ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p_{m,i} )^2 )
s.t.   ∑_{k=1}^{K} n_k = N_T.
The outer minimization problem determines a sampling vector n under a budget constraint. We call the optimal solution of this min-max problem a DR-strat sampling vector, denoted by n^DR-Str.
Section <ref> describes how we solve this problem.
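A crude way to approximate the outer objective v(n), useful for prototyping before plugging in a proper solver for the inner maximization, is to evaluate the variance over a finite collection of candidate pmfs drawn from each ambiguity set; the sketch below reuses dr_strat_variance from the previous example and is only a stand-in for the solvers described in Online Supplement B.

```python
def inner_objective(n, strata, p_ref, g_mean, candidate_pmfs_per_model):
    """Approximate v(n): the largest DR-strat variance over finitely many
    plausible pmfs per input model (a surrogate for max over the full sets)."""
    return max(
        dr_strat_variance(n, strata, p_ref, p_m, g_mean)
        for candidate_pmfs in candidate_pmfs_per_model
        for p_m in candidate_pmfs
    )
```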
§.§ Ambiguity Set Design
This section discusses the design of the ambiguity set in the DR-strat problem. The configuration of the ambiguity set significantly affects the result from the DR-strat approach, as it determines the search space of the inner maximization problem. We explore four types of ambiguity sets, those often employed in the literature <cit.>. The first two sets are based on discrepancy functionals associated with the L_2-norm and 1-Wasserstein distance. We also construct an ambiguity set based on distribution moments. Finally, a collection of the same parametric distributions is employed. These four set types provide a comprehensive analysis of the DR-strat's performance under various aspects of input model uncertainty, while other set design can also be used, based on the specific problem structure at hand and the prior knowledge available about the input distribution.
We define an ambiguity set as a collection of pmfs, as we consider discrete input vectors X_m's. Let p̅_m = (p̅_m,1, p̅_m,1, …, p̅_m,|Ω|) denote the mth nominal distribution. The elements in p̅_m or p_m should add up to one (i.e., ∑_i=1^|Ω|p̅_m,i = ∑_i=1^|Ω| p_m, i = 1). We let a positive scalar value γ_m denote a parameter that quantifies the degree of uncertainty. Depending on the set design, we will use an extra subscript or superscript in the subsequent discussion. The size parameter γ_m can be chosen using domain knowledge or the level of confidence about the nominal distribution.
We assume that each realization of the mth input model, p_m∈𝒫_m, is independent of the realization of another input model, p_m'∈𝒫_m', when m ≠ m'. Thus, we construct the ambiguity set for each input model separately. Future extension of this research may address possible dependencies between input models.
Now, we discuss each type of ambiguity set for the mth input model. First, we define the ambiguity set based on the L_2-norm as follows:
𝒫_m^L_2 = { p_m | ‖ p_m − p̅_m ‖_2 ≤ γ_m^L_2 } = { p_m | ∑_{i=1}^{|Ω|} ( p_{m,i} − p̅_{m,i} )^2 ≤ ( γ_m^L_2 )^2 },
where ‖·‖_2 denotes the L_2-norm. This set consists of pmfs whose L_2 distance to the nominal pmf p̅_m is smaller than the uncertainty level γ_m^L_2.
Next, the p-Wasserstein (p-𝒲) distance-based ambiguity set is defined as follows:
𝒫_m^p-𝒲 = { p_m | ∃ q_m,ij ≥ 0, ∀ i,j = 1, …, |Ω|, s.t. ∑_{i=1}^{|Ω|} ∑_{j=1}^{|Ω|} ‖ x_i − x_j ‖^p q_m,ij ≤ ( γ_m^p-𝒲 )^p;  ∑_{j=1}^{|Ω|} q_m,ij = p_m,i, ∀ i = 1, …, |Ω|;  ∑_{i=1}^{|Ω|} q_m,ij = p̅_m,j, ∀ j = 1, …, |Ω| },
where ‖·‖ is the base norm used for the p-𝒲 distance. This set consists of pmfs p_m whose p-𝒲 distance to the nominal distribution p̅_m is smaller than the uncertainty level γ_m^p-𝒲. Please refer to Online Supplement A.4 for details.
Several studies suggest various techniques for solving DRO problems regarding Wasserstein distance-related constraints <cit.>. For illustrative purposes, we present a case where such constraints are relatively easy to handle. When the input R.V. is defined on one-dimensional space (i.e., when p=1), the 1-𝒲 distance-based ambiguity set with the size parameter γ_m^1-𝒲 becomes
𝒫_m^1-𝒲 = {p_m | ∑_i=1^|Ω|-1( | ∑_j=1^i p_m,j - ∑_j=1^i p̅_m,j| ( x_i+1 - x_i ) ) ≤γ_m^1-𝒲},
where x_i < x_j, ∀ i<j. The detailed derivation is provided in Online Supplement A.4.
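Membership in the two discrepancy-based sets is straightforward to check numerically for a one-dimensional discrete input; the sketch below implements the L_2 condition and the cumulative form of the 1-𝒲 distance given above (function and variable names are illustrative).

```python
import numpy as np

def in_l2_set(p, p_nom, gamma):
    """Membership in the L2-norm ambiguity set around the nominal pmf."""
    return np.linalg.norm(p - p_nom) <= gamma

def wasserstein1_1d(p, p_nom, support):
    """1-Wasserstein distance between two pmfs on a common sorted 1-D support,
    via the cumulative-difference form above."""
    cdf_gap = np.abs(np.cumsum(p)[:-1] - np.cumsum(p_nom)[:-1])
    return float(np.sum(cdf_gap * np.diff(support)))

def in_w1_set(p, p_nom, support, gamma):
    return wasserstein1_1d(p, p_nom, support) <= gamma

# usage on a small support
support = np.array([0.0, 1.0, 2.0, 3.0])
p_nom = np.array([0.4, 0.3, 0.2, 0.1])
p = np.array([0.35, 0.35, 0.2, 0.1])
print(in_l2_set(p, p_nom, 0.1), in_w1_set(p, p_nom, support, 0.1))   # True True
```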
Thirdly, the ambiguity set of the same parametric distribution family is defined as
𝒫_m^Param = {p_m | p_m,i = ℙ( X_m = x_i), ∀ i = 1, …, |Ω|, where X_m ∼𝒟_m(θ_m), ∀θ_m ∈Θ_m },
where X_m denotes the R.V. for the mth input model, 𝒟(θ_m) is a certain member within a pre-specified distribution family with its parameter θ_m, and Θ_m is the set of candidate parameters. Here, the magnitude |Θ_m| of the range in which the parameter varies can be interpreted as the ambiguity set size parameter γ_m.
For example, if X_m is a binomial R.V. with parameters (N_m^Bin, p_m^Bin), the ambiguity set can be expressed as follows:
𝒫_m^Bin = { p_m | p_m,i = \binom{N_m^Bin}{x_i} (p_m^Bin)^{x_i} (1−p_m^Bin)^{N_m^Bin−x_i}, ∀ i = 1,…, |Ω|, ∀ (N_m^Bin, p_m^Bin) ∈ Θ_m } .
As another example, let us consider a discretized version of the Rayleigh distribution family. This set will be used in our case study that analyzes wind turbine simulator outputs. By letting the input R.V.'s probability mass be proportional to the probability density of Rayleigh distribution with an input shift, we get the following ambiguity set.
𝒫_m^Rayleigh = { p_m | p_m,i ∝ (x_i − Δ_m)/(σ_m^Rayleigh)^2 · exp( −(1/2) ( (x_i − Δ_m)/σ_m^Rayleigh )^2 ), ∀ i = 1, …, |Ω|, ∀ (σ_m^Rayleigh, Δ_m) ∈ Θ_m },
with the pair of Rayleigh scale parameter σ_m^Rayleigh and input shift Δ_m. We note that the choice of the parametric family is not limited to the examples here; one may instead select any other family depending on prior domain expertise.
Finally, we define the ambiguity set based on distribution moments as follows:
𝒫_m^Moment = { p_m |
( ∑_{i=1}^{|Ω|} p_m,i x_i − μ̅_m )^T Σ̅_m^{-1} ( ∑_{i=1}^{|Ω|} p_m,i x_i − μ̅_m ) ≤ γ_1,m ;
∑_{i=1}^{|Ω|} p_m,i ( x_i − μ̅_m )( x_i − μ̅_m )^T ≼ γ_2,m^ub Σ̅_m ;
∑_{i=1}^{|Ω|} p_m,i ( x_i − μ̅_m )( x_i − μ̅_m )^T ≽ γ_2,m^lb Σ̅_m + 2 ( ∑_{i=1}^{|Ω|} p_m,i x_i − μ̅_m )( ∑_{i=1}^{|Ω|} p_m,i x_i − μ̅_m )^T },
where μ̅_m and Σ̅_m are the mean vector and covariance matrix of the nominal input vector X̅_m, respectively, and γ_1,m, γ_2,m^lb, and γ_2,m^ub are positive scalar values determining the level of uncertainty. This set is an extension of the ambiguity set proposed in <cit.>. The original set in <cit.> bounds above the first and second-order moments, but we also include the third constraint to further limit the second moment to be bounded below. This lower bound is included because, in the problem under consideration in this study, both extreme instances can result in the largest estimator variance. Online Supplement A.5 discusses how we construct this new ambiguity set in detail.
With γ_1,m = 0 and γ_2,m^lb = γ_2,m^ub = 1, this ambiguity set consists of the distributions which have the same first and second moments as the nominal distribution. But, this does not imply that the ambiguity set includes the nominal distribution only.
§.§ Solving DR-Strat Problem
This section discusses how to solve the DR-strat problem in (<ref>). In our case, the variable to be optimized is the sampling vector n, which is the decision vector in the outer problem with regard to the inner maximization problem's objective value v(n). For calculating v(n) given the sampling vector n, one can either apply the iterative algorithm or use a nonlinear solver. In this study, we utilize open-source solvers with implementation details provided in Online Supplement B.
The challenge lies in solving the outer problem. One may consider enumerating all potential candidates n and choosing the one that generates the smallest v(n). This naive approach is, however, not computationally efficient, even when it is possible at all. The number of possible solutions, _{N_T−1}C_{N_T−K} by the formula for combinations with repetition, becomes extremely large (e.g., approximately 10^10 for N_T=100 and K=7) even with moderate N_T and K, making an exhaustive search computationally intractable.
In the literature, several algorithms have been presented for solving a bi-level optimization problem (e.g., using a single-level reduction or KKT conditions) <cit.>. Unfortunately, the objective function of our inner problem is a non-convex form of the pmf p, preventing us from employing existing techniques. Recent studies on robust decision-making show that evolutionary approaches, such as a genetic algorithm, can be used to solve analytically intractable problems, but they tend to heavily focus on exploitation.
We utilize BO, a probabilistic global optimization approach which is known to strike a balance between exploration and exploitation and to be effective in handling multiple local optima <cit.>. BO models the relationship between the decision vector and the objective value with a Gaussian process (GP), which is iteratively updated as new observations become available. Specifically, we start with an initial set 𝒟_sv of sampling vectors and the corresponding set 𝒱_inner = {v(n), ∀n∈𝒟_sv} of the objective values of the inner problem. Then, we model the relationship between the sampling vector and its corresponding objective value with the GP. A new candidate sampling vector n^new is determined by maximizing the acquisition function (ACQ). Among various ACQs, we utilize the following expected improvement over the best objective value found so far.
EI(n) = 𝔼[ max(v(n^best) - v(n), 0 ) ],
which can be calculated using the mean and
variance of the GP posterior at n.
This new sampling vector n^new and its objective value v(n^new) are added to 𝒟_sv and 𝒱_inner, respectively. If the new objective value is better (lower) than the current best, n^new replaces n^best. These steps are repeated until a stopping criterion is met. During the iteration, we allow the elements of n to have continuous values rather than restricting them to integers. When the iteration finally terminates, we round up the obtained n^best. Additional details are provided in Online Supplement B.
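A compact sketch of this BO loop is given below; it relies on scikit-learn's Gaussian process regressor and a random-candidate maximization of EI over the scaled simplex, which is a simplification of the actual procedure detailed in Online Supplement B (the exact kernel, candidate generation, and stopping rule used there may differ).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    """EI for minimizing the inner objective v(n)."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_minimize(v, K, n_total, n_init=10, n_iter=40, n_cand=500, seed=0):
    """BO search for the sampling vector: candidates live on the scaled simplex
    with at least one run per stratum; elements stay continuous during the
    search and are rounded only after the final iteration."""
    rng = np.random.default_rng(seed)
    sample = lambda size: rng.dirichlet(np.ones(K), size=size) * (n_total - K) + 1
    X = sample(n_init)
    y = np.array([v(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = sample(n_cand)
        mu, sd = gp.predict(cand, return_std=True)
        x_new = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
        X, y = np.vstack([X, x_new]), np.append(y, v(x_new))
    return np.round(X[np.argmin(y)]).astype(int)
```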
§ NUMERICAL EXPERIMENTS
This section assesses the effectiveness of the DR-strat method. Section <ref> describes a modified stratified sampling approach as the benchmark model that takes into account multiple input models without uncertainties. We implement the proposed methodology and compare it with the benchmark model in two experimental settings: a numerical example in Section <ref> and the case study involving wind turbine reliability in Section <ref>.
Online Supplement D.4 also provides additional experimental results with two-dimensional input.
§.§ Benchmark Model
Section <ref> has outlined the conventional stratified sampling method, which handles a single input model. To the best of our knowledge, no prior studies in stratified sampling consider multiple distributions with input uncertainty. For fair comparison, we use a modified approach as our benchmark model that ignores input uncertainty while handling multiple input models. Specifically, assuming complete information about input models, the benchmark model treats the nominal distributions as true input models. Similar to the proposed DR-strat, it uses a single reference distribution during the sampling phase and then estimates the response for each input model using (<ref>). With the goal of obtaining a sampling vector that minimizes the maximum estimator variance among multiple nominal input models, it formulates the following problem.
(Str-M)   min_n  max_{1 ≤ m ≤ M}  ∑_{k=1}^{K} (1/n_k) ( ω_{ref,k} ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p̅_{m,i}^2 / p_{ref,i} − ( ∑_{i ∈ I_k} 𝔼[ g(x_i) ] p̅_{m,i} )^2 )
s.t.   ∑_{k=1}^{K} n_k = N_T.
Please note that the objective term in the optimization problem (Str-M) does not have a maximum operator max_{p_m ∈ 𝒫_m}, which would reflect the uncertainty in the mth input model, unlike that in (DR-Str) in (<ref>). Let n^Str-M denote the optimal sampling vector of (Str-M), where M in the superscript implies the consideration of multiple input models, in contrast to n^Str in Section <ref>.
§.§ Toy Example
§.§.§ Experimental Setting.
Consider estimating the tail probability ℙ(Y(X)>l) with a one-dimensional input X. Mimicking the standard normal input R.V. in the example in <cit.>, we employ the following scaled binomial R.V.s X̅_1 and X̅_2 as the nominal distributions of two input models, with the domain of B̅_1 and B̅_2 as {23, 24, …, 57}.
X̅_1 = ( B̅_1 − 80×0.5 ) / √(80× 0.5^2), X̅_2 = ( B̅_2 − 80×0.5 ) / √(80× 0.5^2), where B̅_1 ∼ Bin(75, 0.55), B̅_2 ∼ Bin(85, 0.45).
For the output model, we use the same model in <cit.> and define the conditional output given a certain input to be Y|{X=x}∼𝒩(μ_Y(x), σ_Y(x)) with
μ_Y(x) = 0.95x^2 (1 + 0.5cos(10x) + 0.5cos(20x)),
σ_Y(x) = 1 + 0.7|x| + 0.4cos(x) + 0.3cos(14x).
To meet the target performance measure (the tail probability) values with the nominal distributions as ℙ(Y(X̅_1)>l) = 0.0428 and ℙ(Y(X̅_2)>l) = 0.0564, a threshold l is set to be 5.2.
For the input domain Ω = { x_i | x_i = (i-40)/√(20), ∀ i = 23, …, 57 }, we consider 7 strata S_k = {x_i | x_i = (i-40)/√(20), ∀ i = 23+5(k-1), …, 22+5k } for k = 1, …, 7. The total simulation budget N_T is 100. We use the mean of the two nominal distributions as the reference distribution for initial sampling ( i.e., ℙ( X_ref = x_i ) = ( ℙ( X̅_1 = x_i ) + ℙ( X̅_2 = x_i ) )/2, ∀ x_i ∈Ω ). We recommend choosing the reference distribution near the nominal distributions which the ambiguity sets are constructed around. Obtaining the optimal reference distribution remains a subject of our future study. Further, we assume that conditional output means {𝔼[g(x_i)]}_i =1^|Ω| are known as in (<ref>). In reality, we can estimate them via learning a meta-model with the results obtained from running the pilot stage simulations. To construct the four types of ambiguity sets, we utilize (<ref>), (<ref>), (<ref>), and (<ref>). The detailed settings for set size parameters are provided in Online Supplement C.1.
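For reproducibility, the toy output model and the input domain translate directly into Python; the only assumption beyond the text is the use of a pseudo-random generator for the conditional normal output.

```python
import numpy as np

def mu_Y(x):
    return 0.95 * x**2 * (1 + 0.5 * np.cos(10 * x) + 0.5 * np.cos(20 * x))

def sigma_Y(x):
    return 1 + 0.7 * np.abs(x) + 0.4 * np.cos(x) + 0.3 * np.cos(14 * x)

def g(x, l=5.2, rng=None):
    """One noisy evaluation of the indicator 1(Y(x) > l) in the toy example."""
    rng = np.random.default_rng() if rng is None else rng
    return float(rng.normal(mu_Y(x), sigma_Y(x)) > l)

# input domain and the 7 strata of the toy example (5 consecutive points each)
omega = np.array([(i - 40) / np.sqrt(20) for i in range(23, 58)])
strata = [np.arange(5 * k, 5 * (k + 1)) for k in range(7)]
```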
§.§.§ Instances within Ambiguity Sets.
We first depict plausible distributions in each ambiguity set in Figure <ref> for the two input models. Solid and dotted curves represent the pmfs of the nominal and plausible distributions, respectively. Depending on the underlying similarity measure, the plausible distributions show different shapes.
The pmfs in the ambiguity sets constructed with the two discrepancy measures, shown in Figures <ref> and <ref>, similarly show moderate spikes. They resemble many empirical distributions fitted from historical
data. Still, differences between the two sets exist. In the 1-𝒲 distance-based set, plausible distributions tend to differ from the nominal distribution at fewer input points, compared to those in the L_2-norm-based set. This is because the L_2-norm calculates the difference of pmfs individually at each input value in (<ref>), whereas the 1-𝒲 distance is calculated in a cumulative manner in (<ref>).
Next, Figure <ref> shows smooth pmfs in the ambiguity set of a parametric family.
This set type is desirable if historical data is to be fitted to a pre-specified parametric distribution. Finally, pmfs in the moment-based ambiguity set in Figure <ref> depict the most jagged shapes. These spiky pmfs appear because the set constraints restrict only the first two moments but not the distribution shape. As a given input's probability mass can vary greatly, feasible realizations may deviate dramatically from the nominal distribution pattern. Employing this set type may result in excessive conservatism when incorporating the input model uncertainty. As a result, the moment-based set should be used only when the true distribution possibly has an unusual pmf form.
§.§.§ Implementation Results.
We compare the sampling vectors from DR-strat and the benchmark method, using the same reference (sampling) distribution F_ref for both methods. Figure <ref> depicts n^DR-Str and n^Str-M for each ambiguity set design. We observe that n^Str-M focuses intensively on particular strata (allocating 74% of the total simulation budget to the 2nd, 3rd, and 5th strata) where both the input probability and conditional output variance Var[g(X)|X ∈ S_k ] are relatively high. On the other hand, n^DR-Str tends to be more stretched even to strata with low probability but high conditional output variance. In all set designs, n^DR-Str has a smaller maximum (max_{1 ≤ k ≤ K} n_k^DR-Str) and a larger minimum budget (min_{1 ≤ k ≤ K} n_k^DR-Str), compared to n^Str-M, indicating the conservative tendency of the proposed method.
The resulting sampling vector for each ambiguity set is further investigated in conjunction with the corresponding worst-case distribution in Figure <ref>.
The vector n^DR-Str from the two discrepancy-based sets exhibits similar patterns in Figures <ref> and <ref>. However, for the set with L_2-norm, more sampling budgets are allocated in the last stratum compared to the set with 1-𝒲 distance. This aligns with that the worst-case distribution of L_2-norm occurs in the first input model with the spike near X=4 as shown in Figure <ref>. On the contrary, the worst-case distribution of 1-𝒲 distance, shown in Figure <ref>, occurs in the second input model with the spike near X=-2.5. It drives a higher budget to the corresponding stratum as shown in Figure <ref>, compared to the allocation in L_2-norm.
Next, Figure <ref> demonstrates that the worst-case distributions of the parametric set are moved to the side where |X| is large in comparison to the nominal one. Consequently, n^DR-Str in Figure <ref> concentrates more on the strata near |X|=2 than n^Str-M. Lastly, the sampling vector of the moment-based set in Figure <ref> shows the most irregular form among the four. While the other three sampling vectors are bimodal, this vector has three modes, similar to the worst-case distributions shown in Figure <ref>. Still, its sampling vector is rather smooth, while the worst-case distributions show spikes at certain input points. This is because the inner maximization problem of DR-strat collectively accounts for other conceivable distributions that may have spikes at different input points.
We then compare the estimator variances of DR-strat and the benchmark method, using the derived sampling vectors.
Figure <ref> depicts the worst-case estimator variance max_{F_m ∈ ℱ_m} Var[μ̂^DR-Str(n; F_m) ] for m=1, 2, as well as their maximum value. The maximum worst-case estimator variance under DR-strat is substantially smaller than the benchmark method's for all four sets, demonstrating its robustness.
The relative performance of the two methods varies depending on the ambiguity set designs, with the moment-based set exhibiting the most prominent difference, followed by discrepancy-based sets. The magnitude of worst-case variance is considerably larger for the moment-based set than for other sets. Also, DR-strat performs robustly even in circumstances where the true models moderately deviate from the nominal distributions. Online Supplement D.1 showcases such scenarios of the true input model realizations.
As a final remark, observing the worst-case distributions as well as the realizations with moderate deviations could help determine a proper set design. The moment-based set design tends to consider unrealistically radical distributions and produce overly conservative results. On the other hand, when the true model does not represent the pattern in the same parametric family, a parametric family-based design may produce an overly optimistic set, and the benefit using DR-strat may diminish. The discrepancy-based set designs appear to provide a suitable balance.
In addition, we conduct sensitivity analysis to assess how the degree of input model uncertainty affects the estimation performance. The true model might deviate from the prediction more (or less) than expected, and the ambiguity set is too small (or large). When compared to the benchmark model, DR-strat leads to lower estimator variance, even when the degree of uncertainty is different from the initial belief, demonstrating its robustness. Online Supplement D.2 provides detailed experimental results and analysis.
§.§ Case Study - Wind Turbine Simulator
We conduct a case study with a wind turbine simulator. Given a wind condition, the wind turbine simulators—including Turbsim <cit.> and FAST <cit.>—generate load responses. Among several load responses, we consider the blade tip defection, which is crucial in analyzing wind turbine reliability <cit.>.
The simulation input is a 10-min average wind speed. We use the truncated Rayleigh distribution over a support [3,25] with a scale parameter 10√(2/π), as recommended in the international standard IEC61400-1 <cit.>. We discretize the domain of wind speed into several bins (intervals) in accordance with the widely used binning method in the literature on wind energy. In order to closely mimic the original continuous Rayleigh distribution, we consider a very small bin width of 0.1m/s. The two input models under consideration have ambiguity sets with the following nominal distributions:
p̅_1,i ∝ (x_i − 1.5)/(9^2 × 2/π) · exp( −(1/2) ( (x_i − 1.5)/(9√(2/π)) )^2 ),  p̅_2,i ∝ (x_i + 0.5)/(11^2 × 2/π) · exp( −(1/2) ( (x_i + 0.5)/(11√(2/π)) )^2 ),  ∀ i = 1, …, |Ω|,
with the domain Ω = {x_i | x_i = 3 + 0.1×(i-1), ∀ i = 1, 2, …, 220 }. We take the average of these two nominal distributions to get the reference distribution.
In estimating the exceedance probability ℙ(Y(X)>l), we set the threshold l at 3.15. Because there are 220 bins, each of which is very narrow, it is not appropriate to use the bins as strata directly. Instead, we group them and take K=22 equally partitioned strata with S_k = {x_i | x_i = 3 + (k-1) + 0.1×(i-1), ∀ i = 1, 2,…, 10}, for k = 1,…,22 and total budget of N_T = 1000 simulation runs. We employ ambiguity sets in (<ref>), (<ref>), (<ref>), and (<ref>) with the set parameters provided in Online Supplement C.2.
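To make the discretization concrete, the following sketch (our own illustration in Python, not code from the paper; variable names are ours) builds the 220-bin nominal pmfs defined above, averages them into the reference distribution, and groups the bins into the K = 22 strata:

import numpy as np

# Bin centers: 3.0, 3.1, ..., 24.9 m/s (220 narrow bins over the truncated support)
x = 3.0 + 0.1 * np.arange(220)

def nominal_pmf(shift, scale):
    sigma = scale * np.sqrt(2.0 / np.pi)                      # Rayleigh scale parameter
    w = (x + shift) / sigma**2 * np.exp(-0.5 * ((x + shift) / sigma) ** 2)
    return w / w.sum()                                        # normalize over the truncated support

p1 = nominal_pmf(shift=-1.5, scale=9.0)                       # first input model's nominal pmf
p2 = nominal_pmf(shift=+0.5, scale=11.0)                      # second input model's nominal pmf
p_ref = 0.5 * (p1 + p2)                                       # reference distribution (their average)

# Group the 220 narrow bins into K = 22 strata of 10 consecutive bins each
K, N_T = 22, 1000
strata = [np.arange(10 * k, 10 * (k + 1)) for k in range(K)]
stratum_prob = np.array([p_ref[idx].sum() for idx in strata])

A sampling vector, whether n^Str-M or n^DR-Str, then allocates the N_T = 1000 simulation runs across these 22 strata.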
Figure <ref> shows the budget allocation over strata. Both approaches allocate minimal budgets for strata with X ≤ 12 due to the rare exceedance events {Y(X)>l} in low wind speeds. The benchmark model's sampling vector n^Str-M peaks around a wind speed of roughly 17 m/s, where both the conditional output variance and the input probability are somewhat large.
The proposed method's sampling vector n^DR-Str exhibits distinct patterns under different ambiguity set designs. For discrepancy-based sets, DR-strat distributes large budgets in the high wind speed region, where exceedance events are more likely to occur and the conditional output variance is higher, as shown in Figures <ref> and <ref>. Similar patterns appear in the budget allocation for the parametric-family ambiguity set in Figure <ref>, although the budgets for the right tail are smaller than those obtained with the discrepancy-based ambiguity sets. The budget allocation for the moment-based ambiguity set tends to concentrate on mid-range wind speeds. This is because the conditional output variance is unimodal with its mode near X=20, and the probability of the mid-wind-speed region is pushed highest when solving the inner problem's worst-case estimator variance subject to the moment constraints.
Table <ref> compares the worst-case estimator variance in both input models. The ratio in the last row is calculated by dividing the benchmark model's maximum worst-case variance by that of the DR-strat. The DR-strat always yields a smaller worst-case variance, indicating its robustness. For the four different forms of ambiguity sets, we observe the various levels of variance reduction. The discrepancy-based sets, followed by the moment-based set, show the greatest reduction among the four ambiguity sets. Online Supplement D.3 provides more detailed experimental results, including the pmf instances within each ambiguity set and the worst-case distributions of the inner problem.
§ CONCLUSIONS
This paper proposes a robust stratified sampling method to address multiple uncertain input models. We formulate an optimization problem to minimize the maximum of worst-case estimator variances among candidate distributions based on the DRO framework. We solve the resulting bi-level optimization problem using BO to obtain the robust DR-strat sampling vector, which enables the efficient reuse of simulation results.
Our numerical experiments in two settings, a toy example and a wind turbine case study, suggest that the proposed approach performs robustly when the true model realization deviates from the initial belief. Compared to the benchmark method that does not incorporate uncertainty, it obtains lower estimator variance. We also offer a thorough analysis using four different kinds of ambiguity sets and discuss how they impact the estimation outcome and under what circumstances a particular set is preferable.
Future work could investigate other variance reduction techniques, such as importance sampling and antithetic sampling, in the presence of input uncertainty. We could also explore robust simulation with multi-fidelity models. For example, we could achieve an optimal balance between estimation accuracy and simulation budget by using high-fidelity models when necessary and supplementing with cheap, low-fidelity models as needed.
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2021R1A2C1094699 and NRF-2021R1A4A1031019) and in part by the U.S. National Science Foundation (CMMI-2226348 and IIS-1741166).
|
http://arxiv.org/abs/2306.12389v1
|
20230621172147
|
Automated Reminders Reduce Incarceration for Missed Court Dates: Evidence from a Text Message Experiment
|
[
"Alex Chohlas-Wood",
"Madison Coots",
"Joe Nudell",
"Julian Nyarko",
"Emma Brunskill",
"Todd Rogers",
"Sharad Goel"
] |
stat.AP
|
[
"stat.AP"
] |
Automated Reminders Reduce Incarceration for Missed Court Dates: Evidence from a Text Message Experiment
=================================================================================================
Millions of Americans
must attend mandatory court dates every year.
To boost appearance rates,
jurisdictions nationwide are increasingly turning to automated reminders,
but previous research offers mixed evidence
on their effectiveness.
In partnership with the Santa Clara County Public Defender Office,
we randomly assigned public defender clients to either
receive automated text message reminders (treatment)
or not receive
reminders (control).
We found the reminders reduced warrants issued for missed court dates by approximately
20%,
with of clients in the control condition issued a warrant
compared to of clients in the treatment condition.
We further found that incarceration from missed court dates dropped by a similar amount,
from in the control condition to in the treatment condition.
Our results illustrate the promise
of automated reminders to reduce the negative consequences of missing court.
§ INTRODUCTION
In the United States,
after a person is arrested and charged with a crime,
they are either held in jail as their case proceeds,
or they are released and asked to attend court of their own accord.
While many released defendants do indeed attend court—
as is legally required—
some fail to do so.
Non-appearance rates vary depending on jurisdiction and offense type,
ranging from less than 10% to as high as 50% <cit.>.
Failing to appear (FTA) at a required court date is a crime in 46 states <cit.>,
and non-appearance can prompt judges to issue a warrant
mandating the defendant's arrest (hereafter called a “bench warrant”) at their next
encounter with law enforcement.
Once arrested,
punishment can include time in jail
(e.g., California Code, PEN §§1320, 1320.5).
This incarceration
comes at a high cost to individuals and the communities they live in.
People in jail experience social and economic hardship,
including job loss, housing loss, family strain, and social stigma <cit.>.
These consequences may fall particularly hard on marginalized communities:
<cit.> show that pretrial incarceration is associated with reduced civic engagement (e.g., voting), especially for Black people,
and
<cit.> estimate that 62% of Black children in the U.S. have lived with an adult facing criminal charges—
nearly twice the rate observed for white children.
Past studies suggest that
many individuals miss their court dates
due to forgetfulness or confusion about the court system <cit.>.
As a result, court date reminders
are increasingly used
to help people remember and plan for
their upcoming court obligations.
Nearly half of counties nationwide
have either implemented or are planning to implement
court date reminders via text message, phone call, mail, or some other method <cit.>.
Yet research on the effects of automated text message reminders—one of the newest and most cost-effective options, now gaining popularity—is limited.[There is a larger literature on the effectiveness of
court date reminders by mail
or telephone call <cit.>,
and on the effectiveness of text message reminders to other participants in the criminal legal system
<cit.>.
For example, in an experiment in Arkansas, <cit.> found that text message reminders reduced missed probation and parole appointments by over 40%,
and <cit.> found that postcard reminders reduced non-appearance rates by up to 34%
in an experiment with
misdemeanor defendants in Nebraska.
See <cit.> and <cit.> for reviews of the relevant literature.
]
The literature that does exist paints an incomplete picture
on the efficacy of text message reminders to
increase court appearance and decrease the negative consequences of missing court (Table <ref>).
Two recent randomized controlled trials (RCTs) found significant and meaningful reductions in FTA rates from text message reminders
<cit.>;
two other RCTs found reductions in non-appearance rates,
though the estimates were not statistically significant <cit.>;
and one RCT estimated higher—but not statistically significant—warrant rates
among people who received a text message reminder <cit.>.
A study by <cit.> is one of the few to examine the impact of automated reminders on incarceration, finding no statistically significant effect of reminders on jail bookings.
To help resolve this ambiguity in the extent to which, if any, text message reminders increase court appearance and reduce incarceration,
we ran a pre-registered RCT with clients
of the Santa Clara County Public Defender Office (SCCPDO),
headquartered in San Jose, California.[
Our pre-registration is available at <https://aspredicted.org/SMY_N1R>.
Our original design included a second treatment arm, with alternative reminder text, but we later concluded that the two message variants were not meaningfully comparable and so shifted to showing participants only a single message type in our treatment condition.
We are currently running a new experiment that we believe is better designed to compare differing message templates, pre-registered at <https://aspredicted.org/FKC_XYY>.]
In addition to bolstering the general literature on text message court date reminders, our
study is the first to specifically examine the effect of reminders
for clients of a public defender.
Understanding the efficacy of reminders for this subpopulation is particularly important for ongoing policy debates,
as some have argued that mere representation by a public defender should be sufficient to ensure court appearance, obviating the need for reminders sent at additional cost to taxpayers.
Indeed, SCCPDO clients appear at their court appointments the vast majority of the time.
Yet there is still room for improvement,
with about 10–15% of scheduled court dates for SCCPDO clients ending in a bench warrant for non-appearance.
Given that individuals are often required to attend multiple court dates,
nearly one-third of SCCPDO clients received at least one bench warrant for missing court over the course of 2022.
Over half of these clients
were only facing misdemeanor charges,
and one out of every four
had no history of prior charges
on file with SCCPDO.
A single bench warrant for these clients
thus has the potential to quickly ramp up
an otherwise minimal brush
with the criminal legal system,
and underscores the importance of increasing appearance rates.
§ EXPERIMENT DESIGN
Our experiment consists of SCCPDO clients
who had court dates during two timespans in 2022 and 2023:
clients
between and ,
and clients between and .
To be eligible for inclusion in the experiment,
clients
must have had at least one court date in the timespans mentioned above,
had a cellphone number available in SCCPDO's case management system,
and had never previously received an automated reminder from SCCPDO.[
We briefly paused our experiment in between the two time periods while we updated our text message delivery system, as discussed in the Appendix.
Prior to the start of the experiment, as we developed our messaging system, we sent court date reminders to some SCCPDO clients; these clients were not eligible for inclusion in our experiment.
]
We focus on two outcome metrics:
(1)
issuance of a bench warrant for failure-to-appear (FTA)
at a client's first scheduled court date after assignment to treatment or control;
and (2)
whether a client was remanded to custody
on a bench warrant
at any point between assignment and the end of the experiment.
Judges often issue a bench warrant when a defendant does not attend a mandatory court date, though they can decline to do so if they believe the client has sufficient justification for not being present
(e.g., being sick with COVID).
Though we consider whether a bench warrant was issued at a client's first scheduled court date,
our findings are qualitatively similar if we look at other related outcomes
(e.g., bench warrant rates within 28 days of the first court date).
After a bench warrant has been issued,
a client may either voluntarily or involuntarily appear for a bench warrant hearing,
at which point a judge may choose to remand them to custody—i.e., hold them in jail for some time,
pending bail, later release, or case resolution.
For our second metric, we code the outcome as “1” for clients who were remanded at a bench warrant hearing where no new charges were brought,
and code the outcome as “0” for all other clients.
This metric directly corresponds to the target of our intervention—incarceration attributable to missed court dates.
However, our findings are qualitatively similar if we redefine the outcome to indicate whether a
client was remanded at any type of bench warrant hearing,
regardless of whether they were arrested on new charges.
The SCCPDO clients in our experiment were randomly assigned to treatment or control conditions with equal probability.
clients were assigned to the control condition,
which meant they did not receive any automated reminders;
and clients were assigned to the treatment condition,
which meant they received a series of automated reminders before their court date.
The covariate distribution was nearly identical
across experiment arms,
indicating that the randomization scheme worked as intended (Figure <ref>).
Prior to the first reminder, we sent an introductory text message to clients in the treatment condition explaining the reminder program
and explaining how to opt out, if desired.
Of the clients in the treatment arm, opted out of receiving text message reminders.
Reminders began seven days before each upcoming court date,
with another reminder three days before,
and a final reminder the day before the court date.
(See Figure <ref> for a diagram of these reminders.)
Clients were prompted to confirm their attendance by responding with “yes” or similar affirmations.
For example, our application recognized many possible confirmations, including “OK”, “Confirmed”, “I'll be there”, a thumbs-up emoji, and confirmations in Spanish
and Vietnamese.
If they confirmed,
we did not prompt for confirmation on subsequent reminders.
Translated versions of these reminders were provided in Spanish and Vietnamese for the of clients who had previously indicated a need for a translator in one of these languages (Figures <ref> and <ref>).
Ultimately, of clients in the treatment arm confirmed their attendance,
and among these clients,
received a bench warrant at their first court date;
in comparison, a bench warrant was issued for of clients who did not confirm their attendance.
This difference could be explained by the act of confirming,
self-selection,
or a combination thereof.
§ RESULTS
In the control condition,
of clients
received a bench warrant
at their first scheduled court date during our experiment window,
compared to for clients in the treatment condition.
This difference (, 95% CI )
corresponds to a reduction in bench warrant rates.
Similarly,
of clients in the control condition were remanded on a bench warrant at least once after assignment to our experiment,
compared to of clients in the treatment condition,
a difference (, 95% CI )
corresponding to a relative reduction of .
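For concreteness, this unadjusted comparison amounts to a difference in proportions with a normal-approximation confidence interval, roughly as in the sketch below; the counts are placeholders rather than the study's actual values, and the paper's exact interval construction may differ.

import numpy as np

# Hypothetical counts -- placeholders only, not the study's actual values
n_ctrl, warrants_ctrl = 2000, 250        # clients and bench warrants in control
n_trt, warrants_trt = 2000, 200          # clients and bench warrants in treatment

p_ctrl, p_trt = warrants_ctrl / n_ctrl, warrants_trt / n_trt
diff = p_trt - p_ctrl                                        # absolute change in warrant rate
se = np.sqrt(p_ctrl * (1 - p_ctrl) / n_ctrl + p_trt * (1 - p_trt) / n_trt)
ci = (diff - 1.96 * se, diff + 1.96 * se)                    # Wald-style 95% CI
relative_change = diff / p_ctrl                              # negative values indicate a reduction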
To improve the precision of our results, we also estimate the impact of text message reminders
via two logistic regression models—corresponding to each of our two outcomes of interest:
ℙ(Y_i=1) = logit^{-1}(α + β T_i + γ^T X_i),
where Y_i indicates one of our two outcomes (issuance of a bench warrant or remand to custody),
T_i indicates whether the client was in the treatment condition,
and X_i is a vector representing
a variety of observable features of the client, case,
and first scheduled court date.
In particular, X_i includes: demographic information (the client's age, race, whether the client identifies as male, whether the client prefers a language interpreter, whether the client's attorney indicated a possible mental health issue for the client, and the distance between the client's home address and the courthouse where their appearance is scheduled); client history (the number of bench warrants for non-appearance known to SCCPDO in the previous five years, the inverse number of court dates known to SCCPDO in the previous five years, the product of these two covariates, representing the client's bench warrant rate for failing to appear over the last five years, whether the client was “new”, i.e., whether the earliest court date known to the public defender was in the preceding year, and the number of years since the client's phone records were updated); case information (whether the most serious charge was classified as a misdemeanor or felony, and indicators for which of high-level charge categories were present, e.g., burglary or robbery); and court date information (the courthouse where the court date was scheduled, the day of week, the month, and a number indicating the court date was the n-th scheduled appointment on a case).
Under this model, the fitted coefficient β̂ is the estimated treatment effect.
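A minimal sketch of fitting such an adjusted model with statsmodels is shown below. The data frame and column names are hypothetical stand-ins for the covariates listed above (the actual analysis includes the full covariate set), so this illustrates the mechanics rather than reproducing the study's code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical client-level data frame; columns are illustrative only
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "bench_warrant": rng.integers(0, 2, n),     # outcome: 1 if a bench warrant was issued
    "treatment": rng.integers(0, 2, n),         # 1 if assigned to receive reminders
    "age": rng.integers(18, 70, n),
    "prior_fta_count": rng.poisson(0.5, n),
})

# The study's model adds the remaining covariates (race, courthouse, month, etc.) in the same way
fit = smf.logit("bench_warrant ~ treatment + age + prior_fta_count", data=df).fit(disp=0)

beta_hat = fit.params["treatment"]                     # estimated treatment effect (log-odds scale)
odds_ratio = np.exp(beta_hat)                          # odds ratio, treatment vs. control
or_ci = np.exp(fit.conf_int().loc["treatment"])        # 95% CI for the odds ratio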
Exponentiating β̂, we estimate that the odds ratio of being issued a bench warrant in treatment compared to control is
(SE ,
95% CI: )
(Table <ref>).
Based on a bench warrant rate of in the control condition, this estimate corresponds to a
decrease and
a relative reduction
in bench warrant rates attributable to receiving text message reminders.
Similarly, we estimate the odds ratio of being remanded to custody on a bench warrant in treatment versus control is
(SE ,
95% CI: ).
With a bench warrant incarceration rate of among clients in control,
this estimate corresponds to a decrease and a relative reduction in bench warrant incarceration attributable to receiving text message reminders.
§ CONCLUSION
Prisons and jails in the United States are overcrowded and underresourced <cit.>,
and arrests stemming from missed court dates are a significant contributor to incarceration.
As states attempt to reduce the number of people they incarcerate[For example, the Supreme Court of the United States ordered California to reduce the size of its prison population because overcrowding rendered prison conditions unconstitutional (see Brown v. Plata 2011, no. 09-12330).],
many are looking to court reminders as a way to increase court appearances and reduce jail time.
With an average marginal cost of roughly per defendant per case, our results suggest that a text message reminder program can be an effective and relatively inexpensive way to increase appearances and decrease incarceration.
Much remains unanswered about how to design behavioral nudges to be most effective at preventing bench warrants.
For example, the optimal timing and frequency of text message reminders is unclear.
It may be more effective to remind clients about court obligations more than a week in advance
or to do so more frequently in the week before.
The reminders we used also only briefly mentioned the possible consequences of missing court.
Perhaps other content—a stronger focus on the consequences, or a focus on possible supports—may be more effective at preventing bench warrants.
In addition, court date reminders may not help clients who are struggling with more fundamental barriers to court attendance, such as lack of transportation or childcare, or inability to take time off from work.
Other behavioral nudges, like transportation or financial assistance <cit.>, might further address these barriers and could complement court date reminders.
In addition to behavioral nudges,
policymakers
might consider alternate pathways to reducing pretrial incarceration.
For example, judges could issue a bench warrant for non-appearance only in the most egregious circumstances,
such as when there is clear evidence a defendant is unwilling to cooperate with the judicial process.
Some counties in California are working to improve appearance rates and other outcomes by
pairing defendants with case managers
that help to address underlying challenges, like housing instability and substance use, that their clients may be facing.
Ultimately,
while our work demonstrates the promise of behavioral nudges
for reducing incarceration, this approach is but one step
in more broadly reforming the criminal legal system.
§ ACKNOWLEDGEMENTS
We thank our partners at Santa Clara County, including Molly O'Neal, Sarah McCarthy, Terrence Charles, Sven Bouapha, Charlie Hendrickson, Srini Musunuri, and Angel Chan
for their efforts on this project.
Sophie Allen informed numerous aspects of this study through her fieldwork in Santa Clara County,
and we are grateful for her continued perspective.
Many other colleagues made valuable contributions to this work,
including:
Ro Encarnacion, Amelia Goodman, Dan Jenson, Nancy Mandujano, Ayesha Omarali;
as well as Tara Watford, Chris Correa, and others from The Bail Project.
This research was supported by grants from Stanford Impact Labs, Stanford Law School, the Harvard Data Science Initiative, and the Abdul Latif Jameel Poverty Action Lab.
apalike
§ APPENDIX
§ TREATMENT ASSIGNMENT
In the first phase of the experiment
(i.e., for clients with initial court dates between and ),
clients in the treatment condition received an
introductory text message up to seven days before their first court date reminder.
Occasionally, however,
court dates once eligible for reminders may have become ineligible
in this interim period
after the introductory message was sent
(e.g.,
because the attorney indicated they would appear on the client's behalf,
or because the recipient may have opted out of text message reminders immediately after their introductory message).
As a result,
of the clients in the treatment condition did not receive a reminder for their initially scheduled court date.
Nevertheless, we include in the treatment condition
all clients who received an introductory message,
regardless of whether or not a reminder was actually sent,
as the introductory text message could itself impact behavior.
In the second phase of the experiment
(i.e., for clients with initial court dates between and ),
we adjusted our protocol to address this issue,
sending the introductory message and the first court date reminder at the same time.
This change ensures that all clients in the treatment condition did in fact receive at least one reminder.
At the end of the first phase of the experiment, all clients in the first phase were transitioned to receive text message reminders for any future court dates, regardless of whether they were initially assigned to treatment or control.
As a result, our estimate of the effect of reminders on incarceration is likely conservative, since some clients in the control condition received reminders for part of the observation window.
This spillover does not affect our estimate of reminders on the issuance of bench warrants, since that outcome is measured at a client's first court date, before any transitioning occurred.
No clients in the second phase of the experiment were transitioned,
i.e., clients in the control condition in the second phase did not receive reminders during the observation period.
To confirm that our assignment procedure indeed randomly assigned clients to treatment or control, we examined balance plots (Figure <ref>).
Across a wide range of covariates, we see that the distributions are nearly identical between the two conditions, as expected.
§ SPANISH AND VIETNAMESE REMINDER EXAMPLES
|
http://arxiv.org/abs/2306.06803v1
|
20230611231129
|
Stable Remaster: Bridging the Gap Between Old Content and New Displays
|
[
"Nathan Paull",
"Shuvam Keshari",
"Yian Wong"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Stable Remaster: Bridging the Gap Between Old Content and New Displays
Nathan Paull
[email protected]
Shuvam Keshari
[email protected]
Yian Wong
[email protected]
July 31, 2023
========================================================================================================
The invention of modern displays has enhanced the viewer experience for all kinds of content, ranging from sports to movies in 8K high-definition resolution. However, older content developed for CRT or early plasma screen TVs has become outdated quickly and no longer meets current aspect ratio and resolution standards. In this paper, we explore whether diffusion models can be used to adapt old content to meet contemporary expectations. Specifically, we combine multiple independent computer vision capabilities, including Stable Diffusion, Content-Aware Scene Detection, Object Detection, and Key Point Matching, to expand the aspect ratios of old animated content such that the new content would be indistinguishable from the source material to a brand-new viewer. We were able to chain these tasks together in a way that generated reasonable outputs; however, future work is needed to improve the results and to extend the application to non-animated content.
§ INTRODUCTION
The way we perceive content has been revolutionized by the rapid progress of modern displays. From stunning 4K nature documentaries to sports broadcasts that provide such clarity that anyone can act as a referee, modern displays have enhanced the viewing experience. However, older content developed for CRT or early Plasma screen TVs has become outdated quickly and no longer meets current aspect ratio and resolution standards, resulting in a less enjoyable re-watching experience of beloved shows. Fortunately, we can solve this problem with the use of diffusion models to adapt old content to meet contemporary expectations.
Although Stable Diffusion<cit.> has gained popularity for image generation, its application to video often faces a challenge of temporal continuity. This poses a significant issue for aspect ratio expansion as the expanded content typically comprises static backgrounds. Our proposed project aims to address this limitation by utilizing the static spatial bias to govern the generation of new content and ensure that multiple instances of the same background region are not produced. By doing so, we can overcome the issue of temporal continuity in video generated using Stable Diffusion, thereby enhancing the quality and coherence of the output.
To achieve this goal, we will utilize a novel approach that combines Stable Diffusion with machine learning techniques. Specifically, we will use a machine learning model to identify and extract the static background regions from the input video. These regions will then be used as a reference to generate new content that preserves the temporal coherence of the video. Additionally, we will explore the use of other techniques such as motion estimation to further improve the quality of the output.
§ RELATED WORK
§.§ Modernizing Video Techniques
In recent years, there has been a growing interest in modernizing video through techniques such as super-resolution <cit.>, colorization <cit.>, and changing aspect ratio via outpainting <cit.>.
Super-resolution techniques aim to increase the resolution of videos beyond their original quality. Liu et al. <cit.> proposed a Bayesian approach to adaptive video super-resolution, which estimates motion, blur kernel, and noise level while reconstructing high-resolution frames. This approach achieved promising results that can adapt to various conditions. Shi et al. <cit.> used an efficient sub-pixel convolution layer for real-time super-resolution of 1080p videos on a single GPU, improving performance and reducing computational complexity compared to previous CNN-based methods. Kappeler et al. <cit.> proposed a CNN for video super-resolution that combines both spatial and temporal information, achieving state-of-the-art results with a relatively small video database for training.
Colorization is the task of coloring a grayscale video. Yatziv et al. <cit.> proposed a computationally efficient method for colorizing grayscale images and videos using luminance-weighted chrominance blending and fast intrinsic distance computations, resulting in high-quality outputs with reduced computational cost and user interaction. Zhang et al. <cit.> introduced an end-to-end network for video colorization that addresses the challenge of achieving temporal consistency while remaining faithful to the reference style. Their approach uses a recurrent framework that unifies semantic correspondence and color propagation steps, producing superior results compared to state-of-the-art methods.
Aspect ratio conversion is an evolving task, where older videos are adapted to fit on more modern devices with different aspect ratios. Guo et al. <cit.> proposed a method for converting video aspect ratios using a saliency model to determine regions of interest and applying a novel cropping and expanding mode to maintain visual quality and avoid distortion. Soe et al. <cit.> presented an idiom-based tool for video retargeting that allows users to control cropping and panning with selected cinematic idioms to achieve an optimal viewing experience on different platforms. However, these methods focus on cropping rather than generating new regions of the video to account for the aspect ratio change, and to our knowledge, this is one of the first works to use generative machine learning for this task.
§.§ Background Collapsing and Stitching
Background collapsing and stitching techniques are essential in image and video processing for tasks such as background removal, scene extension, panorama creation, and video retargeting. These techniques provide visually consistent and seamless results while maintaining the integrity of the foreground objects.
§.§.§ Background Collapsing
Background collapsing involves identifying and reducing redundant background regions in images or videos, allowing for the preservation of important foreground elements while resizing or retargeting. This technique often employs saliency maps or object detection algorithms to determine the importance of different regions in an image or video frame.
One prominent method for background collapsing is seam carving <cit.>, which involves removing or inserting pixels along optimal seams to resize images while maintaining the essential content. Seam carving has been further extended to videos by Rubinstein et al. <cit.>, who introduced a method for video retargeting that reduces or expands background regions while preserving the overall content and temporal coherence.
Another approach for background collapsing is patch-based image quilting <cit.>, which synthesizes textures by sampling patches from the input image and stitching them together in a visually consistent manner. This method has been extended to videos by Kwatra et al. <cit.>, who introduced an algorithm for video texture synthesis using a graph-based approach to synthesize temporally coherent video textures by stitching together small spatiotemporal patches from the input video.
§.§.§ Background Stitching
Background stitching techniques are used to combine parts of images or video frames to create a seamless and visually consistent output. These methods are essential for tasks such as panorama creation, video compositing, and background extension.
One common approach for background stitching in images is feature-based alignment <cit.>, which matches key points between overlapping regions of images and computes the transformation matrix to align and stitch the images together. This method has been further extended to videos for creating panoramic video sequences by Szeliski <cit.>.
Another technique for background stitching in videos is the content-aware video retargeting method proposed by Wang et al. <cit.>. This method employs a patch-based optimization approach to generate an output video with the desired target aspect ratio by stitching together patches from the input video while maintaining foreground object proportions and minimizing distortion.
Background collapsing and stitching techniques play a crucial role in image and video processing tasks. These techniques allow for the adaptation and enhancement of visual content while preserving the integrity of foreground objects and maintaining overall visual consistency. Advances in these techniques continue to improve the quality and versatility of image and video content for display on various devices and platforms.
§.§ Stable Diffusion and Video Generation
Image synthesis is a rapidly evolving field in computer vision, but it also has significant computational demands. Diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond by decomposing the image formation process into a sequential application of denoising autoencoders. Robin et al. <cit.> presented latent diffusion models, which significantly improve the training and sampling efficiency of denoising diffusion models without degrading their quality.
Dehan et al. <cit.> presented a method for video outpainting by converting a portrait (9:16) to landscape (16:9) video using background estimation, segmentation, inpainting, optical flow for temporal consistency, and image shifting to improve individual frame completions. They evaluated their method on the DAVIS and YouTube-VOS datasets. Ho et al. <cit.> introduced a method to generate high-definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models. Jin et al. <cit.> proposed a framework that can retarget the old video screen ratio to a wider target aspect ratio horizontally while preserving the quality of the objects using segmentation and inpainting networks. Relocating objects, especially those at the edges of the frames in the video, can be challenging.
These recent advancements in modernizing video techniques, stable diffusion, and video generation show promise in improving the visual quality and compatibility of older videos for display on modern devices.
§ METHODS
§.§ Dataset
Because our project orchestrates several existing computer vision capabilities, we did not require labeled data; instead, we needed an animated video with an old aspect ratio. For our primary dataset, we gathered videos from the animated television show 'Avatar: The Last Airbender'. We chose this show because it has an aspect ratio of 4:3 and is popular enough to be recognized by many individuals. The image below shows how a single frame in the 4:3 aspect ratio would be displayed on many modern devices; the large black bars on the sides of the content obviously detract from the user's watching experience.
These large black bars are the areas we seek to fill in with content that does not distract from the existing frame but instead creates a more immersive experience. We plan to verify the accuracy/quality of the final video file generated by manual inspection as our goal is to assess human experience and immersion. As such if anything seems out of place or violates environmental rules set by the animator it would be deemed incorrect.
§.§ Overview
For this project, we have created a pipeline for expanding the aspect ratio of a given video, which we have divided into five tasks, shown in Figure 2.
As the image shows, the pipeline stages are Scene Segmentation, Foreground Masking, Background Stitching, Background Outpainting, and Frame Resampling. We further break Background Outpainting into two sections below (Outpaint Region Selection and Background Outpainting) even though they are treated as one task in our code. This is partly to compare our methods against related work and partly because the two tasks are semantically different, even if it does not make sense to separate them in the pipeline itself.
We felt these were the key tasks given the following assumptions about animated content: the background within a scene is constant and maintains object permanence, camera motions generally follow affine transformations, and content at the edges of a frame is background content. These assumptions allow us to streamline computation, letting us process scenes independently, generate only background pixels, and stitch backgrounds together simply. There are likely some flaws in these assumptions, or additional assumptions that should accompany them, which we discuss in the results section as we identify shortcomings in our methodology.
§.§ Scene Identification and Segmentation
For scene identification and segmentation, we chose to use PySceneDetect's Python API as it could perform the scene segmentation and save output scenes to mp4 rather than just returning frame indexes within the parent mp4 file. We felt that this was key in any future work around parallelization as all tasks after this one in the pipeline can be run on scenes in parallel, sharply decreasing the overall runtime for expanding an episode.
When PySceneDetect is compared to alternatives such as SceneCutExtractor, MatLabSceneDetection, and writing our own scene detection using functions within the OpenCV Python API, our selection was quickly narrowed down to PySceneDetect and SceneCutExtractor. First, we felt the Python API offered by both of these libraries was crucial to making the pipeline easy to use and edit. Additionally, using a prepackaged library gave us much more time to work on the pipeline itself rather than focusing on a single task. In our research, PySceneDetect was preferable to SceneCutExtractor because of its ability to save scenes to mp4, while SceneCutExtractor saves JSON or CSV files with frame indexes and evaluations. While this may provide more flexibility, we found the ease of use of PySceneDetect much more attractive.
Within PySceneDetect, we used content-aware scene detection, which uses changes in the HSV color space to determine scene boundaries. PySceneDetect then uses ffmpeg to perform the scene cuts.
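A minimal sketch of this step with PySceneDetect's high-level API (recent versions; the exact calls differ in older releases, and the file path and threshold are placeholders) looks roughly like this:

from scenedetect import detect, ContentDetector, split_video_ffmpeg

# Content-aware detection: flags a cut when the HSV-space frame difference exceeds a threshold
scene_list = detect("episode.mp4", ContentDetector(threshold=27.0))

# Write each detected scene out as its own mp4 (requires ffmpeg on the system path)
split_video_ffmpeg("episode.mp4", scene_list)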
§.§ Foreground Masking
In this section of the pipeline, we make one further assumption: that objects in the foreground are first rendered in full by the animator, in addition to the assumption that objects in the background are permanent throughout the duration of a scene. This assumption motivates foreground masking, so that we analyze only the background when generating pixels.
To perform this masking, we sought a network that could recognize objects in the foreground, find bounding boxes for these objects (or masks, if possible), and perform these two tasks at a relatively quick speed. We decided that bounding boxes were a minimum requirement, as anything less would not allow us to run an algorithm such as GrabCut; however, if the method could generate masks on its own, the need for GrabCut would be removed. These requirements led us to select Mask-RCNN, an object detection DNN built on a base ResNet structure. Mask-RCNN not only detects a vast array of objects found in the COCO dataset but additionally generates masks for these objects and can run at a speed of at least 5 frames per second on most GPUs. In our deployment on an Nvidia 2070 Super, we achieved a speed of 7 frames per second.
With the selection of Mask-RCNN, all we had to do was to combine the masks of found foreground objects into a total mask that would separate the background from the foreground for the next stage in the pipeline, background stitching.
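As an illustration of this merging step, the sketch below uses torchvision's pre-trained COCO Mask-RCNN (an assumption on our part about the exact model builder; the weights argument varies across torchvision versions) to union all detected object masks in a frame into one foreground mask:

import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pre-trained COCO Mask-RCNN from torchvision
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def foreground_mask(frame_rgb, score_thresh=0.5):
    """Union of all detected object masks in one frame (True = foreground pixel)."""
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    keep = out["scores"] > score_thresh
    if not keep.any():
        return torch.zeros(frame_rgb.shape[:2], dtype=torch.bool)
    masks = out["masks"][keep, 0] > 0.5          # [N, H, W] boolean per-object masks
    return masks.any(dim=0)                      # single combined foreground mask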
§.§ Background Stitching
The goal of background stitching was to create a total background for the scene. This is motivated by a common technique in film for creating semi-transparent characters: two shots are filmed sequentially, one with the character in the frame and one without, which allows editors to fill in the missing information with real footage instead of generated content. Our goal was similar: to avoid using generated pixels wherever possible. This is because generating coherent pixels is computationally expensive and because we want to maintain pixels that exist in the original animation. If we generate the legs of a table on the boundary of a scene only for the camera to pan toward that table and have the legs disappear behind the original frame, we would immediately break immersion for the viewer.
This motivation led us to use keypoint matching and affine transformation estimation. Keypoint matching and transformation estimation are commonly used when generating panoramic photos from a set of distinct images, so we followed this method, choosing SIFT for keypoint detection and description. With these SIFT keypoints, we could then find a set of good matches to determine the affine transformations necessary for aligning the set of images.
Once we achieved a complete background we could then accurately determine which pixels can be sampled from information generated by the original animators and which pixels would need to be generated.
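A rough sketch of this keypoint-matching and affine-estimation step with OpenCV is shown below; the ratio-test threshold and function choices are illustrative rather than the project's exact settings:

import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def estimate_affine(frame_gray, background_gray):
    """Estimate the 2x3 affine transform mapping a frame into the total background."""
    kp1, des1 = sift.detectAndCompute(frame_gray, None)
    kp2, des2 = sift.detectAndCompute(background_gray, None)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

The resulting transform lets new frame pixels be warped into the total background with cv2.warpAffine wherever the total mask is still empty.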
§.§ Outpaint Region Selection
Alongside the total background that we have constructed, we similarly construct a total mask. This mask will contain information regarding which parts of the background have been filled with information derived from frames within the scene and which parts of the background lack any information. This total mask allows us to determine which regions of the total background will require generated pixels.
Pixels are only generated if they fall within the bounds of the new frames. For each frame, we sample this total mask within the region of the new frame to determine whether new pixels need to be generated. If no new pixels are needed, we simply sample the total background and return. If new pixels are needed, we use this sampled mask to inform the outpainting, then add the generated pixels to the total background and update the total mask so that these pixels will not be generated again. In doing this, we decrease the number of pixels that must be generated.
§.§ Background Outpainting
For the task of outpainting, we use Stable Diffusion<cit.> as implemented in the diffusers Python library<cit.>. This library provides pre-trained models that can be used through a simple Python API. Specifically, we chose the Stable Diffusion Inpainting Pipeline offered by this library, as it allows the user to supply a mask indicating where pixels should be generated as well as a prompt describing the generated pixels. While optimization of the prompt would likely improve results, we used the generic prompt 'animated background' for all generated pixels in the hope that it would create reasonable output. These generated pixels are then added to the total background produced by background stitching, which adjusts the Outpaint Region selected for the next frame within the same scene.
We found this step of the pipeline to be the most time-consuming, taking up to 40 seconds per frame to generate pixels. This is partly why the previous steps are necessary: without them, a 20-minute episode at 30 frames per second would take nearly 400 hours to process. While we cannot provide a tighter upper bound, the actual runtime is much lower because no duplicate pixels are generated; longer scenes therefore yield shorter runtimes per frame. This could likely be further optimized by choosing the frames that experience large translation transformations relative to the first frame along a set of key directions.
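In essence, the call into diffusers looks like the sketch below; the checkpoint name, image sizes, and preprocessing are illustrative, and the library's API has shifted across versions:

import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",       # one publicly available inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# image: the current crop of the total background at the new aspect ratio (PIL.Image, e.g. 512x512)
# mask_image: white where pixels must be generated, black where original pixels already exist
result = pipe(prompt="animated background", image=image, mask_image=mask_image).images[0]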
§.§ Frame Resampling
After we have generated all necessary pixels to fill in gaps within the background the task of frame resampling is rather simple. We begin by using the affine transformation found in the background stitching step to transform our sampling region. We then simply select all pixels in the total background that are within this region. Following this we calculate the inverse of this affine transformation and use this inverse to transform our sampled pixels back into a viewable frame. We continue this for each frame until the scene has been completely reconstructed. We can now save these frames in a new scene and then concatenate all scenes together to create the reconstructed episode.
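In code, this resampling step reduces to inverting the stored affine transform and warping the stitched background back into frame coordinates, roughly as follows (border handling and color conversion are glossed over):

import cv2

def resample_frame(total_background, M, out_w, out_h):
    """Warp the stitched background back into a single output frame.

    M is the 2x3 affine transform that maps frame coordinates into the
    total-background coordinates, so its inverse maps the background
    back into a viewable frame of size (out_w, out_h).
    """
    M_inv = cv2.invertAffineTransform(M)
    return cv2.warpAffine(total_background, M_inv, (out_w, out_h))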
§.§ Experiments and Results
Our primary stated goal was to create a pipeline that could expand the aspect ratio of old animated content without violating the temporal coherency of the content. We believe that overall we were successful in this endeavor with the results shown below.
In both of the figures above we can see some similar results. The easiest to see are the shortcomings: Figure 4 shows black spots on the left side of the frame, and Figure 5 adds Christmas trees and distorted penguins. Additionally, both images have some color distortion, and both possess a vertical gray line on the right-hand side of the image. We must also admit that the color distortion is not constant throughout the scene, which can be seen in the gif results linked in the Demo section below. All of these shortcomings bring us short of the goal of perfectly adapting old animated content for modern screens with modern aspect ratios.
However, mentioning only these shortcomings would ignore the parts of the expansion that the pipeline gets correct. If you look specifically at the floor in both figures, you can see how the scene is properly expanded. In the second figure, the mounds of snow are continued to the boundaries of the frame quite well, showing that while this method was not completely successful, it is quite close. In the Future Work section below, we discuss what we believe can be done to address these shortcomings and achieve the stated goal.
While we can see the effects of Stable Diffusion on the figures above, we would like to specifically discuss some results from individual steps earlier in the pipeline so that we can analyze which steps are creating the error we see in the final product. We will first discuss the Foreground Masking section of the pipeline.
As we can see in the figure above, Mask-RCNN seems to detect some false objects in the animated domain. Additionally, we can see in the figure below how Mask-RCNN can also fail to detect objects when animators break body continuity to better display character motion.
Lastly, we can see how some of the object boundaries are slightly off. While our method for background concatenation can help limit some of this noise if several frames occur at similar camera positions, it is still possible for this noise to persist and affect certain frames. Overall, these three results show that Mask-RCNN does not perform perfectly, and adjustment or replacement of this model could lead to improved results at the output of the pipeline.
Finally, we would like to cover the results of the Background Stitching module. This module is what we believe to be the source of most of the shortcomings, especially the shortcomings related to color change around the edges of the image.
We believe that these off-color effects are due to the affine transformations performed in this step. In addition to these color effects, we also experience issues with keypoint matching on repetitive backgrounds, where keypoints are harder to match uniquely. We believe that this and all of the other issues mentioned above led to the most lackluster results, but we are still excited by what was accomplished by this pipeline.
§.§ Demo
Our code is available as a GitHub repository named https://github.com/naston/StableRemasterStable Remaster. This Github provides instructions on how to set up the environment and run the demonstration code. The environment can be set up by creating an anaconda environment from the environment.yml file or by manually installing the libraries listed in that file. There is also the need to install ffmpeg for the full pipeline demonstration but this is not necessary for the scene-based demonstration. To run the full demonstration use the pipeline_demo.py file. To run the scene-based demonstration use the scene_demo.py file.
To view some example output from the scene_demo.py demonstration please visit the following https://imgur.com/a/dPxylzYlink. Here we have two gifs showing expanded scenes as well as images of the original scenes.
§ DISCUSSION
§.§ Team Work Assignment
§.§.§ Nathan Paull
At the start of the project, I conducted much of the initial research of related work regarding generated video and image content.
For the proposal, I wrote the initial problem statement and the proposed methodology. For the final report, I wrote the updated methodology ...
Regarding implementation, I developed the overall design for the project, creating the pipeline breakdown and finding libraries that can be used throughout the pipeline. This led to the creation of the environment files and many demonstration scripts as well.
Additionally, I wrote much of the code that interacted with scene segmentation and object masking code.
Lastly, I organized our individual Python notebooks into .py files and set up the demo, including the demo files.
§.§.§ Shuvam Keshari
At the beginning of the project, I reviewed papers on Stable diffusion models and how they have been applied in the domain of image generation along with helping set up the Github code repository. For the proposal, I wrote the introduction and the related work section and how we could use and apply some of the previous work to our problem statement. Regarding implementation, I added the code for keypoint matching and background stitching to Nathan's pipeline by researching and testing various openCV techniques on our dataset. As a collaborator on the code repository, I reviewed and identified possible bugs in the code outputs for troubleshooting as well and brought this up during regular team discussions. For the final report, I added the generated image outputs for demonstration for every step of the pipeline and created a flowchart as well. Lastly, I added the abstract and updated the related work section on Stable diffusion as done previously, and reviewed the entire paper for coherence and flow.
§.§.§ Yian Wong
In the initial stages of the project, I conducted a literature review, focusing on modernizing video techniques and stable diffusion for video generation. I later focused more on background collapsing techniques and their applications in image and video generation by researching and testing various techniques on our dataset. This helped our team to gain a better understanding of the state-of-the-art methods and informed our project's direction. I also collaborated with my teammates to troubleshoot any issues that arose during the development process. I implemented various background collapsing techniques across many frames to generate a coherent background for a given scene. For the proposal, I contributed to the related work section by summarizing the key findings from the literature review, highlighting the most relevant research to our project. Throughout the project, I actively participated in team discussions, providing input on the project's direction and offering feedback on the work of my teammates.
§.§ Future Work
There is a large amount of work that can be done to improve upon what we have accomplished in this paper. To start, this pipeline has been designed in a modular manner so that future implementations can switch out components for more optimized versions. A primary example would be to replace Mask-RCNN, which performs object detection, with a model that performs the similar task of foreground detection. As discussed above, Stable Diffusion accounts for a majority of our compute time, taking up to 40 seconds per frame. This is dramatically slower than all other stages in the pipeline and could be swapped out in favor of a faster algorithm. However, there is also room for fine-tuning models like Mask-RCNN and Stable Diffusion for use in the animated domain. This would likely yield much better results, as animators often break the continuity of a character's body in order to accurately display motion.
As for further replacement of pipeline steps, we believe our implementation of background stitching needs to be replaced, as it relies on SIFT keypoint matching, which fails on repetitive backgrounds like brickwork. This specifically led to significant issues with scenes where the camera panned across a semi-repetitive background. Similarly, there is work to be done to limit the noisy effects of affine transformations on the output of the pipeline. This could even include the addition of a de-noising step at the end of the pipeline to reduce the effects of the many affine transformations. Some similar tasks to this would include
Additional steps in this pipeline could include tasks such as resolution scaling. We believe resolution scaling could provide two valuable capabilities. The first is to update resolutions to match modern displays in the same way we update the aspect ratio; many displays work best with HD resolutions that many old animations do not have. Second, we believe downsampling could give models like Stable Diffusion a simpler task, requiring the generation of fewer pixels, which can then be scaled up to match a desired resolution. This would do a lot to relieve the computation bottleneck that we experience when running Stable Diffusion.
Another way to relax this computation constraint would be to implement task parallelization. As each scene is treated independently, no two scenes need to be computed sequentially. By running certain stages of the pipeline on scenes in parallel, one could drastically cut down the runtime of the entire pipeline. This, however, assumes that certain scenes do not share a background; this assumption could be explored to determine which scenes would benefit from combined computation, allowing for temporal coherence across scenes and not just within them.
Lastly, we believe there is future work to be done on items that lie between the categories of background and object, such as fire, lightning, and rain. These items should move with each frame, violating our assumption of a static background. Another assumption that can be violated is that of affine camera transformations. While neither of these assumptions is violated often, addressing them is a valuable direction for making this pipeline more robust.
§.§ Conclusion
In this paper, we explore combining multiple independent computer vision tasks to expand the aspect ratios of old animated content such that the new content would be indistinguishable from the source material to a brand-new viewer. These existing capabilities include Stable Diffusion, Content-Aware Scene Detection, Object Detection, and Key Point Matching.
While we successfully chained these tasks together in a way that generated reasonable output, we did not fully achieve this goal. However, we still feel that the pipeline we have constructed serves as a strong foundation for future work, allowing for the introduction of new stages in the process or the replacement of old ones.
|
http://arxiv.org/abs/2306.02289v2
|
20230604073418
|
Evaluating the Impact of Community Oversight for Managing Mobile Privacy and Security
|
[
"Mamtaj Akter",
"Madiha Tabassum",
"Nazmus Sakib Miazi",
"Leena Alghamdi",
"Jess Kropczynski",
"Pamela Wisniewski",
"Heather Lipford"
] |
cs.HC
|
[
"cs.HC"
] |
Evaluating the Impact of Community Oversight for Managing Mobile
Privacy and Security
Mamtaj Akter
Vanderbilt University
Madiha Tabassum
Northeastern University
Nazmus Sakib Miazi
Northeastern University
Leena Alghamdi
University of Central Florida
Jess Kropczynski
University of Cincinnati
Pamela J. Wisniewski
Vanderbilt University
Heather Lipford
University of North Carolina, Charlotte
============================================================================================================================================================================================================================================================================================================================
Mobile privacy and security can be a collaborative process where individuals seek advice and help from their trusted communities. To support such collective privacy and security management, we developed a mobile app for Community Oversight of Privacy and Security ("CO-oPS") that allows community members to review one another's apps installed and permissions granted to provide feedback. We conducted a four-week-long field study with 22 communities (101 participants) of friends, families, or co-workers who installed the CO-oPS app on their phones. Measures of transparency, trust, and awareness of one another's mobile privacy and security behaviors, along with individual and community participation in mobile privacy and security co-management, increased from pre- to post-study. Interview findings confirmed that the app features supported collective considerations of apps and permissions. However, participants expressed a range of concerns regarding having community members with different levels of technical expertise and knowledge regarding mobile privacy and security that can impact motivation to participate and perform oversight. Our study demonstrates the potential and challenges of community oversight mechanisms to support communities to co-manage mobile privacy and security.
§ INTRODUCTION
The majority of U.S. adults own smartphones <cit.>, and nearly half of them have reported downloading various third-party apps <cit.>. These mobile apps often require access to users' sensitive information, such as contacts, emails, location, photos, calendars, and even browser history <cit.>. Most apps request users' permission before accessing any information or resources. Yet users may have difficulty understanding these permission requests and the implications of granting them <cit.>. As a result, users struggle to make permission decisions or grant permission by mistake <cit.>. Even worse, there are ways for more malicious apps to circumvent the permissions system and secretly access users' system resources and private information without consent <cit.>. Ironically, a recent Pew Research study reported that most U.S. adults are concerned about how their personal information is being used by these third-party apps, as respondents felt they lacked control over their mobile privacy <cit.>.
This lack of understanding leads users to seek advice and guidance from others <cit.>. Several studies have demonstrated that users often learn about privacy and security from their social network, which influences them to change their own digital privacy and security behavior <cit.>. As such, networked privacy researchers acknowledged the importance of these social processes for managing individual and collective digital privacy and security <cit.>. Despite this prior work, few mechanisms to support these social processes have been developed and evaluated. In this paper, we explore community oversight, where trusted groups of users help one another manage mobile privacy and security.
In our previous work, we proposed a theoretical framework of community oversight <cit.>, describing how the concepts of transparency, awareness, trust, individual and community participation are needed within a particular mechanism. We have now implemented a mobile app, Community Oversight of Privacy and Security (CO-oPS), to explore these concepts in use and support a collaborative approach to mobile privacy and security management. The CO-oPS app allows individuals in a community to review one another's apps installed and permissions granted and provide direct feedback to one another.
In this paper, we present a field study of the CO-oPS app. Our aim was to understand the impact of using the app on participants' mobile app decisions and perceptions. We conducted a 4-week mixed-method longitudinal field study with 101 people in 22 self-formed groups. Each group installed, used, and evaluated the CO-oPS app, provided oversight to one another on their mobile app privacy decisions, and shared experiences through weekly surveys and optional interviews. We describe how users interacted within the app and the changes in their mobile app permission decisions after using the CO-oPS app. We also examine how participants' perceptions regarding co-managing their mobile privacy and security within their communities change throughout the study.
To do so, we measured constructs derived from our community oversight model <cit.>: perceptions of transparency, awareness, trust, and individual and community participation within the CO-oPS app. We tested for pre-post study differences and detected statistically significant increases for all of these measures. Qualitative findings further explain these perceptions and identify co-management concerns: feelings that one's own or others' privacy was being invaded, lack of trust in less knowledgeable community members, lack of close relationships, and communities' inadequate tech expertise. We also found that using the CO-oPS app helped participants increase their communities' collective capacity to address their mobile privacy and security concerns.
In sum, our study makes a unique contribution to the SOUPS research community by investigating through a field study how a community oversight mechanism can help increase participants' collective capacity to support one another in co-managing mobile privacy and security. Specifically, we make the following research contributions: 1) Through a longitudinal field study, we describe the benefits and challenges of using a community oversight app to co-manage mobile privacy and security; 2) We provide empirical evidence of the potential for community oversight to increase users' awareness of mobile privacy issues, leading to individual changes in decisions and community exchange of knowledge; and 3) We present considerations and design-based recommendations for features to support communities in providing oversight to one another.
§ BACKGROUND
Privacy and Security Management in Mobile Applications
Mobile applications often access sensitive information and share users' personal data with third parties <cit.>. As such, substantial work has been done to investigate and support end users in managing mobile app privacy and security. Researchers have looked at the existing privacy awareness and management approaches (e.g., app privacy permission prompts, privacy policies, etc.) and found that such mechanisms often fail to provide users with awareness and knowledge of privacy and security risks <cit.>. Moreover, users often do not understand mobile app permission dialogues <cit.> and are over-exposed to such requests <cit.>. Researchers have proposed several technology-based solutions to increase awareness and limit potential risks associated with third-party mobile apps <cit.>. For example, Sadeghi et al. suggested evaluating app permissions against risks and automatically granting/revoking permissions on users' behalf <cit.>. Others proposed mechanisms to inform users about app privacy risks, recommend secure choices, and nudge them to review/revise permissions <cit.>. Others suggested tools to allow users to review data before sending it to the server, visualize data flow <cit.>, and replace personal information with mock data without affecting app functionality <cit.>.
While this body of research has emphasized enhancements to technology to help individuals manage privacy and security while using mobile applications, none looked at how knowledge and influence from social groups help in individual privacy and security decision-making. Our research focuses on assessing and supporting these social processes involved in privacy and security management.
Community-based Approaches for Privacy and Security
In general, research shows that people frequently take collaborative approaches to make privacy and security decisions <cit.>, and users often rely on social factors while making such decisions. Chin et al. discovered that smartphone users are more likely to consider social signals, such as reviews and ratings from other users, rather than privacy indicators regarding Android permissions when making app use decisions <cit.>. Das et al. demonstrated that social factors (e.g., community adoption of security features) could increase individuals' security awareness and encourage them to adopt security features <cit.>. As such, researchers have proposed using social and community influence to assist individuals in making decisions about digital privacy and security <cit.>. Squicciarini et al. developed CoPE, a tool to support users in collaboratively managing their shared images in social network sites <cit.>.
Past research has also examined privacy management approaches involving one party performing oversight for another. Organizations adopt mobile device management (MDMs) systems to remotely control and secure the data stored in employees' mobile devices <cit.>. Parents use adolescent online safety apps to monitor and protect teens by restricting their online behavior <cit.>. The results from these studies suggest that a collaborative approach, rather than one-sided control, could benefit both parties and lead to more privacy-preserving outcomes. Finally, several studies leveraged crowdsourcing to use mass user data to support individual users in making improved mobile privacy and security decisions <cit.>. For instance, Ismail et al. utilized crowdsourcing to recommend permissions that can be disabled for enhanced privacy without sacrificing usability <cit.>. However, these approaches showed little consideration for the trustworthiness of information from a random crowd. On the other hand, researchers found that users are more willing to adopt and share privacy advice from a trusted community <cit.>, and they often communicate first with friends and family to learn about potential privacy and security threats and mitigation strategies <cit.>.
In summary, our work builds upon the past literature in social cybersecurity, MDMs, parental control apps, and crowdsourcing to implement and evaluate a novel model of community-based oversight (i.e., self-selected groups) for mobile privacy and security through a large-scale field study. Since the network structure of oversight (e.g., individual for MDMs, many-to-one for crowdsourced recommendations, and unidirectional from parent to child for parental control) in these prior works is vastly different than ours, this new model of community oversight warrants deeper empirical investigation.
In <cit.>, we were the first to propose a novel framework of community oversight for helping people manage their mobile privacy and security together. Through a participatory design study, we identified mechanisms that would allow users to support others in the community in making privacy and security decisions regarding mobile app permissions.
We also designed a prototype mobile app that allows users to collaborate and share information with people they know to help make mobile app permissions decisions <cit.>. While this body of our prior studies provides a valuable basis for the design of community-oriented privacy and security management systems, they only present a theoretical view of users' preferences in community decision-making. In contrast, this study contributes to the literature by providing an in-situ evaluation of how trusted groups of people use and interact with different community-oriented features to collaboratively manage their mobile privacy and security.
§ DESIGN OF THE CO-OPS APP
We developed the Community Oversight of Privacy and Security (CO-oPS) Android app <cit.> based on the model of community oversight proposed in our prior work <cit.>. This model outlines the need for community oversight mechanisms to support individual and community participation through awareness and transparency features that build trust between community members. Thus, our CO-oPS app design includes four key features: 1) People page, 2) Discovery, 3) Permissions, and 4) Community Feed.
The Discovery page allows community members to review one another’s installed apps (Figure-<ref>(b)) and the list of permissions granted or denied to each app (Figure-<ref>(c)). Users can also see how many community members have the same app installed or the same permission granted. To help users change app permissions easily, the Permissions page provides a “SETTINGS” link that forwards users to Android Settings to modify app permissions. On the Discovery page, users can also hide some of their own apps from their community to protect their personal privacy. To provide feedback to one another, users can send direct messages and openly discuss any privacy and security issues on the Community Feed page (Figure-<ref>(d)). This community feed has another important function: when someone in the community changes an app permission, the CO-oPS app creates an automatic post on the community feed about that change. It also posts weekly pro tips to educate community members about safe apps and permissions.
§ STUDY CONSTRUCTS
To evaluate the impact of using the CO-oPS app, we measured a set of constructs that we surveyed before, during, and at the end of the field study. We measured all constructs by presenting participants with various statements relevant to each construct. Participants were asked to rate each statement on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree).
First, we developed new constructs derived from the theoretical framework for community oversight proposed in our prior work <cit.>, consisting of transparency, awareness, trust, individual participation, community participation, and community trust. We validated these new constructs through standard psychometric tests (i.e., Cronbach's alpha <cit.> to confirm internal consistency), which is reported in Table-<ref>. Then, we utilized three pre-validated scales from prior research <cit.> to measure community belonging, self-efficacy, and community collective efficacy. All scale items are included in Appendix A. Below, we define each of the constructs, along with our hypotheses.
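For reference, the sketch below shows a minimal Cronbach's alpha computation of the kind used for these internal-consistency checks; the response matrix is made-up illustrative data, not responses from our study.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of Likert ratings.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative: four 5-point Likert items for one construct (fabricated data).
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
])
print(round(cronbach_alpha(responses), 2))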
Transparency:
As Das et al. demonstrated <cit.>, social proof - seeing others adopt a privacy and security behavior - often helps individuals adopt the same behavior. Therefore, to encourage individuals in a community to make informed decisions for their mobile privacy settings, the behaviors of others must first be transparent. Thus, we define transparency as an individual's perceived visibility of their community's mobile apps installed and the permissions granted/denied.
H1: At the end of the study, community members will perceive higher levels of transparency in their community's mobile privacy and security behaviors.
Awareness:
Endsley demonstrated <cit.> that situational awareness - the understanding of what is going on around someone - is a key component in effective decision-making. In a later study <cit.>, DiGioia and Dourish suggested that being informed about digital privacy and security norms and practices along with the actions performed by the community are necessary for an effective social influence process. We developed our awareness measure as an individual's perception about the awareness of their own and others' apps installed, permissions granted/denied, along with the changes made.
H2: At the end of the study, community members will perceive higher levels of awareness regarding their community's mobile privacy and security practices.
Trust:
In <cit.>, we identified that having the information available and being informed about mobile privacy and security practices might not be sufficient for community oversight. This is because individuals need to be able to trust the quality of the information and perceive the information as dependable to learn from and be influenced by it.
H3: Community members will have a higher level of trust in one another's mobile privacy and security decisions.
Individual Participation:
While an effective social process needs transparency, awareness, and trust in one another, individuals also need to be willing to engage in this process <cit.>. Users need to be motivated to utilize the knowledge gathered from their community in order to make decisions. They also need to be willing to provide oversight to others. Thus we define individual participation as an individual's willingness to take steps to make changes in their own mobile privacy and security behaviors (uninstalling unsafe apps or denying dangerous permissions) and also providing oversight to others' mobile privacy and security behaviors (providing feedback and guidance to others).
H4: Community members will perceive higher individual participation at the end of the study.
Community Participation:
Community oversight mechanisms can take place in different types of communities, such as, families <cit.>, coworkers <cit.>, friends, and social networks <cit.>. Yet not all types of communities may have an equal level of willingness to take part in different forms of community oversight.
For example, in <cit.>, we found that communities with closer relationships might be more willing to help one another make decisions than communities with weaker ties. Therefore, we define community participation as an individual's perception of their community to collectively work together, e.g., help one another, exchange feedback and guidance, and engage in open discussions.
H5: At the end of the study, participants will perceive a higher level of community participation.
Community Trust and Belonging:
Individuals are likely to help one another if they feel like they belong and can trust their community members. We define Community Trust as an individual's perception of trusting their community to keep their personal information (e.g., apps installed) private and care for one another's mobile privacy and security. For community belonging, we utilized a pre-validated measure <cit.> that has been used in exploring community support mechanisms outside of privacy and security. The community belonging construct measures an individuals' feelings about how much they matter to their community. While our participants already knew each other, participating together in the CO-oPS app could lead them to feel stronger bonds and care between each other. Therefore, our hypotheses are:
H6: An individual's community trust will be higher at the end of the study.
H7: Community belonging will be higher after the study.
Efficacy:
Two of the outcomes we wanted to measure are perceptions over the efficacy of individuals and groups to manage their mobile privacy and security. Thus, we used pre-validated measures for self-efficacy <cit.>, and community collective efficacy <cit.> in our study. The self-efficacy <cit.> construct measures an individual's perceived capacity to manage their own mobile privacy and security. The community collective efficacy <cit.> construct measures an individual's perceived collective capacity to manage their community's privacy and security together. Our hypotheses are:
H8: Individual's self-efficacy will be higher after the study.
H9: Community collective efficacy will also be higher at the end of the study.
§ METHODS
Study Overview: The overall goal of our study is to evaluate the CO-oPS app in building the capacity of the communities to manage their mobile privacy and security collectively. We also wanted to understand what impacts this community-based approach may have in changing participants' perceptions and behaviors toward their individual and collective mobile privacy and security management. To achieve these goals, we recruited small self-organized communities (2-6 Android phone users) who knew each other. Each community member installed the CO-oPS app and participated for four weeks. Measures were gathered before app installation, each week of the study, and at the end. Each week participants were asked to complete different in-app tasks that allowed them to explore the features of the CO-oPS app. Finally, participants were invited to participate in an optional follow-up interview. In each step of the study, we explicitly provided the definition of the term "community" as "your group members who are participating in this study." Each participant was compensated with a $40 Amazon gift card for completing the field study, with an additional $10 Amazon gift card for participating in the interview. Some participants withdrew from the study after two weeks due to technical difficulties with their smartphones and were compensated half the amount. Twenty-nine participants discontinued participation after week one, perhaps due to natural attrition, and were not compensated. Data were discarded from all who did not complete the study.
Participant Recruitment:
We recruited a total of 101 participants that were associated with 22 communities. We initially recruited the primary contacts of each community who completed a pre-screening eligibility survey that verified whether they met the inclusion criteria of the study prior to providing their informed consent. The inclusion criteria for participation included: 1) reside in the United States, 2) be 13 years or older, 3) have an Android smartphone, and 4) be willing to install and use the CO-oPS app. Here, we also specified that they “must participate in a group with two other people you know," which determined the minimum group size required to participate in this study. After completing the screening survey, the initial contacts were asked to share this eligibility survey with people they knew to invite them to participate in this study as their community members. Therefore, the initial contact of each group self-selected their community based on the above criteria (1-4). As such, all group members knew the initial contact but in some cases, were only loosely acquainted with one another. For the teen participants, we required their parents to complete this survey and provide their consent.
Our study was Institutional Review Board approved. The target characteristics of our participants were all Android smartphone users of any age range (minors, adults, and older adults). Therefore, we did widespread recruitment through social media, email, phone calls, and word-of-mouth. The recruitment process started in January 2022 and ended in August 2022. Overall, we recruited 22 communities (101 participants) where the size of the communities ranged from 2 to 6. Table-<ref> summarizes the gender, age groups, ethnicity, and education of our participants. Our participants were primarily young, between the ages of 13 to 34. Most of them had a college degree. The majority of the participants were Asian, followed by African American, Hispanic/Latino, and White/Caucasian. Table-<ref> illustrates the frequency of the group compositions. Most of the groups consisted of family members, friends, and others (e.g., neighbors, co-workers, and acquaintances).
App Tasks:
During the field study, our participants were asked to explore different parts of the CO-oPS app through a set of tasks each week. These tasks prompted them to become familiar with CO-oPS features and introduced them to the goal of collaboratively managing mobile privacy and security. Table-<ref> depicts the weekly tasks. For example, Week 1 tasks asked participants to become aware of their own mobile privacy and security decisions, whereas Week 2 tasks asked them to perform oversight of others in their community. Participants could check off completed tasks in the app to remove them from their task list, but we otherwise did not track or require completion to continue in the study.
Survey Design:
Each participant completed two Qualtrics surveys (pre-study and post-study) before and after the field study, which contained four constructs: self-efficacy, community belonging, community trust, and community-collective efficacy. The pre-study survey also collected participants’ demographic information, e.g., age, gender, ethnicity, and education. During the field study, participants also completed a shorter Qualtrics survey each week (weekly survey), containing all constructs of the community oversight model. Links to the weekly surveys were delivered through the CO-oPS app, which redirected participants to the Qualtrics web survey.
Follow-up Interview:
At the end of the field study, we invited participants to an optional 30-minute one-on-one interview session on Zoom to learn about their experience using the CO-oPS app with their community. Fifty-one participants from 18 communities participated in the follow-up interviews. We started the semi-structured interview by asking about mobile privacy and security practices before participating in the study. Next, we asked about their overall experience of using the CO-oPS app. Participants were also encouraged to express their perceived benefits and concerns about different features of the CO-oPS app. Appendix B presents some sample interview questions we asked during the follow-up interviews. The interview sessions ranged from 40-70 minutes and were audio/video recorded.
Data Collection and Analysis:
The study produced a rich dataset: 1) quantitative data from survey measures, 2) CO-oPS app usage logs, and 3) qualitative data from follow-up interviews. We first categorized the survey responses as pre-study, week-1, week-2, week-3, week-4, or post-study, depending on the timestamps of the survey completion. Then, we verified the construct validity of our measures using Cronbach's alpha <cit.> and created sum scores to represent each construct. Next, we conducted Shapiro–Wilk tests and found that the sum scores of the constructs were not normally distributed (ps<.01). Therefore, we performed the non-parametric Wilcoxon rank-sum test to identify significant differences between the pre-study and post-study measures (Table-<ref>). We also present the descriptive statistics for each pre- and post-study survey item of the newly developed constructs (Appendix D).
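As an illustration of this analysis, the sketch below runs the same pair of tests with SciPy; the arrays stand in for per-participant construct sum scores and are not our study data.

import numpy as np
from scipy import stats

def pre_post_comparison(pre, post):
    # Shapiro-Wilk normality checks on the pre- and post-study sum scores.
    _, p_pre = stats.shapiro(pre)
    _, p_post = stats.shapiro(post)
    # With normality rejected, compare the two sets of scores nonparametrically.
    statistic, p_value = stats.ranksums(pre, post)
    return {"shapiro_p": (p_pre, p_post), "statistic": statistic, "p": p_value}

# Fabricated sum scores for one construct, for illustration only.
pre_scores = np.array([14, 16, 15, 18, 13, 17, 16, 15, 14, 19])
post_scores = np.array([17, 18, 16, 20, 15, 19, 18, 17, 16, 20])
print(pre_post_comparison(pre_scores, post_scores))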
We instrumented the CO-oPS app to log participants’ usage data. We also stored the list of the apps installed and permissions granted/denied during the installation of the CO-oPS app and at the end of the field study. We analyzed the usage log to identify how and at what frequencies participants utilized different features of the CO-oPS app. We also analyzed the pre- and post-study app/permissions lists to investigate the changes made to the apps and permissions during the study. Due to some technical issues with the CO-oPS logging feature, we could not log the in-app activities of the first seven communities. Therefore, the app usage data was received from only the last fifteen communities (N = 68 participants).
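A simple way to derive such change counts from the pre- and post-study snapshots is sketched below with pandas; the column names (user, app, permission, granted) are an assumed schema for illustration, not the app's actual export format.

import pandas as pd

def summarize_changes(pre, post):
    # Compare pre- and post-study snapshots of app permissions.
    key = ["user", "app", "permission"]
    merged = pre.merge(post, on=key, how="outer", suffixes=("_pre", "_post"))

    # Permissions flipped from granted to denied during the study.
    revoked = merged[(merged["granted_pre"] == True) & (merged["granted_post"] == False)]

    # Apps present in only one snapshot indicate installs / uninstalls.
    pre_apps = set(map(tuple, pre[["user", "app"]].drop_duplicates().values))
    post_apps = set(map(tuple, post[["user", "app"]].drop_duplicates().values))

    return {
        "permissions_revoked": len(revoked),
        "apps_installed": len(post_apps - pre_apps),
        "apps_uninstalled": len(pre_apps - post_apps),
    }

# Toy snapshots (not real study data).
pre = pd.DataFrame([
    {"user": "C01P1", "app": "MapsApp", "permission": "LOCATION", "granted": True},
    {"user": "C01P1", "app": "GameApp", "permission": "CONTACTS", "granted": True},
])
post = pd.DataFrame([
    {"user": "C01P1", "app": "MapsApp", "permission": "LOCATION", "granted": False},
    {"user": "C01P1", "app": "PayApp", "permission": "CAMERA", "granted": False},
])
print(summarize_changes(pre, post))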
We qualitatively coded our interview data using inductive analysis techniques <cit.> to understand how participants perceived the CO-oPS features that tie to the constructs we were measuring. Thus, our qualitative analysis complemented the quantitative results from our surveys. We first familiarized ourselves with our data by reading through each transcript and then template-coded our data based on the community oversight concepts described in Section 4. Specifically, we coded for 1) the level of transparency on the information shared by them or others, 2) the types of information they felt helped raise their awareness about the community's mobile privacy and security practices, 3) the level of trust of one another's privacy behaviors and advice, 4) how and whether they would individually participate in such a community, 5) how participants discerned community participation, 6) the trust and belonging they felt with their communities, and 7) their individual and community-level capacity to manage mobile privacy and security. The first author worked closely with three researchers to code the data iteratively and formed a consensus among their codes. The remaining authors helped guide their analyses and interpretation of the results. Appendix C presents the codes and illustrative quotations for each analysis theme.
§ RESULTS
On average, participants spent 32 minutes in the CO-oPS app over four weeks, ranging from 19 minutes to 1 hr 31 minutes. Table-<ref> summarizes the activity types that participants performed with the CO-oPS app. Table-<ref> summarizes Cronbach's alpha, means, standard deviations, skewness, and kurtosis of each construct measured during the study. All Cronbach's alphas were greater than 0.80, which suggests good internal consistency of our measures. Next, we tested for within-group differences in the constructs based on whether the participants completed it at the start or end of the study. We will discuss each measure below, along with the corresponding findings from the qualitative data and usage logs related to each construct.
Transparency:
As shown in Table <ref>, participants reported higher (p=.010, M1=4.07, M2=4.36) levels of transparency (an individual's perception of whether CO-oPS gave them a transparent view of the apps installed and permissions granted on their community’s mobile devices) at the end of the study. Hence, this result supported our hypothesis (H1). Our qualitative results also confirmed that almost all participants felt that CO-oPS made their community's mobile privacy and security decisions visible to them. Three-quarters of the participants interviewed (76%, N=39) explicitly said they liked the CO-oPS feature that let them check their own apps and permissions. Participants often mentioned that having the ability to see all their installed apps on one screen provided them transparency into their app usage. Two-thirds of the participants (67%, N=34) also brought up the visibility of others' apps and permissions. To this end, they said reviewing others' apps and permissions provided them a sense of purpose for using CO-oPS with their communities. Interestingly, while these participants appreciated the ability to review one another's apps and permissions, they often referred to the importance of the CO-oPS app-hiding feature because it made them feel less intrusive toward others' privacy. As such, C18P1 said, “Because some of the apps can be hidden if someone likes, that gives me the feeling of a relief when I see others' [apps installed].”. However, some participants (25%, N=13) believed that this app-hiding feature defeats the main purpose of CO-oPS.
Some participants, on the other hand, perceived transparency as a two-way privacy violation, e.g., the privacy of themselves and the privacy of others. For example, more than one-third of the participants (38%, N=20) felt that their personal privacy was being violated as others, who were not close, could see their personal information (e.g., installed apps and permissions). Some participants (27%, N=14) also specifically said they might forget to hide an app after installation, which could leave their apps visible to others. On the contrary, one-fourth of the participants (25%, N=13) felt that this transparency of others' apps and permissions can be privacy-invasive to others as well. C11P2 said, “While using the app, my friends and I discussed privacy more than security because we can see the apps on our friend phones and I think that's not a good thing.”.
The results from the log analysis (Table-<ref>) also supported the above concerns. For instance, around half of the participants (49%, N=34) hid one or more of their installed apps from their communities. Participants, on average, hid six mobile apps ranging from one to 17. The most frequent types of apps that participants hid were games, video streaming apps, banking apps, and online shopping.
Awareness: Participants overall reported a higher (p<.001, M1=3.95, M2=4.37) level of awareness (individual's perception of whether CO-oPS made them more aware of their community’s mobile privacy and security decisions) after the study. Hence, this result supports our hypothesis (H2). Our qualitative results showed that using the CO-oPS app helped participants raise their overall awareness of mobile privacy and security issues, along with their awareness of one another's privacy and security practices. For example, almost all participants (94%, N=48) felt that they became more aware of mobile privacy and security issues because CO-oPS enabled them to focus on permissions. They also became more aware of which of their personal information was being accessed by their installed apps. For example, C15P1 said, “It just makes it more obvious. It's very focused on permissions. So I think having that focus, it's very beneficial. People in the community, I see are now more concerned... for their permissions specifically. I totally see how it changed our perspectives.”. Some of these participants (39%, N=20) often brought up the weekly pro tips they got on the CO-oPS app as it helped them increase their awareness regarding mobile privacy and security in general.
Most participants (86%, N=44) also said they became more aware of whether a permission is necessary for an app. They often mentioned that comparing their own app permissions with others helped them increase this knowledge. Almost all of these participants (82%, N=42) also mentioned that it helped them keep track of their own apps, as they found more installed apps on CO-oPS' Discovery page than they were aware of. They also often mentioned some granted permissions found on the CO-oPS app (e.g., microphone, camera, location, contacts, etc.) that they did not remember granting.
Around half of the participants (57%, N=29) said they became more aware of their community members' privacy and security behaviors. Here, they mostly mentioned one or two people in their community whose apps they could keep an eye on to ensure their safety. Lastly, some participants (39%, N=20) also said that they appreciated the CO-oPS feature that informed everyone about the permission changes made by any member as this helped them decide whether to imitate that change. Around one-fifth of the participants (18%, N=9) felt that CO-oPS app did not make them aware of the app changes made in the community - the apps community members installed or uninstalled on their phones. This was not a feature we implemented, and these participants felt like it had been overlooked and desired that awareness.
The findings from our log analysis (Table-<ref>) are reflective of the quantitative results. For example, most of our participants (87%, N=61) checked their own app permissions during the study. Participants, on average, reviewed the permissions of 23 apps and primarily explored the permissions of apps that are about gaming, online shopping, social media, and financial payments. Alongside reviewing their own apps and permissions, more than two-thirds of the participants (65%, N=46) reviewed others’ app permissions. On average, they explored 18 app permissions of their community members. The most common types of apps being reviewed were social media, banking, and gaming apps.
Trust:
Similar to the above two constructs, the post-study responses saw a higher (p<.001, M1=3.61, M2=4.28) level of trust (individual's perception of whether CO-oPS helped them foster trust in one another’s mobile privacy and security decisions) among the community members. This result confirmed our hypothesis (H3). Almost half of the participants (51%, N=26) said they found the advice provided by their community was dependable. They were overall appreciative of the feedback and guidance they received from more tech-savvy community members, as it helped them learn more about risky apps and unnecessary or dangerous permissions. However, the trust did not always extend to all community members. For example, C18P1 said: "In this app, you're trusting each other's decisions. But for me, in this community, only [Name] is more tech-savvy. And most of the people are not. And these decisions are not always well-informed, right? So, I follow only [Name] to check what he has.".
Conversely, participants felt that trusting others' privacy and security practices might be challenging in some cases. Around half of the participants (49%, N=25) said some of their community members were less knowledgeable about mobile privacy and security issues. Therefore, they could not trust those people's mobile privacy and security decisions to learn from. As C11P3 said, “I don't think they were much of aware. They do not care of all this, you know, privacy and security stuff,... so I am not sure I followed them, their permissions and stuff.”. Interestingly, they also often mentioned that those with less knowledge were less tech-savvy (37%, N=19) in general.
Individual Participation:
Participants reported a higher (p<.001, M1=3.78, M2=4.23) level of individual participation (perception of whether the CO-oPS app helped individuals participate in their own and others' mobile privacy and security decisions) after using the CO-oPS app for four weeks. This supported our hypothesis (H4). Our qualitative results also revealed that participants overall took the initiative to change their apps and permissions and also provided their oversight to others. Notably, more than two-thirds of the participants (67%, N=34) said that they made changes to their own apps and permissions. Participants often said that they made these changes after reviewing their own permissions and identifying the unnecessary or concerning ones by themselves. Some other participants said that comparing their own app permissions with their community's inspired them to change their app permissions. Some of these changes were made because of feedback received from other community members. C02P1 said, “I did some changes. I denied some of my permissions. [Name] asked me to remove the microphone from one of the apps I use for workouts. I have removed it now. ... also, you can always just check and then you just have to learn what permissions are suspicious and what are necessary.”.
Next, more than one-third of the participants (41%, N=21) explicitly said that they provided feedback to their community members to warn about the apps that they thought might be risky or the permissions granted that might be a cause of privacy concerns. To provide feedback, participants did not just use the CO-oPS messaging feature, they also mentioned using other media, e.g., text messages, social media private messages, phone calls, or talking in person.
Log results (Table-<ref>) demonstrate that individuals did provide oversight during the study. We found that 74% (N=51) of participants sent messages to someone in their communities; twelve messages were warnings about risky apps (games, social media). Thirty-five messages contained warnings regarding specific app permissions they found on their community members’ phones. They mostly provided feedback about location, camera, microphone, and contacts permissions. For instance, C09P1 messaged C09P3: “You’re granting Douyin a ton of permissions. maybe we should keep the Chinese spyware to a minimum.”
However, some participants expressed a number of factors that reduced their motivation to participate. More than one-third of the participants (41%, N=21) believed that they were less tech-savvy than others in their community and therefore they doubted their feedback would be useful to others. Interestingly, some participants (39%, N=20) felt that the people who participated with them were not close and therefore they did not care about those people's mobile privacy and security. A few participants (29%, N=15) expressed that they had very few mobile apps installed on their devices, and so, they did not need to be concerned about mobile privacy and security. Ironically, some of these participants also believed that they did not have anything to be concerned about because the personal information that is stored in their mobile phones is not very sensitive in nature. A few also felt that their information was already leaked by some online entities and so it was too late to start caring about mobile privacy and security. As such, C14P2 said: “I don't see the point now because you can't just control what they [apps] already stole from you. I use very few apps, and all my data is already out there."
Community Participation:
The community participation measure (individual's perception of whether the CO-oPS app enabled the community to help one another make their mobile privacy and security decisions) increased (p=.003, M1=3.86, M2=4.18) over the duration of the field study. This confirmed our hypothesis (H5). More than three-fourths of the participants (78%, N=40) said the CO-oPS app allowed them to learn from their community regarding mobile privacy and security management and exchange their knowledge regarding app safety and privacy. Most of these participants (71%, N=36) also mentioned that using CO-oPS helped them initiate more open discussions regarding mobile privacy and security in their community than ever before. They said these discussions most often took place offline when they saw one another in different social gatherings. Around half of the participants (53%, N=27) specifically discussed receiving feedback and advice from their community. C17P1 said, “I mean, offline, or virtually, we kind of worked together, we talked, we get each other’s knowledge. But that also happened with the co-ops app, that there were so many options to get in touch with each other by that messaging or, notifying them, or community discussion... I will say it kind of, we helped one another learn as a team.”.
However, some participants said the CO-oPS app might not help increase community participation when the members are either highly tech-savvy or not particularly tech-savvy. One-third of the participants (31%, N=16) envisioned that when the community members are less tech-savvy, they might not be able to provide oversight to each other. On the other hand, 27% of the participants (N=14) said that their entire community was very tech-savvy and well aware of mobile privacy and security issues, and therefore they did not find it necessary to engage in discussion or exchange feedback with one another. C11P5 said: “My community is from a computer science background. I think we are already aware of these things. So, we don't need others' advice.”.
Community Trust and Belonging: While community trust increased over the course of the study (p=.048, M1=4.03, M2=4.22), the difference between community belonging was not statistically significant (p=.209, M1=4.09, M2=4.20). Thus, hypothesis (H6) is supported, but (H7) is not supported. In our qualitative analysis, we found that all of our participants (100%, N=51) said they personally knew each member of their communities. Most of our participants (86%, N=44) mentioned having close relationships, e.g., family members, friends, co-workers, and neighbors, with some members of their communities. Thus, using CO-oPS did not appear to bring groups closer together.
However, perceptions of trust and community relationships were still important in how individuals interacted with each other in CO-oPS. Around half of the participants (47%, N=24) said that they had trust in their community that their apps and permission information would not be misused. One-fourth of our participants (24%, N=12) said they had peace of mind because they would rely on their community members who would actively monitor their mobile privacy decisions and warn them if anything is found concerning. Here, we often noticed that participants referred to some specific community members, not the entire community, who they would rely on. C02P1 said, "With [Name] in my group, at least I know that if he saw something he didn’t think wasn’t proper, he will definitely let me and my husband know...We have that kind of relationship, so we know we can trust him.”.
However, a few participants felt that sharing the apps installed might cause some security issues due to the lack of trust in certain community members. For example, a few participants (18%, N=9) envisioned security concerns in sharing their financial apps, such as banking or mobile payments, with their community. They often brought up hypothetical scenarios of a family member (e.g., children) knowing what apps they have installed, who would somehow get access to their phone, log in to their financial apps, and transfer money. A couple of participants also imagined situations when community members might judge or bully them because of their choice of gaming apps.
Self-efficacy:
Our participants reported higher levels of self-efficacy (individual's capacity to manage their mobile privacy and security) at the end of the study (p<.001, M1=3.95, M2=4.32). This confirmed our hypothesis (H8). Most participants (80%, N=41) said they gained confidence in managing their mobile privacy and security, particularly by reviewing their installed apps and granted permissions and identifying whether there is anything concerning. C10P1 said, "So, I can now think through it, like what is the purpose of this permission? Like if the permission conflicts with the purpose of the application, I can just turn it off. You see, this is new. I now can differentiate what's necessary or what's not." Interestingly, more than half of the participants (57%, N=29) said they now have become more knowledgeable about changing permissions, mostly because they could easily navigate to the app permission settings from the CO-oPS apps. This perception was not universal, though. Around one-third of the participants (31%, N=16) also said they already had the ability to manage their own apps and permissions prior to participating in this study, and they never reached out to others for help.
Community Collective Efficacy:
Participants reported higher community collective efficacy (individuals' belief that their community can co-manage mobile privacy and security) at the end of the field study (p<.01, M1=3.80, M2=4.12). This confirmed our hypothesis (H9). Reflecting this, most participants (88%, N=45) felt they could easily reach out to their community and work together as a team for their mobile privacy and security decisions. Most of these participants (67%, N=34) mentioned that they have at least one person in the community they could reach out to ask questions about whether an app was safe to use or a permission should be allowed. C03P5 said, “When I'm giving permissions, I now can tell that could be the things that are needed for a discussion. I do go to [Name] to ask what he thinks. what he thinks the permission is needed or not needed for the app. I do my permissions like this now.”
Behavioral Impact:
Our log analysis results provide further insights into participants' overall behavioral changes regarding mobile privacy and security. We found that 87% of the participants (N=61) changed at least one of their app permissions during the study. Participants, on average, changed 29 permissions, all of which were changed to “deny.” They mostly turned off permissions accessing their location (approximate and precise), camera, storage, and contacts. For instance, C15P4 changed the Location (Approximate) permissions of the Chase, Snapchat, and Gyve apps installed on his phone. However, participants did not show a similar decrease in the number of apps they had on their phones. Around 78% (N=53) of participants installed new apps, whereas only a few participants (16%, N=11) uninstalled any apps. Participants, on average, installed two new apps, and the most common types of apps installed were mobile payment, banking, online shopping, social media, and games. On the other hand, the participants who uninstalled apps mostly discarded gaming apps along with a few spiritual, fitness, and dictionary apps from their phones. Perhaps learning what apps others in their communities were using provided participants with ideas for additional apps they would be interested in.
§ DISCUSSION
While our prior work conceptually proposed community oversight as a mechanism for supporting privacy and security management <cit.>, this work is the first field study to empirically examine the real-world feasibility of implementing community oversight as a mechanism for co-managing mobile privacy and security among trusted groups. Our results largely confirm what was envisioned in that prior work: that community oversight does have the potential to help people help each other when it comes to decisions about mobile apps and app permissions <cit.>. Users' perceptions of their own and their community's capabilities to manage their mobile app privacy and security increased as a result of the study. The majority of participants modified their permissions, reducing what they were sharing with apps, and stated that their awareness of permissions and mobile apps also increased. Below we further discuss our overarching findings and their implications for the design of community oversight mechanisms.
Building Community Collective Efficacy
The goal of the CO-oPS app, as with many collaborative systems, is to build and support the collective capacity of groups to work together to achieve a common goal, in this case, to manage apps and app permissions. Thus, building community collective efficacy for mobile privacy and security is the primary end goal of CO-oPS. To that end, we believe our study was successful. The interview comments suggest that the community oversight mechanism helped our participants increase their ability to support each other in their mobile privacy and security decisions. Participants mentioned their change to a more collaborative perspective: the app facilitated knowledge sharing amongst their community and an ability to rely on others to help in decision-making.
Our results also provide an empirical validation of the components of the community oversight model <cit.>. Again, both survey and interview results demonstrated the roles of transparency, awareness, trust, and participation in providing community oversight. Future work could examine what factors are most related to community collective efficacy and thus are most important to provide in a community oversight mechanism.
Role of Tech Expertise
One of the key themes was that the level of tech expertise among community members plays a key role in bolstering or hindering community oversight. For instance, our participants expressed concerns about the potential lack of participation in communities when most members are sufficiently tech-savvy or knowledgeable about mobile privacy and security. Others expressed concerns about there being a lack of knowledge in their communities and less trust in the decisions of those with less expertise. Kropczynski et al. <cit.> also noted the importance of those with tech expertise in older adult communities for spreading privacy and security knowledge, even among those with low self-efficacy. This suggests that community oversight mechanisms may be most beneficial and appropriate when there are asymmetrical relationships among the community members in such a way that some community members need support while other members could provide that support. A key challenge is then how to incentivize those with sufficient expertise to participate in such communities, particularly to help community members they are not as close to or not already providing tech care to <cit.>.
However, when this asymmetry in expertise combines with a power imbalance, which is often seen in families, the collaborative joint oversight might cause tension. Akter et al. <cit.> demonstrated that although teens had more expertise than their parents, they did not feel empowered to oversee their parents because of the existing power hierarchies. In families, parents often use parental control apps, a more restrictive approach that fosters monitoring and surveillance to ensure teens' mobile online safety, privacy, and security. Teens often perceive this unidirectional oversight mechanism as overly restrictive and privacy-invasive <cit.>. Therefore, adolescent online safety researchers emphasize adopting a softer version than parental control or community oversight - a middle ground that allows parental oversight with bidirectional communication and teens' self-regulation <cit.>. So, the community oversight mechanism might need to incorporate additional features to help such unique types of communities with asymmetries in expertise and power.
Tensions around Transparency and Privacy
Another common concern was privacy issues arising from transparently sharing apps and permissions with others. While many appreciated such transparency, participants regularly chose to hide certain apps from other people. Some participants found this transparency too invasive and anticipated potential problems resulting from others knowing about what apps they use. Other concerns also arose from being able to determine if the advice given to another was taken or not, based on whether someone's permissions remained the same or changed. These concerns will likely be elevated as community size grows, where communities contain more members who are not close to one another. A recent study that explored collaborative mobile privacy management among families also found similar results where participants expressed concerns in including extended families with distant relationships <cit.>. To resolve these tensions, as with many collaborative systems, users may want more granular controls on who can see what apps and permissions rather than sharing equally throughout the community.
Incentives to Participate
Prior work identified that users might not be motivated to provide oversight to those not close to them <cit.> or those outside of existing care relationships such as between parents and teens <cit.>. Indeed, some of our participants expressed similar sentiments and were not concerned about the decisions made by those not close to them. Despite this, the majority of participants did perform oversight, and many interviewees described discussions and behaviors that were sparked as a result of that oversight. Yet, given some incentive, in this case participating in a user study, individuals did perform the oversight, benefiting other community members. Thus a key question remains as to how to incentivize such oversight for different community members and how those incentives may need to change over time.
Implications for Design
Our results demonstrate how features that provide transparency and awareness and support trust between community members are essential components of community oversight. Mechanisms must also enable and encourage individual and community participation in the collaborative efforts of privacy and security management. Our results provide further insights into the features and mechanisms needed in a tool for communities to participate in collaborative oversight of their mobile privacy and security.
Making Privacy Features Visible: While the CO-oPS app had a feature that allowed users to hide any of their installed apps from others, it often failed to provide users with a sufficient sense of privacy. This may be because they were not well aware of this feature or were unsure how well it functioned. Participants also reported concern over forgetting to hide apps as they install new ones. Thus, mechanisms to keep users aware of this app-hiding feature will be necessary. Das et al. <cit.> and DiGioia and Dourish <cit.> also emphasized the importance of visibility so that users can be aware of the availability of a security feature and adopt it. To help users be aware of this feature, users could be prompted regularly, or upon installing new apps, to ask if they would like to hide them. If community members hide too many apps, however, oversight will be more limited. Thus, designers should also explore additional privacy features that can protect an individual's privacy while still allowing useful sharing with the community.
Raising Mobile Privacy Knowledge:
One of our findings suggested that participants would not trust the mobile privacy and security behaviors of people who were less knowledgeable. This suggests that collaborative decision-making would not effectively function when there is little trust within the group. Increasing trust within communities may be very challenging, and how to do so remains an important open question.
In <cit.>, participants also envisioned such situations and recommended including external expert users whom the community members can turn to for guidance when they do not have the necessary expertise. Several other networked privacy researchers also demonstrated the need for knowledgeable expert stewards <cit.>. Therefore, we recommend app designers explore ways to include mobile privacy and security experts in communities. Another possibility is, rather than bringing experts into the community, to raise the expertise of certain motivated community members. This could include nudges towards additional information or resources, possibly personalized to those most amenable to such additional knowledge.
Increasing Community Participation:
We found that our participants expressed several concerns about community motivations to provide oversight to one another. Individuals and communities, as a whole, need incentives to utilize a community oversight mechanism and continue to support each other <cit.> in their knowledge-sharing and decision-making. Such needs for incentivizing individual participation in communities to support collective participation were also suggested by Watson et al. in <cit.> and Moju-Igbene et al. in <cit.>. Therefore, community oversight mechanisms need to include features that encourage such engagement and make the engagement of others apparent. For example, community members can be notified of any new apps installed or permissions granted on anyone's phone. Moreover, nudges could remind community members to review random members' apps and permissions. Additionally, lightweight feedback features might also help users to engage more. For instance, instead of messaging, users might prefer just to flag unsafe apps/permissions to notify others quickly.
Limitations and Future Work
We would like to highlight the limitations of our study that should be addressed in future work. First, our sample was skewed toward Asian adults, most of whom completed college and graduate-level education. Therefore, our results may not be generalizable to other communities of different ethnicity, education, and age groups. Future work should explore communities with broader demographics, ethnicity, and socio-economic status <cit.>. Another limitation is that we asked our initial participants to form their communities with people they knew, which sometimes led to groups where not everyone had strong bonds with the others. This may have led them to evaluate our app differently than if we had studied communities of families or close friends only. However, this also provided important insights into the importance of community trust in fostering oversight. Future work should examine how factors of group structure and relationships, including group size and varying levels of expertise, impact the motivation of participation and oversight activities of community members.
Although our qualitative results suggested that the CO-oPS app supported all necessary components of community oversight, this does not imply that our participants perceived usefulness, ease of use, and behavioral intent to adopt <cit.>. This is because they used the app, as we requested, to perform various tasks as part of the study. Therefore, in future studies, we would want to evaluate its usability to address users' experience issues and measure technology acceptance <cit.> to identify how to design for widescale adoption of an app to help people collaborate with their loved ones to manage mobile privacy and security. Lastly, the study design did not include a control condition, which means that any effects from the community oversight mechanism cannot be differentiated from changes that may have occurred through using the app, such as increased attention on app permissions and privacy and security. Therefore, the results cannot conclusively demonstrate a causal relationship between the usage of CO-oPS with communities and the dependent variables we analyzed. However, our qualitative insights provide evidence that some of the positive effects could be attributed to using the CO-oPS app. Moreover, there might be a survivorship bias effect in our results, as those who dropped out did not perceive any benefits to the app. Future research should investigate whether the same findings would hold for control groups
and should take steps to prevent potential survivorship bias.
§ CONCLUSION
Managing mobile privacy and security as an individual is hard. We believe community oversight is one potential social mechanism that can allow community members to exchange help regarding their mobile privacy and security decisions. Our CO-oPS app was developed to evaluate this idea of community oversight in building community collective efficacy for groups managing their mobile privacy and security together. Our results provide empirical evidence that community oversight can potentially have an impact on individuals and communities alike. Given the continued proliferation and adoption of smartphones and mobile apps, we believe apps that facilitate community oversight are an essential tool for communities to help one another keep their personal information safe and secure. We will continue to build upon this work to examine how we can help people successfully co-manage mobile privacy and security within their communities.
§ ACKNOWLEDGMENTS
We acknowledge the contributions of Nikko Osaka, Anoosh Hari, and Ricardo Mangandi in the CO-oPS app development. We would also like to thank the individuals who participated in our study. This research was supported by the U.S. National Science Foundation under grants CNS-1814068, CNS-1814110, and CNS-2326901. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.
§ SURVEY SCALES
Community Oversight Model Constructs: (Derived from Chouhan et al.'s conceptual model of Community Oversight <cit.>)
Transparency
1. The app gave me a transparent view into the apps installed and permissions granted on my own mobile device.
2. The app gave me a transparent view of the apps installed and permissions granted on the mobile devices of others.
3. The app gave us all a transparent view of the apps installed and permissions granted on the mobile devices of our community.
Awareness
1. The app made me more aware of my own mobile privacy and security decisions.
2. The app made me more aware of the mobile privacy and security decisions of others.
3. The app increased overall awareness of the mobile privacy and security decisions of our community as a whole.
Trust
1. The app helped me foster trust in the mobile privacy and security decisions of others in my community.
2. The app helped others in my community foster trust in my mobile privacy and security decisions.
3. The app helped foster trust in the mobile privacy and security decisions of our community as a whole.
Individual Participation
1. The app helped me make privacy and security decisions for myself.
2. The app helped me be involved in others' privacy and security decisions.
3. The app helped individuals in the community participate in privacy and security decisions of our community.
Community Participation
1. The app enabled me to participate in a community that helps one another regarding our mobile privacy and security decisions.
2. The app enabled others to participate in a community that helps one another regarding our mobile privacy and security decisions.
3. The app enabled the community to help one another regarding our mobile privacy and security decisions.
Community Trust (Derived from Chouhan et al.'s conceptual model of Community Oversight <cit.>)
1. I trust others in my community to protect my private information.
2. I trust others in my community to give me advice about mobile privacy and security.
3. Others in my community trust me to protect their private information.
4. Others in my community trust me to give them advice about mobile privacy and security.
Community Belonging (Pre-validated by Carroll et al. <cit.> and Sarason et al. <cit.>)
1. I can get what I need in this community.
2. This community helps me fulfill my needs.
3. I feel like a member of this community.
4. I belong in this community.
5. I have a say about what goes on in this community.
6. People in this community are good at influencing one another.
7. I feel connected to this community.
8. I have a good bond with others in this community.
Self-Efficacy (Pre-validated by Kropzynski et al. <cit.> based on a modified version from Bandura <cit.>)
1. I know that if I worked hard to learn about mobile privacy and security, I could make good decisions.
2. Mobile privacy and security decision-making is not too complicated for me to understand.
3. I think I am the kind of person who would learn to use best practices for good mobile privacy and security decision-making.
4. I think I am capable of learning to help others make good mobile privacy and security decisions.
5. Given a little time and training, I know I could learn about best practices for good mobile privacy and security decision-making for myself and my community.
Community-Collective Efficacy (Pre-validated by Kropzynski et al. <cit.> based on a modified version from Carroll et al. <cit.>)
1. Our community can cooperate to improve the quality of our decisions about mobile privacy and security.
2. Despite other obligations, we can find time to discuss our decisions about mobile privacy and security.
3. As a community, we can handle the mistakes and setbacks resulting from our decisions about mobile privacy and security without getting discouraged.
4. I am confident that we can be united in the decisions we make about mobile privacy and security that we present to outsiders.
5. As a community, we provide care and help for one another regarding our mobile privacy and security decisions.
6. Our community can leverage outside resources and services for our members to ensure the quality of mobile privacy and security decisions.
7. Our community can provide information for people with different interests and needs when it comes to mobile privacy and security decision-making.
§ SAMPLE QUESTIONS OF FOLLOWUP INTERVIEW
* Prior to participating in this study, how did you decide which apps are safe or unsafe to install on your mobile devices?
* How did you decide whether to accept or deny a permission request for an app?
* Did you ever review the permission lists of the apps installed on your phone? Why or why not? How?
* How frequently did people in your community discuss mobile privacy and security issues with one another?
* During the study, how frequently did your community members discuss mobile privacy and security decisions with one another?
* During the study, how did you communicate with others who were part of your community?
* During the study, how did you manage your mobile privacy and security decisions? Did you see any changes compared to prior to the study? Why or why not?
* Can you explain how and why the app did or did not help provide transparency into the mobile privacy and security decisions of other people in your community?
* How and why did the app help, or not help, raise awareness in your community about mobile privacy and security?
* How and why did the app enable, or not enable, you and individuals in your community to provide feedback and guidance about others’ mobile privacy and security?
* How and why did the app help, or not help, you work together as a community on mobile privacy and security?
* Were there any problems or concerns you or others in your community encountered when using the app?
* If given the option, would you want to continue using the CO-oPS app after this study? Why or why not?
* Who do you think would benefit the least from using the app, and why? Who would benefit the most, and why?
* Is there anything else that you would have liked the app to do? Any changes you would have liked on how the app currently works?
§ CODEBOOK
Codes and Illustrative Quotations
Transparency
Visibility to own S&P (security and privacy) (76%, N=39) “So actually using co-ops, like, for me, I got to see the list, like, what the apps, what the actual permission all the apps are using and like, what access they have. Like the list of all at the same place. For me, it was like, good to have this.” -C11P3
Visibility to others S&P (67%, N=34)
“I think having this app actually made me more, see these things of others, because it made it easier now to check, not only yours, but also other people's security settings.” -C18P1
Violation of own privacy when other's view (38%, N=20) “As some of the community are not someone who not much close, I wasnt that much confident when it came to share my apps and show my things, you know.” -C08P2
Violation of own privacy when forget to hide (27%, N=14)
“I didn't want to show a few apps, to my community members, but, as CO-oPS crashed the first time and I had to reinstall it. Then, I forgot to hide those apps. And so I think that is a privacy issue, which most people won't like it.” -C11P1
Violation of others’ privacy (25%, N=13) “While using the app, my friends and I discussed privacy more than security because we can see the apps on our friend phones and I think that's very not a good thing. I did not feel good.”-C11P2
Defeats the purpose (25%, N=13)
“But sometimes, so while people are using some apps and keeping it private to them, they dont share with anyone but yeah, then I think this app wont help much for anyone” -C06P2
Awareness
Overall S&P awareness (94%, N=48) “Sometimes we allow some permissions without understanding what's been packed. So after exploring that CO-oPS, I usually get to think twice about my apps, which really cool, I am more concerned about whether to allow or not allow any permissions to secure your phones. I would say it's very helpful to change my mind. And it helped me to be more careful about my mobile security.” -C15P1
Compare own S&P with others (86%, N=44) “I think this is a great feature. Because with this, you are able to see and compare like, if what you are using and what others are using, it is like comparable Or you can just know what you are doing others are not. I guess you can help yourself.” -C17P1
Keep track of own S&P (82%, N=42)
“Earlier I couldn't know about what is there and what is not because I thought I had few apps the apps I did install. Then here [on CO-oPS app] I see I have more apps that I did not see it before... I think it helps, it feels like gives you to see what do you have on the phone, and the stuff that are accessed by the apps.” -C08P1
Aware of others' S&P (57%, N=29) “So like seeing the option of like, every single app, and then seeing like what's granted and denied, that definitely helped a lot to see what each member, what apps did they have, and also what like permissions they grant. So it helps me realize what they're granting or not granting, so that I need to I help them or not.” -C02P4
Aware of community's S&P Changes (39%, N=20) “One of the benefits of it is, on the community section, I can go through my friend's app changes, which permissions of which apps you changed. And I can go ahead and do that and change it and have fun. Okay.” -C11P1
Increased awareness from pro tips (39%, N=20) “So on this pro tip section in where you can know the basic information, like basic knowledge that you can just learn from and become careful about the app settings... I think this section talked some senses in us.” -C06P2
Doesn't inform community about app changes (18%, N=9) “So, you see in the community, we get to know about the changes for the permissions, but we do not get any community posts for the app installing or installing. I think this is also important. When someone gets rid of an app, everuyone should know, right.” -C15P1
Trust
Trust others’ advice (51%, N=26)
“[Name] let me reconsider what I am doing, because when he tells me warns me, you are more likely to take it seriously. It'll come to light in your mind for sure. Yeah, I did change some of the things, yeah I think he was right. I see the stuff he warns me about are all good.” -C11P2
Less aware community members (49%, N=25) “I dont think they were much of aware. They do not care of all this, you know, privacy and security stuff, so I am not sure they used it much.”-C11P3
Less tech-savvy community members (37%, N=19) "For example, my mom... whenever she goes to the Facebook or YouTube, she asks questions. So, she cant be able to understand these privacy and security, its just so beyond her capacity. So I doubt she would be someone to rely on."-C16P1
Individual Participation
Made own S&P changes (67%, N=34) “I got rid of some of my permissions. I haven't really thought of that before. Right now it has come to my knowledge that yes, it is a big problem and even scary. But I have that control, if you know what permissions are problematic, and what are necessary, you can always try clean up. Now cleaning up my phone has become a bit of a priority to me.” -C02P1
Provided feedback (41%, N=21) “I reviewed X’s mobile privacy, I saw he was giving a permission, don't remember which one, then I told him that, allowing that permission is not good. And then I gave some good reasons why this is important to change this or not.” -C17P1
Less tech-savvy (41%, N=21) "I dont think anyone needed my advice. I know they are careful, much careful than I am because they all are very savvy.” -C18P2
Others are not close (39%, N=20) “I dont think I did much… I would be interested to help someone when I care them, maybe my parents mostly.” -C14P2
Fewer app users (29%, N=15) “I did not use it much. I'm a very minimalist in my apps. So at this point, the apps that I have, I know what I have. My advice to others is use minimal apps and make your life easy..”-C03P5
Community Participation
Learned from community (78%, N=40)
“We could review each other's permissions and we could Share, so we could be careful about our privacy and things. And having your community’s apps and permission in CO-oPS, you can just learn by yourself like maybe you don't really have to grant this permission.” -C18P2
Increased discussion in community (71%, N=36) “We had frequent discussions when we had discussions about what kind of security and permissions we have or on each other's phone, or in general the security issues out there. And I think the other day when we met, we were giving away some information. I think we also mentioned some of our apps are taking unnecessary data. For those apps purpose, the permissions were not necessary. So we asked to turn it off. And I don't know if they did change that, but I did. But yeah, that kind of interaction truly happened among us. And we had we shared opinion and try to suggest each other that this is not right.” -C06P4
Received feedback from community (53%, N=27)
“One of the action items we had a task like look through permissions and tell them like, hey, like, maybe you shouldn't do it. I think I received a message from X like, Hey, you have Bose like music app has access to your GPS location for some reason. Oh, wow. Which I did not notice it before. This was like, I really thanked him.” -C09P2
Less tech-savvy community (31%, N=16) "I think when your community is not tech savvy, they wont feel the importance of this security and privacy. I can see to be an effective community at least some people must be tech savvy so that they can educate everyone else." -C16P1
Tech savvy community (27%, N=14)
“We didnt find it useful, not really, because my community is from computer science background. I think we are already aware of these things. So, we dont need others advice.” -C11P5
Community Trust and Belonging
Good relationship with community (86%, N=44)
“We live in a same community, so we have a very good relation with the other people like X and all the other four members because we almost live in very close to and very similar minded community. So and I have personally good relationship with X that also drives me to participate in this research. So yeah, we try to go outing and explore things together.” -C15P1
Trusted others to keep S&P Info Private (47%, N=24) “I guess, like the thought that they are my close circles. Like I know sharing my apps with them is safe.” -C15P1
Depend on the community for S&P (24%, N=12) “With X in my group at least I know that if he saw something he didn't thought wasn't proper, he will definitely let us know, let me and my husband know.. We have that kind of relation, he, Yeah, he would let us know and he would tell us this just to delete that, we have that kind of relationship so, we know we can trust him, We know that,” -C02P1
Had security concern for sharing S&P (18%, N=9) “I have my Chase app, if someone on the family, like my sons, know I have this app and can somehow get my phone,... if the app is logged in already, they can just transfer the money immediately.”-C02P3
Self Efficacy
Gained confidence in S&P (80%, N=41)
“Okay, so, I will say that what is the purpose of this app? Like if it is like Facebook or WhatsApp, then it will use my contacts, my contact information can use or my photos they can use. But why they should go to my phone call manage permission or there will track my other applications permission. That doesn't make sense. So it conflicts with the purpose of this application. See, this is new. I can now differentiate whats necessary or what not." -C10P1
Now know how to change permissions (57%, N=29)
“So I actually now can use the settings to go directly change the permissions. Its much easier now. It has become like I randomly go check some apps and do changes instantly if I feel like.” -C12P2
Already confident in S&P (31%, N=16)
“I would say that'd be me. I'm pretty knowledgeable regarding, you know, the whole privacy and phones, I try to be secure about my own apps. Yeah, I think I am very careful with permissions and such. I know how to change things.” -C22P1
Community Collective Efficacy
Felt teamwork for S&P (88%, N=45)
“I mean, offline, or virtually, we kind of worked together, we talked, we get each other’s knowledge. We could easily just start a discussion about any apps and permissions stuff... I will say it kind of, we work together in this.” -C17P1
Reached out to community (67%, N=34)
“I think one thing is that I’m a little more confident of it now. So, when I’m giving permissions, I now can tell that could be the things that needed for a discussion. I do go to [Name] to ask what he thinks would do. what he thinks if the permission is needed or not needed for the app. I do my permissions like this now.” -C03P5
§ DESCRIPTIVE STATISTICS OF COMMUNITY OVERSIGHT CONSTRUCT ITEMS
|
http://arxiv.org/abs/2306.08788v1
|
20230615000023
|
Improved Measurements of the IXPE Crab Polarization
|
[
"Josephine Wong",
"Roger W. Romani",
"Jack T. Dinsmore"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
Josephine Wong (ORCID 0000-0001-6395-2066), Roger W. Romani (ORCID 0000-0001-6711-3286), and Jack T. Dinsmore (ORCID 0000-0002-6401-778X)
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA
Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford CA 94305
[email protected]
X-ray polarization from the Imaging X-ray Polarimetry Explorer (IXPE) provides an important new probe of the geometry of the pulsar emission zone and of particle acceleration in the surrounding pulsar wind nebula (PWN). However, with IXPE's modest ∼ 20-30^'' spatial resolution, separation of the pulsar signal from the nebula is a challenge. Conventional analysis defines an “off” phase window as pure nebular emission and subtracts its polarization to isolate the phase-varying pulsar (“on-off fitting"). We present a more sensitive scheme that uses external measurements of the nebula structure and pulsar light curve to isolate their contributions to the phase- and spatially-varying polarization via least-squares regression (“simultaneous fitting"). Tests with simulation data show ∼30% improvement in pulse phase polarization uncertainties, decreased background systematics, and substantially improved nebular polarization maps. Applying “simultaneous fitting" to early IXPE Crab data extracts additional phase bins with significant polarization. These bins show interesting departures from the well-known optical polarization sweeps, although additional exposure will be needed for precise model confrontation.
§ INTRODUCTION
The Imaging X-ray Polarimetry Explorer (IXPE) <cit.> is NASA's first satellite dedicated to measuring X-ray polarization. Since it was launched in Dec 2021, it has observed several pulsar wind nebulae (PWNe), including Crab, Vela, MSH 15-5(2), and B0540. In these sources, a central pulsar is embedded in a bright halo of relativistic charged particles accelerated by the pulsar's spin-down. The pulsed emission is generally thought to originate from the magnetically-controlled flow within and just beyond the light cylinder. Far from the light cylinder, particles and fields form a randomized wind that powers the nebula (the PWN). The nature of particle acceleration in both regions is still not fully understood although many PSR/PWN models have been proposed. For PWN reviews, see <cit.> and <cit.>. IXPE measurements of synchrotron X-ray polarization allow us to probe the underlying magnetic structure of these zones and test these models.
Due to IXPE's modest HPD (Half Power Diameter)∼ 20-30^'' spatial resolution, isolating the pulsar signal from the surrounding nebular emission is a challenge. The conventional method is to assume an “off" phase of pure nebular emission and subtract it to isolate the varying pulsar contribution <cit.>. In this paper, we introduce a more sensitive method using Chandra measurements of the nebula map and pulsar light curve, with high spatial and time resolution, to weight their respective contributions to the phase- and spatially-varying polarization, extracting the polarization properties via least-squares regression. In Section 2, we describe this method and demonstrate, using simulated Crab observations, that it produces an improved nebula polarization map and more accurate pulsar polarization curve. In Section 3, we apply this technique to the IXPE Feb/Mar 2022 observation of the Crab and comment on new features visible in the data. In Section 4, we discuss notable differences between our X-ray measurements and previous optical polarization measurements and further extensions of our work. We conclude with brief remarks in Section 5.
§ SIMULATION ANALYSIS
We used IXPE's simulation and analysis software ( and , using V11 response functions) <cit.> to generate a mock 100ks observation of the Crab PWN. In the simulation, the pulsar light curve and phase varying spectrum are derived from Chandra (CXO) HRC/LETG observations <cit.> and the nebula image comes from a contemporaneous CXO ACIS image (OBSID 23539, PI Slane) obtained to support the IXPE study.
ACIS CCD limitations produce two artifacts in the image. First, photons collected during readout produce trails along the charge-transfer direction: a bright narrow streak from the pulsar and a diffuse rectangular patch from the nebula. We excised the streak, replacing it with a random sample of events, half each from two streak-sized regions above and below. The streak is close to the nebula symmetry axis, so this gives a fairly continuous flux and spectrum in the corrected image. The diffuse flux from the nebula was corrected by defining a nebula boundary, marking the zones extended to either side along the readout axis and replacing events in these zones with events sampled from a readout-free background region. Second, CCD pile-up eliminates counts in the pulsar core and leaves an annulus of pulsar counts in the PSF wings. We excised a circular region of radius ∼3.2^'' and replaced it with photons drawn from a ∼18^''-wide surrounding hemisphere (excluding flux at the base of the jet).
The IXPE detector reads out pixels directly without charge transfer (and with small ∼ 1.1 ms dead time/event). Thus our corrected image better matches, in morphology and spectrum, the nebular flux that IXPE records. To this corrected PWN image, we add the phase-variable pulsar point source, with photons spatially distributed according to the IXPE PSF.
To complete the simulation, we assembled a plausible model of the X-ray polarization. For the pulsar, we used OPTIMA optical polarization measurements <cit.> but compressed the sweep so that the position relative to the optical peaks was mapped to the X-ray peaks. For the nebula, we assumed that magnetic field lines follow elongated features (e.g. toroidal near the central wisps, arced with the limb at the outskirts, and parallel to the jet) to construct a polarization map.
In the on-off method, we divided the data into 29 equal-spaced phase bins, with 0.897-1.103 (6 phase bins, see Fig. <ref>) as the off-pulse phase. For the pulsar polarization, we restricted our analysis to a circular 20^'' radius aperture centered on the pulsar to minimize nebula contamination and a single energy bin between 2-8 keV (IXPE optimal energy range). To determine the nebula polarization, we used 5 energy bins and a 150^'' × 150^'' area, divided into a 15×15, 10^'' pixel grid, mapping the flux from the off-pulse phases. This sub-HPD pixel grid gives a more detailed map of the polarization morphology of the nebula, although the polarization value assigned to each pixel will be slightly influenced by adjacent pixels. This may be mitigated by PSF-based deconvolution of the final maps.
In the simultaneous method, we used the latter binning scheme and solved for the desired q and u of the pulsar phase-resolved and nebula spatially-resolved spectra using the observed Q and U fluxes as:
Q_ijklm = I_ psr, ijklm× q_ psr, ij + I_ neb, ijklm× q_ neb, jkl
U_ijklm = I_ psr, ijklm× u_ psr, ij + I_ neb, ijklm× u_ neb, jkl
where the indices i, j, and (k, l) represent the phase, energy, and spatial position of the bin, and m=1-3 refers to the three IXPE telescopes. Assuming equally-spaced phase bins,
I_ psr, ijklm = ℐ_ psr, ijm×PSF_jklm
I_ neb, ijklm = ℐ_ neb, jklm / i_ max .
where ℐ_ psr, ijm and ℐ_ neb, jklm are the expected counts determined from CXO measurements of the phase-dependent pulsar spectrum and the energy-resolved image of the nebula, passed through to account for the instrument response. The three IXPE telescopes have significantly different PSFs (and slightly different effective areas), hence the m dependence. In practice, we use a long 1Ms simulation to predict the counts for a shorter observation to reduce statistical errors.
We want to find the parameters that minimize the Gaussian error, where the variances of Q_ijklm and U_ijklm are given by:
var(Q) = N(2/μ^2-q̅^2)
var(U) = N(2/μ^2-u̅^2)
cov(QU) = -Nq̅u̅
We have now constructed an over-determined least-squares problem for which a best-fit solution must exist. The fitting was performed using a scipy least-squares routine that allows for the specification of parameter bounds. We found that it was sometimes helpful to introduce physical limits {-1, 1} on the Stokes parameters to obtain a good fit to the model. Because bounded-value least squares is an iterative optimization algorithm, the error bars cannot be obtained analytically when values reach the physical q,u ∈{-1,1} bounds. To handle such cases, we recovered error bars with bootstrap analysis. We found that 500 bootstrap iterations were enough for the uncertainties to converge to within 1.5%, and confirmed that, when bounded-value least squares converged without hitting the bounds, standard error propagation gives accurate fit errors.
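For concreteness, a minimal sketch of this weighted, bounded least-squares step is given below for Stokes q only, for a single energy bin and telescope, and with the spatial pixels flattened into one index; the full problem stacks all indices, fits q and u jointly with the covariance above, and the routine and variable names here are illustrative rather than the actual pipeline code.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_stokes_q(I_psr, I_neb, Q_obs, var_Q):
    """Sketch of 'simultaneous fitting' for Stokes q (assumed shapes:
    (n_phase, n_pix) arrays for one energy bin and one telescope)."""
    n_phase, n_pix = I_psr.shape
    A = np.zeros((n_phase * n_pix, n_phase + n_pix))
    b = np.zeros(n_phase * n_pix)
    w = 1.0 / np.sqrt(var_Q)                         # inverse-sigma row weights
    row = 0
    for i in range(n_phase):
        for k in range(n_pix):
            A[row, i] = I_psr[i, k] * w[i, k]            # pulsar q_{psr,i} column
            A[row, n_phase + k] = I_neb[i, k] * w[i, k]  # nebula q_{neb,k} column
            b[row] = Q_obs[i, k] * w[i, k]
            row += 1
    sol = lsq_linear(A, b, bounds=(-1.0, 1.0))       # physical |q| <= 1 bounds
    return sol.x[:n_phase], sol.x[n_phase:]          # q_psr per phase, q_neb per pixel
```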
Figures <ref> and <ref> show the reconstructed polarization of the two fitting methods as well as the input model. To compare the two methods, we used three summary statistics elaborated below: the median error bar size, the GoF, and the number of measurements. For the pulsar measurements, the median error bars decrease by × 1.20, averaged between q and u; there are also (29/23=1.26)× more measurements in the simultaneous case. We can further characterize the systematic errors in the recovery by a `Goodness of Fit'
GoF_q = {∑_n [(q - q_mod)/σ_q]^2 / n}^1/2
and similarly for u, with an average × 1.08 improvement in recovery of the original model, even relative to the improved errors. Thus, in total, simultaneous fitting can be considered to provide a 1.20× 1.26 × 1.08 ≈ 1.63× improvement in recovery of the pulsar polarization.
For the nebula, since simultaneous fitting uses nearly all the data away from the pulsar, especially off the brightest portion of the peaks, the effective nebula exposure is larger by 1/Δϕ_ off≈ 4.9× than in the simple “off" portion of the pulse phase used to map the nebula in the on-off method. Further, the method takes account of the small expected pulsar flux in off phases to provide a cleaner measurement of the true nebula structure very close to the pulsar. The polarization maps in Figure <ref> show substantially improved source recovery and decreased typical uncertainty in the recovered pixels. Quantitatively, the median error bars decrease by ∼ 2× for both q and u. Although the GoF for the simultaneously fitted nebula is slightly larger than for the on-off nebula, the uncertainties are halved, so simultaneous fitting in fact recovers polarization values closer to the truth.
§ APPLICATION TO FIRST IXPE CRAB OBSERVATION
In February and March 2022, IXPE made a ∼91 ks observation of the Crab, its first PWN source. At the time, the mirrors were misaligned from the pointing axis by ∼3 arcmin, and we do not have energy-dependent response functions calibrated for this offset. This (and the incursion of Poisson statistics in low count bins, see below) led us to analyze in a single 2–8 keV IXPE energy bin.
Moreover, it has been discovered that errors in the present track reconstruction of IXPE photon conversion points are correlated with the initial direction of the photoelectron track, and hence, with the inferred event polarization. This leads to “polarization leakage," which induces polarized fringes about sharply localized X-ray sources (point source and compact nebulae). As described by <cit.>, these fringes average away for point sources, and hence do not affect aperture polarization measurements, but do affect the edges of extended sources, such as the Crab Nebula. The paper also gives a prescription for correcting these fringes, assuming a smooth Gaussian blurring of the point sources.
We implement here an improved version of this correction, using more detailed IXPE PSFs for each mirror assembly, derived from ground calibration data and on-sky observations of point sources. The mirror PSF are lightly smoothed to suppress numerical noise. We then compute maps of the Hessian terms H_xx, H_yy and H_xy. On-sky images have residual blur beyond the mirror PSFs, produced by incomplete aspect correction and imperfect estimation of the photon conversion points. We treat these as simple Gaussian blurs with σ_ G=2.1^'', 1.4^'', 1.2^'' for detector units (DUs) 1, 2, 3 respectively. In addition, the correlation between the conversion point and the EVPA induces a `leakage' blur, which is energy dependent and grows at large photon energies. We model the effect as σ_ L = (10+3Δ E)^1/2 arcsec, with Δ E =E_ keV-4keV for photon energy above 4 keV and Δ E=0 below. This is common to all detectors. The effective PSFs for unpolarized sources are
P_I^*(𝐱⃗) = (P_M ⋆ G(σ_G ))(𝐱⃗)
+ σ _L^2/4 (H_xx(𝐱⃗) + H_yy(𝐱⃗))
P_Q^*(𝐱⃗) = σ _L^2/4 (H_xx(𝐱⃗) - H_yy(𝐱⃗))
P_U^*(𝐱⃗) = σ _L^2/2 (H_xy(𝐱⃗))
with P_M the appropriate mirror PSF and ⋆ G the symmetric Gaussian convolution. For a polarized source, P_I,Q,U are mixed by the blurring effect as
P_I(𝐱⃗) = P_I^* (𝐱⃗) + 1 2[q_ src P_Q^*(𝐱⃗) + u_ src P_U^*(𝐱⃗)]
P_Q(𝐱⃗) = q_ src P_I^*(𝐱⃗) + P_Q^*(𝐱⃗)
P_U(𝐱⃗) = u_ src P_I^*(𝐱⃗) + P_U^*(𝐱⃗),
where the q_ src and u_ src are the estimates of the true source polarization.
In our implementation, we iterate, starting by fitting to raw IXPE I, Q, U data, to arrive at leakage-corrected polarization fits in ∼5 steps. In this method, we form leakage-corrected q and u maps for each phase and energy bin and detector. These are fed to the simultaneous fitting minimizer, which separates the nebula and pulsar signals. In our case, during fitting, the energy bins are collapsed. The results are the final derived nebula q and u maps and pulsar phased q and u values, corrected for the spatially-dependent polarization leakage. In simulation, our prescription has been shown to improve recovery of the original q_src and u_src. For details, see <cit.>. Our analysis below includes this correction, which can make substantial modifications to the nebula polarization map near the pulsar and at the outer edges. Near-pulsar corrections, in turn, feed back to changes in the phase-resolved polarization.
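As a rough illustration (not the IXPE pipeline itself), the effective PSFs of the leakage model above can be assembled from a mirror PSF image as follows. The blur scales are assumed to be converted to pixel units, the Hessian is taken here of the blurred PSF, and all function and variable names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def effective_psfs(P_M, sigma_G_pix, E_keV, pix_scale, q_src=0.0, u_src=0.0):
    """Sketch of the leakage PSF model; P_M is a 2-D mirror PSF image,
    sigma_G_pix is the residual Gaussian blur in pixels, and pix_scale
    converts the arcsec leakage blur to pixels."""
    sigma_L = np.sqrt(10.0 + 3.0 * max(E_keV - 4.0, 0.0)) / pix_scale  # arcsec -> pixels
    P = gaussian_filter(P_M, sigma_G_pix)          # P_M convolved with G(sigma_G)
    gy, gx = np.gradient(P)                        # axis 0 = y, axis 1 = x
    H_xx = np.gradient(gx, axis=1)
    H_yy = np.gradient(gy, axis=0)
    H_xy = np.gradient(gx, axis=0)
    P_I_star = P + sigma_L**2 / 4.0 * (H_xx + H_yy)
    P_Q_star = sigma_L**2 / 4.0 * (H_xx - H_yy)
    P_U_star = sigma_L**2 / 2.0 * H_xy
    # mixing for a polarized source with estimated (q_src, u_src)
    P_I = P_I_star + 0.5 * (q_src * P_Q_star + u_src * P_U_star)
    P_Q = q_src * P_I_star + P_Q_star
    P_U = u_src * P_I_star + P_U_star
    return P_I, P_Q, P_U
```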
Using the same procedure as for simulated data, we performed on-off and simultaneous fits to the Crab observation. For on-off, we used the same phase bins, area, and off-pulse range as the initial IXPE discovery paper on the Crab () and replicated those results.
For simultaneous fitting, we used a 150^'' × 150^'' area, binned into a 15 × 15 pixel grid. Using the phase bins of <cit.>, we also obtain significant polarization detection in the main peak (P1) phase bin [0.12,0.14] with a higher significance of PD = 15.1 ± 2.1%. The second peak (P2) phase bin [0.515, 0.545] was also found to be significantly polarized with PD = 8.8 ± 2.8%; these errors decrease from the on-off values.
Seeing that we can measure polarization with higher significance, we decided to use smaller bins around the peak phases, where we can expect fast sweeps in polarization angle, to better measure the polarization curve.
Here we employ 16 phase bins, as compared to the 11(+off) phase bins used in the on-off analysis. With these new bins and corrections, we obtain refined Stokes q and u phase points (Figures <ref>, <ref> and Table <ref>), which may be compared with the standard on-off results.
Polarization angle (PA) and degree (PD) are useful for comparison with models, but these do not have simple Gaussian errors. We can however plot the confidence regions for the phase bins around the two peaks (Figure <ref>), where three phase bins have above 3σ significance, and four more are above 2σ. In P1, we see a counterclockwise (CCW) sweep from ∼ 100^∘ - 150^∘. P2 hints at a CCW sweep as well, though since it only has one significant bin, more data is required to see this.
Figure <ref> shows the reconstructed nebula PD map, cut at 4.7σ significance, with green bars showing PA and magnetic field direction. As seen in , the PD is reduced at the sides of the torus where the PA sweeps rapidly through 180^∘. With leakage correction, these features are enhanced, appearing as PD holes on opposite sides of the nebula. In general, the PD is largest towards the northern and southern edges of the nebula; in part, this may be because such regions have toroidal PA at a similar angle across the PSF, minimizing polarization beam dilution. A highly polarized PD > 50% region west of the jet is also present. Generally, the magnetic field lines follow the filamentary structure partly visible in the background image.
§ DISCUSSION
In this paper, we show that simultaneous fitting has several merits. As tested with simulated data, it provides improved recovery of the pulsar polarization sweep with smaller statistical uncertainties on both pulsar and nebula measurements. This allows us to use smaller phase bins to more finely resolve the pulsar polarization. Our method essentially uses externally measured, high precision data on the time and spatial intensity structure to provide a weighting that improves extraction of the polarization signal. Perhaps the greatest limitation in the present implementation is our assumption of Gaussian statistics. While this allows an efficient linear algebra solution for the many pulse phase and nebula image polarization values, it does break down when the counts in a given bin are too low. We have noticed that such Poisson effects may be important in the low count outskirts of the PWN, although in the computations presented here <2.5% of the 16× 15× 15 × 3= 10,800 data bins have <10 counts. However, this does limit our ability to extend our decomposition to be fully energy-dependent, as the higher energy IXPE bins often have low count rate. With additional Crab exposure it will be straight-forward to extend the analysis to a modest number of spectral bins.
The polarization of the optical Crab pulsed emission has been measured repeatedly in the 50 years since its discovery. The measurements of <cit.> with the OPTIMA fiber photometer are of particularly high quality. The central fiber used for these measurements had a diameter of 2.3^'' on the sky and thus included both the Crab pulsar and nebular emission such as the “inner knot". Accordingly these authors define a “background" phase of ϕ=0.78-0.84 (dashed lines in Figure <ref>) and subtract the Q and U fluxes from this phase to get the pulsed Crab emission. This turns the optical Crab curves into “on-off" measurements. In Figure <ref>, we compare this pulsed optical polarization signal (measurements kindly supplied by G. Kanbach) with our new IXPE measurements. Alignment of the optical and X-ray curves to the radio phase convention was checked via their light curves, with the X-ray peak at ϕ=0.99 and the optical peak at ϕ=0.994 <cit.>.
There have been several attempts to compare the OPTIMA polarization data with theoretical pulsar models <cit.>, none particularly satisfactory. Clearly some ingredients are missing in our basic understanding of the pulse emission. In particular, absorption effects or contributions from multiple emission regions may introduce complications beyond the essentially geometric models that have been applied to date. Such effects should differ between the X-ray and optical bands. So it is encouraging that we do see statistically significant differences between the optical OPTIMA and IXPE polarization curves.
Most notably, the P1 q sweep starts from larger negative values and is delayed in phase. Negative u values also persist later in P1. Similarly u is much more negative in the core of P2. We do not find any significant measurements in the pulse minimum bin which is substantially nebula dominated, with a maximum of ∼ 20% pulsar counts in the central pixel (and steeply tapering in adjacent pixels). Additional signal-to-noise (S/N) can allow better separation of the pulse signal near this phase. Near the peaks the polarization values have significant sensitivity to our choice of bin boundaries, so we suspect that unresolved rapid sweeps still suppress the polarization. With additional exposure, IXPE can measure several more phase bins with good precision. Comparison of the optical and X-ray signals can then give new insights into the geometry of the pulse emission zones. By September 2023, IXPE will conclude a follow-up 300 ks observation of the Crab, which, combined with the current dataset and event quality weighting, should more than double the S/N presented here.
§ CONCLUSION
Analysis of the first IXPE Crab observation using simultaneous fitting has improved our measurements of the Crab polarization. Compared to the original paper (), we recover more bins of significant pulsar polarization and are able to use a finer phase resolution to see departures from the well-measured optical polarization. Nebula features, such as the PD holes at the edge of the torus, are better recovered. With the additional S/N from planned Crab follow-up exposure, we can substantially extend these gains. The method is of course general and can be applied to any phase varying source embedded in extended emission. We anticipate that application to other IXPE sources, such as MSH 15-5(2) and possibly B0540 and Vela, can provide improved measurements as well.
This work was supported by NASA under grant NNM17AA26C.
The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C). The Italian contribution is supported by the Italian Space Agency (Agenzia Spaziale Italiana, ASI) through contract ASI-OHBI-2017-12-I.0, agreements ASI-INAF-2017-12-H0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-HH.0 and ASI-INFN 2021-43-HH.0, and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy.
This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC).
|
http://arxiv.org/abs/2306.01533v1
|
20230602133634
|
Enhance Temporal Relations in Audio Captioning with Sound Event Detection
|
[
"Zeyu Xie",
"Xuenan Xu",
"Mengyue Wu",
"Kai Yu"
] |
cs.SD
|
[
"cs.SD",
"eess.AS"
] |
Automated audio captioning aims at generating natural language descriptions for given audio clips, not only detecting and classifying sounds, but also summarizing the relationships between audio events.
Recent research advances in audio captioning have introduced additional guidance to improve the accuracy of audio events in generated sentences.
However, temporal relations between audio events have received little attention while revealing complex relations is a key component in summarizing audio content.
Therefore, this paper aims to better capture temporal relationships in caption generation with sound event detection (SED), a task that locates events' timestamps.
We investigate the best approach to integrate temporal information in a captioning model and propose a temporal tag system to transform the timestamps into comprehensible relations.
Results evaluated by the proposed temporal metrics suggest that great improvement is achieved in terms of temporal relation generation.
Index Terms: Audio captioning, Sound Event Detection, Temporal-enhanced model
§ INTRODUCTION
An increasing amount of research has shed light on machine perception of audio events, for instance label-wise classification and detection.
Recently automated audio captioning (AAC) <cit.> has gathered much attention due to its resemblance to human perception, which involves not only detecting and classifying sounds, but also summarizing the relationship between different audio events <cit.>.
AAC has witnessed remarkable advances over the last few years.
The utilization of pre-trained audio classification and language generation models improves the captioning performance significantly <cit.>.
The incorporation of semantic guidance (e.g., keywords <cit.>, sound tags <cit.> or similar captions <cit.>) and new loss functions <cit.> are also hot topics.
While previous work endeavors to better detect audio events and improve caption quality, little attention is paid to summarizing relations between different sound events in a caption.
Current captioning models rarely output sentences involving temporal conjunctions like “before”, “after” and “followed by” that suggest the sequential relations between events.
A statistical examination of a well-performing AAC model <cit.> indicates that only 11.1% of generated captions include precise temporal relations.
Different from vision-based captioning where a plethora of spatial attributes can be extracted, audio events' relations are mainly focused on their time specificity as shown in <Ref>.
Whether two audio events occur sequentially or simultaneously is important to understand the audio content correctly, which is as critical as whether two objects in an image are adjacent, stacked, or overlaid.
Sound event detection (SED), a task to detect on- and off-sets of each sound event, on the other hand, provides extensive information on the temporal location of each event.
Previous works integrated SED outputs by direct concatenation to improve the overall quality and accuracy of generated captions <cit.>.
However, whether such straightforward fusion methods can help a captioning model learn about temporal relations between events remains unexplored.
SED output contains information about the occurring probability of hundreds of sound events in each frame.
These redundant low-level features are difficult to align with the temporal conjunction words in a caption, making it difficult for the captioning model to leverage SED outputs.
In this work, we first directly integrate SED outputs by concatenation (cat-prob-AAC) and attention (attn-prob-AAC), to investigate the performance of direct SED integration methods.
The results demonstrate that such approaches bring little improvement in temporal relationship description accuracy.
Therefore, it is necessary to distill high-level, comprehensible temporal information from SED outputs, for a better alignment with audio caption content to mimic humans' temporal information processing procedure.
Inspired by this, we first analyse the current AAC data and propose a 4-scale temporal relation tagging system (i.e. simultaneous, sequential) based on human annotations.
A clear matching mechanism is further proposed to infer the temporal relations from SED outputs and align with the temporal tags.
Based on this, we propose a temporal tag-guided captioning system (temp-tag-AAC), which takes temporal tag guidance inferred from the SED output, representing the complexity of the temporal information, to help the model generate captions with accurate temporal expressions.
To measure the quality of generated captions in terms of temporal relationship descriptions, we propose ACC_temp and F1_temp.
Evaluations with these temporal-focused metrics and commonly-adopted captioning metrics (e.g., BLEU) indicate that temp-tag-AAC significantly outperforms the baseline model and the direct SED integration approaches, especially in temporal relationship description accuracy.
Our contributions are summarized as follows:
* Innovative utilization of SED to enhance the temporal information in AAC, with a temporal tag to better imitate humans' inference on temporal relations.
* Metrics that are specifically designed to measure a system's capability in describing sound events' temporal relations.
* Validation shows that the proposed temp-tag-AAC leverages SED outputs to significantly improve the accuracy of temporal expression as well as the caption quality.
§ TEMPORAL-ENHANCED CAPTIONING SYSTEM
This section illustrates our temporal-enhanced captioning system shown in <Ref>, which includes: 1) the baseline model for audio captioning; 2) the SED model that predicts the probability of events; 3) two direct approaches for integrating the probability as temporal information; 4) the proposed temp-tag-AAC approach.
§.§ Baseline Approach
The baseline framework follows an encoder-decoder architecture which achieves competitive performance in DCASE challenges <cit.>.
*Audio Encoder PANNs <cit.> CNN14, a pre-trained convolutional neural network, is adopted to extract the feature from the input audio 𝒜.
We use a bidirectional gated recurrent unit (GRU) network as the audio encoder to transform the feature into an embedding sequence 𝐞^A ∈ℝ^T × D.
The combination takes advantage of the pre-trained large model while setting some parameters trainable for adaptation to the target captioning task.
𝐞^A = 𝐄𝐧𝐜𝐨𝐝𝐞𝐫(𝐏𝐀𝐍𝐍𝐬(𝒜))
*Text Decoder
We use a unidirectional GRU as the text decoder to predict the caption word by word.
At each timestep n, a context vector 𝐜 is calculated by attention mechanism <cit.>, given 𝐞^A and the previous hidden state 𝐡_n-1:
α_n,t = exp(score(𝐡_n-1,𝐞_t^A))/∑_t=1^Texp(score(𝐡_n-1,𝐞_t^A))
𝐜 = 𝐀𝐓𝐓𝐍(𝐡_n-1,𝐞^A) = ∑_t=1^Tα_n,t𝐞_t
Then the text decoder predicts the next word based on previously generated words w_0:n and 𝐜.
At the first timestep, w_0 is a special “<BOS>” token denoting the beginning of a sentence.
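To make the decoder's attention step concrete, a minimal PyTorch sketch is given below; the additive score function and the dimensions are our assumptions, since the text only specifies a generic score function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Sketch of the attention step used by the baseline GRU decoder."""

    def __init__(self, hidden_dim, embed_dim, attn_dim=128):
        super().__init__()
        self.proj = nn.Linear(hidden_dim + embed_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, h_prev, enc):
        # h_prev: (B, H) previous decoder state; enc: (B, T, D) audio embeddings e^A
        B, T, _ = enc.shape
        h_rep = h_prev.unsqueeze(1).expand(B, T, h_prev.size(-1))
        scores = self.v(torch.tanh(self.proj(torch.cat([h_rep, enc], dim=-1)))).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                   # attention weights over T
        c = torch.bmm(alpha.unsqueeze(1), enc).squeeze(1)   # context vector c
        return c, alpha
```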
§.§ SED Architecture
To ensure the reliability of the SED results, we use a separately-trained SED model.
It adopts a convolutional recurrent neural network architecture with 8 convolutional layers attached by a BiGRU.
The convolution layers take a structure similar to the CNN10 in PANNs, with the difference that we use a downsampling ratio of 4 on the temporal axis.
Compared with other SED models provided in PANNs which typically utilize a downsampling ratio of 32, we keep a relatively high temporal resolution for more accurate SED.
Given an audio clip, the SED model outputs the predicted probability 𝐞̃^S∈ℝ^T̃× M, where T̃ and M denote the sequence length and the number of sound event categories respectively.
Due to the higher resolution of the SED model, T̃ > T.
The probability is temporally aligned to the audio embedding to obtain 𝐞^S ∈ℝ^T × M by pooling over every T̃/T frames along the temporal axis.
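A minimal sketch of this alignment step is shown below, assuming mean pooling and an integer T̃/T ratio (the text only says "pooling"); the names are ours.

```python
import torch

def align_sed_to_audio(e_sed_tilde, T):
    """Pool the frame-level SED posterior (T_tilde, M) down to the audio
    embedding resolution T; mean pooling is an assumption."""
    T_tilde, M = e_sed_tilde.shape
    ratio = T_tilde // T
    return e_sed_tilde[: ratio * T].reshape(T, ratio, M).mean(dim=1)   # (T, M)
```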
§.§ Direct SED Integration
*Cat-prob-AAC The probability is concatenated onto audio embedding, resulting in 𝐞^A_new∈ℝ^T × (D+M), which is used as the input to the decoder instead of the original 𝐞^A.
𝐞^A_new = 𝐂𝐎𝐍𝐂𝐀𝐓(𝐞^A, 𝐞^S)
*Attn-prob-AAC
Another attention module is used to integrate the probability with the context vector 𝐜 obtained from <Ref>.
The result is used as the input to the GRU instead of 𝐜.
𝐜^new = 𝐀𝐓𝐓𝐍(𝐜, 𝐞^S)
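A sketch of the concatenation variant is shown below; the attention variant simply reuses an attention module such as the one sketched above for the baseline decoder, with the context vector 𝐜 as the query over the aligned SED posterior 𝐞^S. Names are illustrative.

```python
import torch

def cat_prob(e_audio, e_sed):
    """Cat-prob-AAC sketch: append the aligned SED posterior to each frame of
    the audio embedding, giving the (T, D + M) decoder input e^A_new."""
    return torch.cat([e_audio, e_sed], dim=-1)
```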
§.§ Temp-tag-AAC
In our proposed temp-tag-AAC system, we transform the SED outputs into quantized temporal tags to make it easier for the model to learn the correspondence between SED outputs and captions.
We use double threshold post-processing <cit.> with a low threshold of 0.25 and a high threshold of 0.75 to obtain the on- and off-sets of detected sound events from probability 𝐞̃^S.
To infer the relation between two different audio events, we compare their overlap with the duration of the shorter event.
If the overlap is less than half of that duration, the two events are considered to occur sequentially; otherwise, they are considered to occur simultaneously.
Based on the relations in the audio clip obtained above, a 4-scale temporal tag representing the complexity of the temporal information is extracted according to <Ref>.
The process is shown in <Ref>.
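A sketch of this relation inference is given below. The pairwise overlap rule follows the text; the mapping from the set of pairwise relations to the four tags is our guess at the table the text refers to, and the tag strings are placeholders.

```python
def pair_relation(ev1, ev2):
    """Classify two detected events, given as (onset, offset) pairs in
    seconds from the SED output, as sequential or simultaneous."""
    overlap = max(0.0, min(ev1[1], ev2[1]) - max(ev1[0], ev2[0]))
    shorter = min(ev1[1] - ev1[0], ev2[1] - ev2[0])
    return "sequential" if overlap < 0.5 * shorter else "simultaneous"

def temporal_tag(events):
    """Map all pairwise relations in a clip to a 4-scale tag (placeholder names)."""
    rels = {pair_relation(a, b)
            for i, a in enumerate(events) for b in events[i + 1:]}
    if not rels:
        return "<single>"            # zero or one detected event
    if rels == {"simultaneous"}:
        return "<simultaneous>"
    if rels == {"sequential"}:
        return "<sequential>"
    return "<mixed>"                 # both relation types present
```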
During inference, the temporal tag inferred from the SED outputs is used as w_0 fed to the decoder as temporal guidance, replacing the original <BOS>.
To help the model learn the correspondence between the temporal tag and the temporal descriptions in captions better, the ground truth tag is fed to the decoder during training.
The ground truth tag is extracted from the annotations based on the occurrence of conjunction words according to <Ref>.
We manually collect these conjunction words by analyzing the existing AAC datasets, such as “while”, “and”, etc. indicating “simultaneously,” and “follow", “then”, etc. indicating “sequentially”.
§ EXPERIMENTAL SETUP
§.§ Datasets
AudioSet <cit.> is a large-scale weakly-annotated sound event dataset, where sound events appearing in each audio clip are annotated, which consists of 527 categories.
AudioSet also provides a small-scale strongly-annotated subset <cit.> which contains additional on- and off-sets of present events.
AudioCaps <cit.> is the current largest AAC dataset, containing 50k+ audio clips collected from AudioSet.
According to the extraction method mentioned in <Ref>, 13487, 29399, 5438 and 8472 captions in AudioCaps annotations belong to the 4 scales respectively.
The latter two more complex scenarios account for ≈1/4 of the total.
Clotho <cit.> is another AAC dataset, containing 5k+ audio clips.
The distribution of ground truth tag numbers in Clotho annotations is
8246, 18077, 926, 2396 respectively.
The latter two scales account for ≈1/10, which is much more imbalanced than AudioCaps.
As a matter of fact, Clotho derives from Freesound, where the audio clips often contain only the indicated sound with minimal background noise <cit.><cit.>.
§.§ Hyper-parameters
The SED model is first pre-trained on the weakly-annotated AudioSet, and then fine-tuned on the strongly-annotated AudioSet subset <cit.>.
It achieves a d' of 2.37 on the strongly-annotated AudioSet evaluation set, compared with 1.39 in <cit.>, indicating that it provides reliable results for caption generation.
The training of audio captioning models, including the baseline model and other three approaches, follows the setup in <cit.>.
Models are trained for 25 epochs.
Cross-entropy loss is used along with label smoothing (α = 0.1).
We use a linear warm-up and an exponential decay strategy to schedule the learning rate, whose maximum value is 5× 10^-4.
Scheduled sampling is used with the proportion of teacher forcing decreasing linearly from 1 to 0.7.
Beam search with a size of 3 is adopted during inference.
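The learning-rate schedule can be sketched as follows; the warm-up length and decay rate are assumptions, since only the maximum value of 5×10^-4 is specified above.
```python
def lr_at_epoch(epoch, warmup_epochs=5, max_lr=5e-4, decay=0.9):
    """Linear warm-up to max_lr over the first epochs, then exponential decay per epoch."""
    if epoch < warmup_epochs:
        return max_lr * (epoch + 1) / warmup_epochs
    return max_lr * decay ** (epoch - warmup_epochs)
```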
§.§ Metrics
Generated captions are evaluated by our proposed temporal metrics and commonly-adopted metrics in AAC task.
Temporal Metrics
To better evaluate whether a generated caption includes temporal relations, we take the time-related conjunction words as a clue.
Captions can be classified upon whether there exists sequential conjunction words or not.
The conjunction words include “follow, followed, then, after”, which are only used to suggest temporal relations between sound events.
For example, “Door closed then a man talking" is regarded as a positive example (with temporal output) since “then” appears.
We exclude simultaneous conjunction words (e.g. “and, with, as, while”) because they might carry a semantic conjunction function and do not always signify temporal relations.
Whether these words express temporal relations cannot be recognized automatically and accurately.
Naturally, the temporal evaluation can be regarded as a binary classification evaluation: to determine whether there are sequential conjunction words or not in a caption.
We therefore use the binary classification evaluation metrics ACC_temp and F1_temp to measure the accuracy of temporal relation description.
Among the 5 reference captions, the maximum value (i.e., the reference containing the most detailed temporal information) is taken as the label to ensure metric correctness.
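A minimal sketch of computing ACC_temp and F1_temp is given below; a caption is treated as positive when it contains any sequential conjunction word, and the label is the maximum over its 5 references.
```python
from sklearn.metrics import accuracy_score, f1_score

SEQ_WORDS = {"follow", "followed", "then", "after"}

def has_temporal(caption):
    return any(word in SEQ_WORDS for word in caption.lower().split())

def temporal_metrics(predictions, references):
    """predictions: list of generated captions; references: list of 5-reference lists."""
    y_pred = [int(has_temporal(c)) for c in predictions]
    y_true = [int(max(has_temporal(r) for r in refs)) for refs in references]
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred)
```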
Overall Quality Evaluation Metrics
We also adopt common audio captioning metrics to evaluate the overall quality of generated captions, including BLEU <cit.>, ROUGE_L <cit.>, METEOR <cit.>, CIDEr <cit.>, SPICE <cit.> and FENSE <cit.>.
For FENSE we do not penalize grammatical errors to focus on evaluating the accuracy of captions' semantic information.
§ RESULTS AND ANALYSIS
§.§ Temporal Relation Enhancement
Comparing temp-tag-AAC with the baseline model, our tag mechanism greatly improves the accuracy of temporal expressions on both datasets (shown in <Ref>).
For AudioCaps, both ACC_temp and F1_temp are significantly improved, suggesting the effectiveness of our method in enhancing temporal relations.
Due to the imbalanced categories of Clotho, F1_temp is more reliable compared with ACC_temp, which also indicates a better capability to generate temporal-rich captions.
Without guidance, the baseline model tends to output general conjunction words that do not represent specific relations (“and” and “while” are typical examples), resulting in loss of attention to the temporal relations between sound events.
Temp-tag-AAC restricts the output by inputting a tag which guides the model to use conjunction words to describe temporal relations.
Typical examples are shown in <Ref>:
by incorporating the temporal tags, temp-tag-AAC successfully expresses the temporal relations while the baseline model simply uses “and”.
§.§ Overall Quality of Generated Sentences
The overall quality is evaluated by commonly-adopted metrics and shown in columns 3 to 8 of <Ref>.
On AudioCaps, temp-tag-AAC outperforms the baseline model on some metrics, but falls behind on others, indicating that our method is comparable to the baseline.
However, on Clotho, the quality of the caption sentences decreases, though the accuracy of temporal relations still sees an increase.
The performance drop is attributed to the data discrepancies between AudioSet and Clotho.
As stated in <Ref>, Clotho audio samples exhibit vastly different characteristics from those in AudioSet.
As a result, the SED model trained on AudioSet tends to output complex temporal tags (i.e., “3”) for Clotho data when only one sound event is present.
The captioning model trained with such tags is prompted to generate sentences with complex conjunctions for single-event audio clips, which undermines its ability to generate reference-like captions.
The declined quality on Clotho indicates that adaptive SED deserves further exploration for generalization purpose.
§.§ Comparison Between Different Approaches
Comparing the three different methods of integrating temporal information, we can conclude that direct integration by concatenation or attention only slightly improves the temporal description accuracy, but is far less effective than temp-tag-AAC.
This validates our intuition that human-like quantized prompts are more conducive to learning the correspondence between temporal information and conjunctions than direct outputs of SED.
§ CONCLUSIONS
This paper aims to improve the performance of expressing temporal information in AAC task.
We demonstrate that direct integration of SED outputs provides little help in improving the temporal relation description accuracy.
To overcome this challenge, we propose temp-tag-AAC, which mimics human judgment by introducing 4-scale tags to guide the model to utilize temporal information.
Binary classification metrics ACC_temp and F1_temp are proposed to measure the accuracy of the temporal relation description.
Experimental results show that temp-tag-AAC significantly improves the temporal relation description accuracy.
With the guidance from the temporal tag, temp-tag-AAC uses conjunctions to express the temporal relations between sound events.
It is also comparable with the baseline in terms of the overall semantic quality of generated captions.
IEEEtran
|
http://arxiv.org/abs/2306.10621v1
|
20230618190156
|
UniSG^GA: A 3D scenegraph powered by Geometric Algebra unifying geometry, behavior and GNNs towards generative AI
|
[
"Manos Kamarianakis",
"Antonis Protopsaltis",
"Dimitris Angelis",
"Paul Zikas",
"Mike Kentros",
"George Papagiannakis"
] |
cs.GR
|
[
"cs.GR",
"cs.LG",
"68U05"
] |
[email protected]
0000-0001-6577-0354
FORTH - ICS, University of Crete, ORamaVR
Heraklion
Greece
0000-0002-5670-1151
[email protected]
University of Western Macedonia, ORamaVR
Kozani
Greece
0000-0003-2751-7790
[email protected]
FORTH - ICS, University of Crete, ORamaVR
Heraklion
Greece
0000-0003-2422-1169
[email protected]
University of Geneva, ORamaVR
Geneva
Switzerland
0000-0002-3461-1657
[email protected]
FORTH - ICS, University of Crete, ORamaVR
Heraklion
Greece
0000-0002-2977-9850
[email protected]
FORTH - ICS, University of Crete, ORamaVR
Heraklion
Greece
This work presents the introduction of UniSG^GA, a novel integrated scenegraph structure that incorporates behavior and geometry data of a 3D scene. It is specifically designed to seamlessly integrate Graph Neural Networks (GNNs) and address the challenges associated with transforming a 3D scenegraph (3D-SG) during generative tasks. To effectively capture and preserve the topological relationships between objects in a simplified way within the graph representation, we propose UniSG^GA, which seamlessly integrates Geometric Algebra (GA) forms.
This novel approach enhances the overall performance and capability
of GNNs in handling generative and predictive tasks, opening up new possibilities and aiming to lay the foundation for further exploration and development of
graph-based generative AI models that can effectively
incorporate behavior data for enhanced scene generation and
synthesis.
Meta-Learning for Airflow Simulations with Graph Neural Networks
George Papagiannakis
July 31, 2023
================================================================
§ INTRODUCTION
The recent success of pre-trained foundation models, such as GPT (Generative Pre-trained Transformer), has paved the way for evolution in geometric deep learning <cit.> and GNNs <cit.>. Such advancements have greatly improved the generation of static 3D scenes <cit.> by incorporating relational patterns within the graph topology as node or link features. Typically, these scenes rely on well-defined 3D-SGs. The creation of immersive VR experiences requires the incorporation of behavioral information and interactions, which are specified with the adoption of the graph structure Lessons-Stages-Actions (LSA) <cit.>.
Nevertheless, the efficient input of all encapsulated data to GNNs poses a challenge, as it requires managing three distinct graph structures (see Figure <ref>), namely 3D geometry, interactive event-based animations encapsulated as behaviours (LSAs) and GNNs. This introduces a great complexity in maintaining transformations between these graphs that may lead to a potential bottleneck. To address these limitations, we propose the Universal Scenegraph (UniSG), a novel data structure aimed at providing a no-code approach featuring GNNs, that generate new nodes, edges, and features, reflecting the creation of 3D models, scenes, and behavioral steps. UniSG paves the way towards generative AI techniques, by integrating Entities-Components-Systems (ECS), 3D-SGs, and LSAs with GNNs, simplifying the creation of 3D scenes with embedded behavior, and mitigating existing process bottlenecks.
UniSG leverages a representation form that is able to capture and preserve relative topological information between parent and child entities. Rather than relying on conventional Euclidean-based matrix form or Euler angles or dual/single quaternions, commonly employed in 3D scenes, we utilize Geometric Algebra (GA) based forms, such as multivectors; the resulting model is denoted as . GA-based representations
enable the encapsulation of diverse transformation
data in a unified format, facilitating
<cit.>
deeper geometric connections, thereby influencing the performance of GNNs across various tasks
(see Section <ref>).
§.§ The importance of GNNs for Generative AI
GNNs have gained significant attention in recent years due to their effectiveness in handling graphs of varying types, sizes, structures, connectivity patterns and data with complex relational structures, due to the high flexibility and adaptability of their architectures. Their design makes them particularly well suited for generative and predictive AI tasks that involve graph-structured data, like complex 3D scenes, where nodes represent objects and edges encode relationships or connections between them, as they are able to capture spatial relationships, model dependencies and extract meaningful representations. Specifically for an entity-component-systems (ECS) in a scenegraph CG framework <cit.>, a GNN involves heterogeneous nodes, representing entities and diverse components, containing object-related data (transform, mesh, image texture data, etc.).
GNN aggregation allows the capture of the graph's local dependencies, while its propagation through the graph allows the capture of global dependencies. In this context, complex interactions between nodes may also be captured by iterative node representation refinement, using message-passing mechanisms. Such rich information about the nodes and their spatial relationships, learned from the training data, may be encoded in meaningful and low-dimensional embeddings, that involve fixed-length vectors or a continuous feature space. The GNN model may be trained in a) supervised manner, involving annotated 3D-SGs, aiming to predict missing elements or labels, and b) unsupervised manner involving graph similarity or reconstruction losses, aiming to optimize the generative model.
§.§ GA and GNNs
The combination of GA with
GNNs offers several benefits across different domains
and tasks<cit.>.
GA-based approaches have demonstrated superior
information (inherent structures and correlations among multiple dimensions) preservation, as multi-dimensional data are represented through multivectors. This leads to improved
performance, compared to traditional techniques, in tasks such as
time series processing, hyperspectral image analysis, and traffic
prediction <cit.>. They also exhibit reduced overfitting risks, compared to real-valued counterparts, making them more effective in capturing complex features while maintaining the
multi-dimensionality of the data.
GA is particularly advantageous in handling rotational data,
making it valuable for computer vision tasks, like pose estimation or protein prediction <cit.>.
GA-based formulations enable better regression on rotations and
can reduce errors in high-noise datasets while learning fewer
parameters. Additionally, GA-based graph feature embedding
enhances the quality and presentation of graph features in GNNs. By leveraging the high algebraic dimensions
of GA, feature information distortion across hidden layers can be
minimized, resulting in improved performance in graph-related tasks.
Furthermore, GA-based approaches can
reduce computational complexity by utilizing appropriate multivector
representations and exploiting the algebraic properties of GA.
This reduction in complexity enables more efficient data processing
and analysis, with fewer parameters to be learned without
compromising performance.
In summary, the integration of GA with Neural
Networks offers benefits, such as enhanced representation of
multi-dimensional data, improved information preservation,
effective handling of rotational data, better graph feature
embedding, robustness to poor network conditions, and reduction
of computational complexity. These advantages make GA a
valuable framework for various scientific domains and tasks,
facilitating more accurate and efficient data processing and
analysis.
Paper Overview. In Section <ref> we introduce
the UniSG model, whereas in Section <ref> we
propose the enhanced model that exploits GA-based
representation forms. These models are implemented and available
to use within the Elements project, which now includes
enhanced GA-functionalities, as described in Section <ref>.
Results obtained for our models performance are presented in
Section <ref>, followed by Conclusions, Future Work
and Acknowledgments.
§ UNISG: A UNIVERSAL SCENEGRAPH
The UniSG system, introduced in a concise manner in
<cit.>, exhibits a heterogeneous graph structure
built upon the Entity Component System in a Scenegraph
(ECSS) model, such as the one proposed in <cit.>.
This graph encompasses diverse component types capable
of storing both geometric and behavioral information
relevant to interaction with the 3D scene and events
triggered by specific conditions. Specifically, the UniSG
graph incorporates three types of components: , ,
and . The components maintain a count of node
types among their children, while the components store
a 16-dimensional vector obtained by flattening the corresponding
transformation matrix.
The components house a feature vector of size 1024,
representing the mesh using a suitable encoder such as
the AtlasNetEncoder <cit.> combined with
a Poisson sampling process. This encoding methodology ensures
a fixed-size representation regardless of the complexity of
the original mesh. Subsequently, the resulting vector can be
decoded using the AtlasNetDecoder to generate a point cloud,
which can then be further reconstructed into a triangulated mesh.
To incorporate behavioral functionality, the UniSG system
introduces a fourth component that stores data
pertaining to desired behavioral characteristics, accompanied
by appropriate systems responsible for processing
this data. These ECS components and systems effectively
represent user actions required within a training scenario,
akin to those stored in the Lesson-Stages-Actions (LSA)
data structure <cit.>.
The nodes adhere to a standardized structure for
all actions and store action-specific data and conditions in
vector form. The diverse systems continuously traverse
the graph or its designated sections to validate whether the
specified conditions are met.
The architectural elements of the ECS framework are depicted in
Figure <ref> as follows. The black nodes represent
entities, while
the blue nodes represent components, which encapsulate various
data such as transformations, meshes, and actions.
Systems, represented by red lines, process the data contained
in components and perform specific tasks while traversing the
graph. Graph features, highlighted in yellow, are represented in vector form, enabling their utilization by GNNs for further analysis and processing.
Figure <ref> also exemplifies the implementation
of an "Insert" action within the UniSG system. In this specific
scenario, the
system is responsible for verifying whether the placement
of the scalpel on the knee adheres to the specified spatial
boundaries. This check is performed when the system visits
the component.
To consolidate disparate data types into a unified format,
various file formats commonly employed have been merged into
a single master file. Pixar's future-proof Universal Scene Description (USD) format (<http://graphics.pixar.com/usd/>) has been selected for its exceptional versatility, enabling the inclusion of more
advanced features such as VR-Recording <cit.>.
§ : EMPOWERING UNISG WITH GEOMETRIC ALGEBRA
The original UniSG model employed a component, which stored
the topological relationship between an entity and its parent as
a 16-dimensional array vector.
This vector was obtained by flattening a 4x4 transformation
matrix, resulting from the multiplication of Translation, Rotation,
and Scaling matrices.
In this paper, we propose the model, which overcomes
the limitation of relying solely on matrix-derived vectors.
The model suggests the utilization of alternative
forms of transformation data, allowing for a more diverse
range of representations. Particularly, we advocate for the
adoption of GA to express data that
represents geometrical relationships. The integration of GA is
not merely intended to promote its acceptance, but rather to
demonstrate its potential to yield improved outcomes in various
scientific domains, particularly those involving predictive and
generative tasks, with a special focus on GNNs.
§ WITHIN THE ELEMENTS PROJECT
The proposed structure is already implemented within
the Elements project, introduced in <cit.>, similar
to its predecessor UniSG.
Elements, presents
a pioneering open-source pythonic framework based on
entity-component-systems (ECS) implemented within a scenegraph architecture.
It is explicitly tailored to address the demands of scientific, visual, and
neural computing applications. Comprised of three vital Python components—pyECSS,
pyGLV, and pyEEL—the Elements package offers a foundational implementation of the
ECS paradigm, accompanied by practical examples that proficiently familiarize
even inexperienced computer graphics programmers with fundamental principles
and methodologies. Notwithstanding its straightforwardness, Elements retains a
transparent nature, affording users the ability to scrutinize and manipulate
each stage of the graphics pipeline. Leveraging Python's inherent advantages
in rapid prototyping and development, users can augment Elements' capabilities
by introducing novel components and systems or refining existing ones.
The collection of jupyter notebooks within the pyEEL repository serves as a
demonstrative repository for showcasing the influence of Elements' present and
future features across diverse scientific domains and packages, thereby establishing
a valuable pedagogical resource for both novice and intermediate developers.
To facilitate the transition to GA forms, pyEEL now
incorporates a series of Jupyter notebooks that serve three
purposes: (a) introducing basic
GA concepts to users unfamiliar with GA, (b) demonstrating
the equivalence between different representation forms in a
digestible manner for intermediate GA users, and (c) presenting
more advanced applications of these principles, such as model
animation using GA, for experienced GA users.
§.§ Geometric Algebra powered 3D scenegraph
Currently, matrix representations dominate the field due to
their ease of implementation and compatibility with GPU
shader-level operations. Although quaternions have mitigated
issues such as gimbal lock and interpolation artifacts when
evaluating rotation matrices, GA introduces a further advancement
in representation forms. By utilizing translators, rotors, and
dilators as GA-based counterparts for translation, rotation,
and dilation, respectively, we can achieve improved results
both quantitatively (reducing the number of keyframes required for
interpolation) and visually <cit.>.
Complex operations, such as
extracting geometric information from motors (i.e., geometric
products of a translator
and a rotor), are now performed with ease, by
leveraging the capabilities of the
well-maintained Clifford Python package
<cit.>, facilitating efficient transmutation between different forms.
Specifically, let M be a 4x4 matrix representing a rotation
followed by translation. It is well known that the top
left 3x3 submatrix is a rotation matrix R and the first 3 elements of the last column form the translation vector t. From the matrix R we can extract the angle/axis, and therefore determine
the equivalent unit quaternion q that expresses the same rotation.
Finally, having the quaternion and the translation vector, one
can easily concatenate them to obtain the respective
dual-quaternion dq. The following is summarized in
(<ref>), where rotational data
are represented in cyan, translational in blue and mixed data in purple.
M =
[ m_1 m_2 m_3 t_1; m_4 m_5 m_6 t_2; m_7 m_8 m_9 t_3; 0 0 0 1 ]⇔
R = [ m_1 m_2 m_3; m_4 m_5 m_6; m_7 m_8 m_9 ]& t = (t_1, t_2, t_3)
⇔(Angle,Axis) & t⇔Quaternion q & t⇔Dual-Quaternion dq.
From the translation vector t, we can easily determine the
corresponding translator T_PGA in 3D PGA as follows:
T_PGA = 1 -0.5e'_0(t_1e'_1+t_2e'_2+t_3e'_3),
where e'_0,e'_1,e'_2 and e'_3 are basis vectors of 3D PGA.
Similarly, we can derive the
corresponding translator T_CGA in 3D CGA as :
T_CGA = 1 -0.5e_0(t_1e_1+t_2e_2+t_3e_3)(e_4+e_5),
where e_0,e_1,e_2, e_3,e_4 and e_5 are basis vectors of 3D CGA.
Extraction of the vector t from both T_PGA and T_CGA is straightforward as long as the multivectors are normalized; otherwise,
a division by the scalar part is initially required.
Given a unit quaternion q=q_0 + q_1i+q_2j+q_3k, we can easily determine the
respective rotor R_PGA in 3D PGA and
R_CGA in 3D CGA (see <cit.>) as
R_PGA = q_0 - q_3e'_12 +q_2e'_13 - q_1e'_23, and
R_CGA = q_0 - q_3e_12 +q_2e_13 - q_1e_23,
where {e'_12, e'_13, e'_23} and
{e_12, e_13, e_23} are respectively PGA and CGA
basis vectors. In conclusion, the following
equivalencies hold:
T_PGA⇔t⇔T_CGA, and R_PGA⇔q⇔R_CGA.
Lastly, in <cit.>, it is shown that
given a PGA motor M_PGA resulting from the geometric product
of T_PGA and R_PGA, one may extract the latter two.
The same holds for a CGA motor M_CGA resulting from the geometric product of the translator T_CGA and the
rotor R_CGA,
yielding:
M_PGA⇔R_PGA & T_PGA, and M_CGA⇔R_CGA & T_CGA.
Using all equivalencies described above we can now extend
(<ref>) to the complete equivalency list of representation forms; all equivalencies can occur using
functions implemented within the Elements framework:
Transformation Matrix M⇔Rotation Matrix R & vector t⇔
(Angle,Axis) & t⇔Quaternion q & t⇔Dual-Quaternion dq.
⇔M_PGA⇔R_PGA & T_PGA⇔M_CGA⇔R_CGA & T_CGA.
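For illustration, a minimal NumPy sketch of this conversion chain (matrix → quaternion and translation vector → PGA translator and rotor coefficients, following the equations above) is given below; it is not the Elements/Clifford implementation, and it assumes a pure rigid transform whose rotation angle is not close to 180°.
```python
import numpy as np

def matrix_to_quat_trans(M):
    """Decompose a 4x4 rigid transform into a unit quaternion q = (q0, q1, q2, q3) and translation t."""
    R, t = M[:3, :3], M[:3, 3]
    q0 = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    q1 = (R[2, 1] - R[1, 2]) / (4.0 * q0)
    q2 = (R[0, 2] - R[2, 0]) / (4.0 * q0)
    q3 = (R[1, 0] - R[0, 1]) / (4.0 * q0)
    return np.array([q0, q1, q2, q3]), t

def to_pga_translator_rotor(q, t):
    """Coefficients of T_PGA and R_PGA, stored as {basis blade: coefficient} dictionaries."""
    q0, q1, q2, q3 = q
    translator = {"1": 1.0, "e01": -0.5 * t[0], "e02": -0.5 * t[1], "e03": -0.5 * t[2]}
    rotor = {"1": q0, "e12": -q3, "e13": q2, "e23": -q1}
    return translator, rotor
```
The CGA counterparts follow the same pattern using the basis blades of 3D CGA.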
§ RESULTS
To validate the effectiveness of our proposed approach,
we conducted three experimentation tasks in the domains of
classification, generative modeling, and topology prediction for
3D scenegraphs. In
Figures <ref> and <ref>, we present the obtained results
using different representation forms for the
component of the model. Specifically, we compare the use
of a) flattened matrices (representing the original UniSG), b) CGA and c) PGA multivectors, d) a vector for translation combined with an angle and an axis for rotation, as well as e) a dual-quaternion representation.
Each task is accompanied by a comparison graph, demonstrating
the performance of the GA-based representations in relation to the
conventional Euclidean-oriented formats. The results consistently
show that the utilization of GA-based representation forms, such
as CGA/PGA multivectors and dual-quaternions, either outperforms
or performs on par with the traditional flatten matrices
representation.
§.§ Classification
Our methodology was evaluated through a classification task
involving a neural network architecture composed of two
Convolutional layers. The GraphSAGE convolution operation was
applied to the input graph within this framework. To assess the
performance of our approach, we curated a dataset comprising of
100 3D scenes. These scenes were generated using a random noise-based data augmentation technique, which involved perturbing the components of two behaviorally rich 3D scenes modeled using both the UniSG and UniSG^GA systems. The scenes selected for
augmentation were a surgical operating room (OR) and a living
room. The dataset was split into training and testing sets, with a
ratio of 70% for training and 30% for testing. The neural
network model was trained for 20 epochs, and the GNN attention
mechanism was employed. In the experimentation phase of our
approach, we performed 10 runs for each experiment, which,
remarkably, achieved a 100% accuracy on both the training and
testing splits, demonstrating its effectiveness.
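For illustration, a minimal sketch of such a two-layer GraphSAGE graph classifier, using PyTorch Geometric, is given below; the hidden width, the mean-pooling readout, and the omission of the attention mechanism are simplifying assumptions.
```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_mean_pool

class SceneClassifier(torch.nn.Module):
    """Two GraphSAGE convolutions followed by a global readout and a linear head."""
    def __init__(self, in_dim, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))
```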
In the experimentation results of the classification task,
depicted in Fig <ref>, we notice a low initial
mean accuracy on all methods, indicating a possible need for
longer training or model adjustments. Accuracy improves
consistently over epochs, exhibiting a few fluctuations in CGA and
PGA. The steepness of the Vector+Angle/Axis curve indicates that
the model learns quickly, as its accuracy reaches 100% after 7.5
epochs. All curves seem to be converging to 100% accuracy after
17 epochs, a clear sign that the models are performing well on the training
data.
We also notice a low initial loss on all curves, with the Vector+Angle/Axis curve minimizing faster than the others after 10 epochs.
All loss curves seem to converge after 18 epochs, indicating a well-performing model.
§.§ Generative AI using UniSG^GA
Our approach was further tested on a generative task. For
this purpose, we generated a dataset of 1000 unique scenes with
meaningful layouts, specifically representing a surgical operating
room (OR). These scenes were then utilized to train a Conditional
Graph Variational AutoEncoder (CGVAE). The primary objective of
the CGVAE is to enable the addition of objects to an existing or
empty scene based on their category, either sequentially or in
bulk. Ultimately, since the utilized structure includes behavior components for all object entities, along with the respective systems, we aim to train our autoencoder on scene objects that incorporate behavior and provide a complete generative AI solution (currently only topology generation is evaluated).
To achieve this, each entity node within the
was labeled with its corresponding category, e.g.,
"Scalpel". During the training process, the Encoder module, which
encompasses a GNN with Graph Convolutional
layers, encodes the N nodes of the graph using their inherent
F features and their associated category embeddings. For each of the
nodes a vector E is produced, by passing the labels through the embeddings,
resulting in a NxE matrix. The resulting encodings/latent space representation
for each node, 𝒵, are concatenated with their respective category embeddings, by
concatenating the input graph node matrix, of size NxF, with the embeddings,
resulting in a Nx(F+E) matrix. This concatenated representation, denoted
as 𝒵, is subsequently fed into the Decoder module, which consists of
two Multilayer Perceptrons (MLPs): one for decoding
the node features from 𝒵̂ and one for decoding the
adjacency matrix from 𝒵̂ (see Figure <ref>).
Our training procedure incorporates several loss
functions. Specifically, we employ mean squared error (MSE)
loss for node feature reconstruction, binary cross-entropy
(BCE) loss for adjacency matrix reconstruction, and
Kullback-Leibler (KL) divergence loss to encourage
diversity in scene generation.
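A minimal PyTorch sketch of how these three terms might be combined is given below; the equal weighting and the beta factor on the KL term are assumptions, as the loss weights are not specified.
```python
import torch
import torch.nn.functional as F

def cgvae_loss(node_feats_hat, node_feats, adj_logits, adj, mu, logvar, beta=1.0):
    """MSE on node features, BCE on the reconstructed adjacency, KL on the latent space."""
    mse = F.mse_loss(node_feats_hat, node_feats)
    bce = F.binary_cross_entropy_with_logits(adj_logits, adj)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + bce + beta * kl
```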
As the model is conditionally trained using these categories, a conditional sampling of the generated scenes is possible, based on specific object categories. This allows the generation of scenes that are greatly influenced by the categories of existing or newly introduced nodes.
For the experimentation results of the generative task, the loss in Fig. <ref> (Left) depicts the discrepancy between the generated and the target output. In this regard we notice
that all mean losses are initially relatively low,
with PGA and CGA significantly lower, meaning that
all models produce high-quality outputs from the start.
All losses are minimized rapidly, dropping consistently
below 1.0. Although all methods seem to converge very
early, CGA and PGA mean loss curves are always below
the others; which is indicative that the model is well-performing
and that it has learned the generative task.
§.§ Topology prediction
Finally, a topology prediction task was utilized to
further evaluate the differences between UniSG and
the GA-empowered .
In such tasks, it is common to seek accurate predictions
regarding the spatial relationships between objects,
including relationships such as "above", "below",
"right-of", as well as higher-level relationships like
"part-of" or "connected-to". Our approach was
specifically evaluated on a topology prediction task
involving the identification of the "on-top-of"
relationship between two objects. To address this
prediction task, we made modifications to our previous
model by transforming the Graph Variational AutoEncoder
into a simplified Graph AutoEncoder that focused
on adjacency matrix reconstruction for predicting
the desired topology link based on the graph structure.
It is worth noting that while our modified model proves
effective for certain topology prediction tasks, it may
not capture the complexity of relationships or high-level
semantics within the UniSG.
The experimentation results of the topology prediction
task, depicted in Fig <ref> (Right), show
that the mean loss (over 10 runs) with CGA and PGA is initially low and is
minimized rapidly compared to other methods. Although
all methods seem to converge early, after 15 epochs, CGA
and PGA mean loss curves are always below the others,
indicating a well-performing model. The loss curves do not
show any signs of overfitting which is a direct consequence
of the performed data augmentation, which increases the diversity and quantity of the training samples.
For each of the 10 runs, a single random scene with 10000 cubes was generated, and link prediction was performed on that scene.
§ CONCLUSIONS AND FUTURE WORK
In this work, we introduced , an integrated graph structure designed to be seamlessly compatible with Graph Neural Networks (GNNs) while incorporating behavior data. A key contribution of is its ability to overcome the challenges associated with transforming a 3D scenegraph (3D-SG) when conducting generative tasks. By leveraging GA forms, effectively captures and stores the topological relations between objects within the graph, while enhancing the performance and capability of GNNs when handling predictive and generative tasks.
This advancement paves the way for more efficient and intuitive approaches in generating complex 3D scenes with embedded behavior.
As a future endeavor, our plan is to train the GNN architecture
of using an extensive corpus of 3D scenes encompassing
both content and behavior. This training dataset will consist of
various types of scenes, including models and even segments of
educational curricula. Through this training process, we aim to
evaluate the performance of UniSG on intricate generative AI
tasks, with the ultimate objective of enabling the generation of
behavior-embedded 3D scenes in a streamlined manner, towards a no-code authoring pipeline.
The project was partially funded by
the National Recovery and Resilience Plan "Greece 2.0" - NextGenerationEU, under grant agreement No TAΣΦP-06378 (REVIRES-Med), and Innovation project Swiss Accelerator under grant agreement 2155012933 (OMEN-E), supported by Innosuisse.
ACM-Reference-Format
|
http://arxiv.org/abs/2306.01665v1
|
20230602164042
|
SourceP: Smart Ponzi Schemes Detection on Ethereum Using Pre-training Model with Data Flow
|
[
"Pengcheng Lu",
"Liang Cai",
"Keting Yin"
] |
cs.SE
|
[
"cs.SE",
"cs.AI"
] |
SourceP: Smart Ponzi Schemes Detection on Ethereum Using Pre-training Model with Data Flow
Lu Pengcheng, Cai Liang, Yin Keting
Zhejiang University
Hangzhou, China
[email protected], [email protected], [email protected]
July 31, 2023
======================================================================================================================================
As blockchain technology becomes more and more popular, a typical financial scam, the Ponzi scheme, has also emerged in the blockchain platform Ethereum. This Ponzi scheme deployed through smart contracts, also known as the smart Ponzi scheme, has caused a lot of economic losses and negative impacts. Existing methods for detecting smart Ponzi schemes on Ethereum mainly rely on bytecode features, opcode features, account features, and transaction behavior features of smart contracts, and such methods lack interpretability and sustainability. In this paper, we propose SourceP, a method to detect smart Ponzi schemes on the Ethereum platform using pre-training models and data flow, which only requires using the source code of smart contracts as features to explore the possibility of detecting smart Ponzi schemes from another direction. SourceP reduces the difficulty of data acquisition and feature extraction of existing detection methods while increasing the interpretability of the model. Specifically, we first convert the source code of a smart contract into a data flow graph and then introduce a pre-training model based on learning code representations to build a classification model to identify Ponzi schemes in smart contracts. The experimental results show that SourceP achieves 87.2% recall and 90.7% F-score for detecting smart Ponzi schemes within Ethereum's smart contract dataset, outperforming state-of-the-art methods in terms of performance and sustainability. We also demonstrate through additional experiments that pre-training models and data flow play an important contribution to SourceP, as well as proving that SourceP has a good generalization ability.
blockchain, Ethereum, smart contract, Ponzi scheme, pre-training model
§ INTRODUCTION
Blockchain technology is rapidly evolving as an open ledger of recorded transactions maintained in a distributed network of mutually untrusted peers (i.e., peer-to-peer network), where each peer verifies transactions through a consensus protocol <cit.>. Blockchain has now developed applications in areas such as the Internet of Things <cit.>, voting systems <cit.>, authentication <cit.>, ledgers <cit.>, disaster relief <cit.>, healthcare <cit.>, and edge computing <cit.>, which have attracted significant interest from industry and academia <cit.>. Also, blockchain has become a common infrastructure for the emerging metaverse and Web3, and DeFi, DAO, and Non-Fungible Tokens developed from blockchain technology are becoming more and more popular <cit.>.
Ethereum <cit.> is currently the most popular public blockchain platform with smart contract functionality <cit.>. Ethereum handles peer-to-peer contracts through the cryptocurrency Ether(ETH) and can store data, run smart contracts, and share information globally. First proposed by Nick Szabo in the mid-1990s <cit.>, smart contracts contain contractual terms that will be automatically executed when predefined conditions are met, stored, replicated, and updated in a distributed blockchain. The combination of blockchain technology and smart contracts makes the dream of a “peer-to-peer marketplace" come true, which means that there will be no third-party intervention in transactions between buyers and suppliers via smart contracts in a blockchain marketplace <cit.>.
Smart contracts on the Ethereum platform are Turing-complete, and through Turing-complete smart contracts, users can not only trade in cryptocurrency but also perform arbitrary actions on the blockchain <cit.>. Because of this, decentralized applications (DApps) <cit.> consisting of smart contracts are currently growing rapidly, such as CryptoKitties <cit.> and IDEX <cit.>. Yet at the same time, smart contracts have given a new opportunity for the spread of Ponzi schemes <cit.>.
Born 150 years ago, Ponzi schemes are a classic financial fraud that lures new users in with the promise of high profits, with the core mechanism of compensating the investors and creators who joined before with the investments of new investors, creating no value themselves <cit.>. Once no new investors join, the Ponzi scheme quickly collapses, new investors who are not compensated lose their money, and the scam makers and early participants receive illegal income. Smart contracts allow the creators of Ponzi schemes to remain anonymous, while their immutability makes them extremely difficult to change after being deployed to the blockchain, the feature that criminals exploit to deploy scams on smart contract platforms like Ethereum <cit.>. Such Ponzi schemes deployed on smart contracts are called smart Ponzi schemes. Ponzitracker <cit.> found close to $3 billion amount of Ponzi schemes in 2022, related to blockchain-based cryptocurrencies. The recent smart Ponzi scheme event on the Forsage website involved more than $300 million <cit.>. The danger of financial fraud is so serious that reducing smart Ponzi schemes is imminent. Meanwhile, smart contracts are the building blocks of DApps, and if individual smart contracts in DApps are Ponzi schemes, then the DApp may also be dangerous. Therefore, detecting whether smart contracts deployed in Ethereum are smart Ponzi schemes and flagging them are important to prevent financial fraud and maintain the healthy development of DApps and blockchain platforms.
To the best of our knowledge, the existing methods for detecting smart Ponzi schemes fall into three main categories: the first category uses bytecode or opcodes from smart contracts as features to train classifiers or perform static analysis <cit.>; the second category uses the transaction behavior features of smart contracts <cit.>; the third category uses both opcode features and account features to train the model <cit.>. However, all three methods face some limitations. Firstly, bytecode and opcode features lack interpretability, and adding or removing some opcodes in smart contracts can be easily done to circumvent detection. Transaction behavior features and account features require a large amount of data collection and suffer from identification lag, and it is also difficult to accurately locate fraudsters on the anonymous Ethereum platform. Meanwhile, in the context of rapid smart contract updates, the sustainability and performance of currently available detection methods for new types of smart Ponzi schemes have declined considerably.
Our Method.
To address these challenges, we propose a smart Ponzi scheme detection approach based on pre-training models and data flow called SourceP. Pre-training models in current natural language processing (NLP) techniques have facilitated many tasks <cit.>, thus, it is reasonable to explore the great potential of pre-training models in smart Ponzi scheme detection. Since smart contracts on Ethereum are typically written in Solidity <cit.>, SourceP will detect Ponzi schemes on smart contracts written in the Solidity language.
SourceP embodies a very key innovation point, namely the use of only the source code of smart contracts as a feature. Figure <ref> shows the significant differences between existing traditional methods and SoucreP in detecting smart Ponzi schemes through flowcharts. Our method no longer requires access to the bytecode, transaction, and account information of smart contracts, reducing the difficulty of data acquisition and increasing the interpretability of the model, while avoiding the various problems associated with too many features. The specific implementation steps of SourceP can be seen in Figure <ref> and Figure <ref>. First, the smart contract source code is converted into a data flow, then the source code information is fed into the pre-training model together with the data flow representing variable dependencies, and finally, the identification results of smart contract Ponzi scheme are output and evaluated. Ultimately, we find that SourceP outperforms previous methods in smart Ponzi scheme detection.
The main contributions are summarized as follows:
* We propose a method, SourceP, to automatically detect Ponzi schemes in smart contracts. To the best of our knowledge, SourceP is the first method to detect smart Ponzi schemes using only the source code of smart contracts as features. It introduces a pre-training model for learning code representations and uses the code semantic information provided by the data flow converted from the source code for detection.
* We conducted extensive experiments to demonstrate that SourceP outperforms the state-of-the-art in detecting smart Ponzi schemes in terms of performance and sustainability under the same conditions, and has a good generalization ability.
* We have released the dataset and source code of this study for other researchers to replicate our methods and evaluation, to facilitate future research in this direction.
https://github.com/Lpover/SourceP
§ BACKGROUND
§.§ Ethereum and Smart contract
Ethereum is the first open-source public blockchain platform that supports advanced and custom smart contracts with the help of a Turing-complete virtual machine called Ethereum Virtual Machine (EVM) <cit.>. EVM is the environment in which smart contracts run, and each node in the Ethereum network runs an implementation of EVM and executes the same instructions. Smart contracts are written in languages such as Solidity and Serpent <cit.>, and then the smart contract code is compiled into EVM bytecode and deployed on the blockchain for execution. Once the smart contract is deployed on the blockchain, the corresponding bytecode and creation transactions are permanently stored on the blockchain. Ethereum is now becoming the most popular platform for smart contract development and can be used to design various types of decentralized applications (DApps) that can be used for digital rights management, crowdfunding, and gambling <cit.>.
§.§ Ponzi Schemes on smart contract
The “Ponzi scheme" is a fraudulent investment scam that promises high rates of return with little risk to the investor. Ponzi schemes create returns for early investors by acquiring new investors. It is similar to a pyramid scheme in that it is based on using new investors' money to pay off early investors. Both Ponzi schemes and pyramid schemes eventually bottom out when the influx of new investors dries up and there is not enough money to turn around. Then such scams simply fall apart and all those who have not yet recouped their investments never get them back.
Smart contracts provide a perfect breeding ground for the modern fraudster. Whereas traditional financial fraudsters need to worry about the law, third-party institutions, and their public image, smart contracts don't have the same problems.
Ponzi schemes on the blockchain have proliferated. Vasek et al. <cit.> analyzed the supply and demand of bitcoin-based Ponzi schemes and identified 1780 different Ponzi schemes. Chainalysis <cit.> investigated cryptocurrency crimes from 2017 to 2019 and found that 92% of cryptocurrency scams were Ponzi schemes. Ponzi schemes on Ethereum are packaged as investment projects or gambling games that promise huge returns to those who invest. Some large-scale Ponzi schemes even build sophisticated websites and run aggressive marketing campaigns to attract investors. It is difficult for investors with little knowledge of blockchain to distinguish the true nature of smart contracts that are disguised as high-yield investment schemes <cit.>.
Bartoletti et al. <cit.> found that from August 2015 to May 2017, 191 smart Ponzi schemes active on Ethereum had collected nearly $500,000 from more than 2,000 different users. They specifically analyzed 184 real smart Ponzi schemes on Ethereum and classified them into four types: chain-shaped, tree-shaped, waterfall, and handover. Most of them are chain-shaped, accounting for 82% of the total, while the other named types account for only 2% of the total and the rest are unclassified. Next, we will elaborate on these four types of smart Ponzi schemes.
Chain-shaped.
The Chain-shaped scheme is like a chain, where each investor is followed by only one new investor, and the old investor makes money by inviting new investors and repays their initial investment if there is enough money in the scam. The new investor then tends to invest more money to join because he is promised a return of several times the cost.
Tree-shaped.
Unlike Chain-shaped, Tree-shaped does not link new investors to old investors in a 1:1 ratio. Rather, it is structured like a tree where one investor can invite multiple investors to join, i.e. a parent node can have multiple children. The main attraction of this scheme for participants is that they can use their referral codes to invite more people to join, and each investor will be rewarded more for the more new investors he brings in. This is the same profit model as multi-level marketing, and if enough benefits are brought in, the Ponzi scheme could grow exponentially in size.
Waterfall.
The Waterfall is almost structurally identical to the Chain-shaped. Each node connects up to two other nodes: the node that joined before and the node that joined after. However, the difference lies in the distribution logic of the funds invested by each node, where the investors' benefits depend on their position on the chain, their initial investment, and the value of the investment made by the invited new investors. The benefit distribution scheme of this scam is relatively complex, but the earlier the investor joins, the greater the benefit they receive.
Handover.
Unlike Chain-shaped which has a fixed minimum entry fee, Handover also has an initial entry fee but it increases each time a new investor joins. This ensures that when a new member joins, the previous investor gets paid, but also means that the program becomes increasingly risky as more users invest due to the inflated entry fee.
§.§ Smart Ponzi Scheme Case
Figure <ref> shows an example of a smart Ponzi scheme: a source code fragment of a chain-shaped smart Ponzi scheme written by Roan <cit.>. Each time a new investor participates in this smart Ponzi scheme, the join() function is called, which adds the new user and their investment amount to the list of investors and then transfers 10% of the investment amount to the owner of the smart contract. Then, in first-come-first-served order, if the total amount in the pool is already more than twice the amount invested by the first investor who has not yet been repaid, he is compensated with twice his investment in ETH.
Roan said of the Ponzi scheme, “That’s a complete lie, but enough to sucker someone in." <cit.>
§.§ Data Flow
The data flow is a graph <cit.>, also called a data flow graph (DFG), that represents the dependencies between variables in code, where the nodes represent the variables and the edges represent the sources of each variable's value. Data flow graphs are very useful in code analysis tasks <cit.>. Unlike an abstract syntax tree (AST), the data flow has the same structure under different abstract syntaxes of the same source code, and such a structure provides critical code semantic information for code understanding. Moreover, the data flow has a simpler structure compared to the AST, which is more efficient when used in models. So SourceP will use the data flow graph as the input to the model. How to extract the data flow from the source code will be described in Section <ref>.
§.§ Pre-training model
Pre-training models can benefit a variety of downstream tasks by storing knowledge into huge parameters and fine-tuning them on specific tasks, the rich knowledge implicit in huge parameters has been extensively demonstrated through experimental validation and empirical analysis <cit.>. Pre-training models such as BERT <cit.>, ELMo <cit.>, GPT <cit.>, and XLNet <cit.> have been successful on various tasks. At the same time some pre-training models for learning code representations have also emerged, such as CodeBERT <cit.>, CuBERT <cit.>, GPT-C <cit.>, and Code-GPT <cit.>. SourceP uses GraphCodeBERT <cit.>, a pre-training model for learning code representations trained on the CodeSearchNet <cit.> dataset, as the main part of the model. The specific method of the pre-training model used by SourceP will be described in Section <ref>.
§ OUR METHOD
Method overview.
Our method is divided into two main phases: 1) the input normalization phase, which converts the source code of smart contracts into AST and DFG; 2) the smart Ponzi scheme detection phase, which feeds the source code and DFG into a pre-training model and outputs the final detection results. In the following, the first two subsections describe the details of each phase in detail. The next subsection deals with the functions used to incorporate the DFG into the Transformer. The last subsection is a detailed description of the three pre-training tasks of the pre-training model.
§.§ Data Flow Graph Generation
Source code to AST.
First, we have the source code SC={sc_1, sc_2, …, sc_n}, then we convert the source code into abstract syntax trees (ASTs). Here we need to use a tool called tree-sitter <cit.>, which can build a source code file into an AST. However, tree-sitter does not provide official Solidity language support, so we need to use tree-sitter-solidity <cit.> to convert the Solidity language source code to an AST. This tool contains a grammar for tree-sitter whose major inspiration and some structures have been taken from tree-sitter-javascript <cit.>. The AST includes the syntax information of the code, and the leaf nodes in the tree are used to identify the sequence of variables, denoted as Var={v_1,v_2,...,v_n}.
AST to DFG.
The data flow is a graph, so we can think of each variable in the AST as a node. The edge connecting two nodes is denoted as ε=⟨ v_i, v_j⟩, which means that the value of the j-th variable comes from the i-th variable. The set of all directed edges in the graph is denoted as Edge={ε_1, ε_2, …, ε_n}. So the final data flow graph is represented as 𝒢(SC)=(Var, Edge), this is the DFG we have constructed to represent the dependencies between variables of the source code. Figure <ref> shows the process of converting the smart Ponzi scheme source code to AST and DFG.
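As an illustration, a simplified Python sketch of this pipeline is given below; the grammar build path is hypothetical, the older py-tree-sitter API (a Language built from a shared library and Parser.set_language) is assumed, and the edge rule shown (linking a variable to its most recent previous occurrence) is a strong simplification of the real assignment-based extraction.
```python
from tree_sitter import Language, Parser

# Assumes the tree-sitter-solidity grammar was built into a shared library beforehand.
SOLIDITY = Language("build/languages.so", "solidity")
parser = Parser()
parser.set_language(SOLIDITY)

def source_to_dfg(source_code):
    """Return (Var, Edge): leaf identifier nodes become variables; an edge (i, j) is
    added when variable j re-uses a name last seen at variable i."""
    tree = parser.parse(bytes(source_code, "utf8"))
    variables, edges, last_seen = [], [], {}
    stack = [tree.root_node]
    while stack:
        node = stack.pop()
        if node.child_count == 0 and node.type == "identifier":
            name = source_code[node.start_byte:node.end_byte]
            idx = len(variables)
            variables.append(name)
            if name in last_seen:
                edges.append((last_seen[name], idx))  # value of v_idx comes from v_last_seen
            last_seen[name] = idx
        stack.extend(reversed(node.children))
    return variables, edges
```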
§.§ Model Structure
In this section, we will describe the model structure of SourceP in detail. Our method primarily follows GraphCodeBert <cit.>, so the model architecture follows BERT<cit.> and the multi-layer bidirectional Transformer<cit.> is the backbone of the model. Figure <ref> shows the structure of the whole model.
The input to the model is the data flow graph converted from the source code; since the detection task is different, the paired comments used as input in GraphCodeBERT do not need to be considered. We modified the input following Peculiar's <cit.> method. The difference is that Peculiar uses a crucial data flow graph (CDFG) that retains only the data flow of key nodes as input to the model, but we still choose to use the DFG as input to the model. This is because Peculiar is designed to detect reentrancy vulnerabilities in smart contracts and only cares about the dependencies of functions related to reentrancy vulnerabilities, while the CDFG can remove redundant data flow information. But smart Ponzi schemes are more complex and may lurk in any function or variable, so it is necessary to use the DFG as the input to the model.
So the input to the model is a sequence X={[CLS], SC,[SEP], Var}. Where [CLS] is a special classification token and [SEP] is a special token used to split the two different data types. The remaining two segments in X, SC={sc_1, sc_2,..., sc_n} is the collection of source code and Var={v_1,v_2,...,v_n} is the set of variables of data flow graph 𝒢(SC)=(Var,Edge).
Then the sequence X will be converted to the input vector W^0. For each token in X, the corresponding token and position embedding are summed to construct its input vector. We use a special position embedding for all variables in X as a way to indicate that these variables are nodes in the data flow. The model then uses the input vector W^0 to contextualize the representation through N transformer layers W^n=transformer_n(W^n-1),n∈[1,N]. In our model, the value of N is 12. The construction of each transformer layer is the same, and U^n is obtained by first applying a multi-headed self-attention operation<cit.> and then applying a Layer Normalization operation. The feed-forward layer and a Layer Normalization operation are then used on U^n. This way we get the output W^n of the n-th layer from the input W^n-1. Here is the calculation for each transformer layer.
U^n=LayN(MulA(W^n-1)+W^n-1)
W^n=LayN(FeeF(U^n)+U^n)
Where LayN means a Layer Normalization operation, MulA means a multi-headed self-attention mechanism and FeeF is a two layers feed-forward network. In the n-th transformer layer, the multi-headed self-attention operation computes the Û^n.
Q_i=W^n-1P_i^Q, K_i=W^n-1P_i^K, V_i=W^n-1P_i^V
head_i=Softmax(Q_iK_i^T/√(d_k)+M)V_i
Û^n=[head_1;...;head_m]P_n^O
Where the output W^n-1∈ℝ^|X|× d_h of the previous layer is linearly projected onto a triplet of queries, keys, and values using the parameters P_i^Q,P_i^K,P_i^V∈ℝ^d_h× d_k and d_k is the dimension of a head. M∈ℝ^|I|×|I| is a mask matrix, M_ij is 0 if the i-th token is allowed to participate in the j-th token, otherwise it is -∞. And P_n^O∈ℝ^d_h× d_h is the model parameters.
The model finally outputs the predicted label ŷ through a linear classifier and the Softmax function.
ŷ=Softmax(Û^n)
§.§ Graph-guided masked attention
In order to incorporate the structure of the data flow graph into the Transformer, GraphCodeBERT <cit.> proposed this function. The masked attention function prevents the query q_j from attending to the key k_i by changing the attention score q_j^T k_i to -∞, so that the attention weight becomes 0 after the softmax function. To represent dependencies between variables, a node-query q_v_i is allowed to attend to a node-key k_v_j if there is a direct edge from node v_j to node v_i, where ⟨ v_j,v_i⟩∈ Edge, or if i=j; otherwise, attention is masked by adding -∞ to the attention score. To represent the relationship between source code tokens and data flow nodes, we first define a set Edge^' where ⟨ v_i,sc_j⟩/⟨ sc_j,v_i⟩∈ Edge^' if the variable v_i is identified from the source code token sc_j. Then, we allow node q_v_i and code k_sc_j to attend to each other if and only if ⟨ v_i,sc_j⟩/⟨ sc_j,v_i⟩∈ Edge^'. We use the following graph-guided masked attention matrix as the mask matrix M in (<ref>).
M_ij = 0    if q_i∈{[CLS],[SEP]}, or q_i,k_j∈ P∪ SC, or ⟨ q_i,k_j⟩∈ Edge∪ Edge'
M_ij = -∞   otherwise
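For illustration, one possible way to build M from the token types and the edge sets is sketched below; the token-type encoding, the treatment of the unrestricted code-to-code attention, and the function signature are our own assumptions rather than SourceP's released implementation.

```python
import torch

NEG_INF = float("-inf")

def build_graph_guided_mask(token_types, edges, edges_prime):
    # token_types[i] in {"special", "code", "var"} for [CLS]/[SEP], source-code tokens, DFG nodes.
    # edges:       set of (j, i) pairs, a data-flow edge from node v_j to node v_i      (Edge).
    # edges_prime: set of pairs linking a DFG node and the code token it was identified
    #              from, stored in both directions                                      (Edge').
    n = len(token_types)
    mask = torch.full((n, n), NEG_INF)
    for i in range(n):          # i indexes the query q_i
        for j in range(n):      # j indexes the key   k_j
            if (token_types[i] == "special"
                    or (token_types[i] == "code" and token_types[j] == "code")
                    or i == j
                    or (j, i) in edges
                    or (i, j) in edges_prime or (j, i) in edges_prime):
                mask[i, j] = 0.0
    return mask
```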
§.§ Pre-Training Tasks
Masked Language Modeling.
Following Devlin et al. <cit.>, we apply a Masked Language Modeling (MLM) pre-training task. In particular, we randomly sample 15% of the tokens from the source code and its paired comments; of these, we replace 80% with a [MASK] token, replace 10% with random tokens, and leave the remaining 10% unchanged. The goal of MLM is to predict the original tokens at these sampled positions, which has proven effective in previous work <cit.>. Notably, if the source code context is not sufficient to infer a masked code token, the model can make use of the comment context, encouraging it to unify the natural language and programming language representations.
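As a rough illustration of the 15% / 80-10-10 corruption scheme (not the exact GraphCodeBERT implementation), a token sequence could be masked as follows, where mask_id, vocab_size, and special_ids are assumed to come from the tokenizer:

```python
import random

def mlm_corrupt(token_ids, mask_id, vocab_size, special_ids, p_select=0.15):
    # Returns (corrupted ids, labels); labels are -100 at positions not selected for prediction.
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for pos, tok in enumerate(token_ids):
        if tok in special_ids or random.random() >= p_select:
            continue
        labels[pos] = tok                       # predict the original token here
        r = random.random()
        if r < 0.8:                             # 80%: replace with [MASK]
            inputs[pos] = mask_id
        elif r < 0.9:                           # 10%: replace with a random token
            inputs[pos] = random.randrange(vocab_size)
        # remaining 10%: leave the token unchanged
    return inputs, labels
```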
Data Flow Edges Prediction.
The purpose of this task is to learn representations from the data flow. The motivation is to encourage the model to learn structure-aware representations that encode the “where the value comes from" relationships and thus better understand the code. In particular, we randomly sample 20% of the nodes Var_s in the data flow, mask the direct edges connecting these sampled nodes by adding -∞ to the masking matrix, and then predict the masked edges Edge_mask. Formally, the pre-training objective of the task is calculated as (<ref>), where Edge_sc=Var_s× Var∪ Var× Var_s is the set of candidate edges for prediction, and δ(e_ij∈ Edge_mask) is 1 if ⟨ v_i,v_j⟩∈ Edge_mask and 0 otherwise. The probability p_e_ij that an edge exists from the i-th node to the j-th node is computed by applying the sigmoid function to the dot product of the two nodes' representations in the model. To balance the ratio of positive and negative examples, we sample the same number of negative and positive edges for Edge_mask.
loss_EdgePred=-∑_e_ij∈ Edge_sc[δ(e_ij∈ Edge_mask)log p_e_ij
+(1-δ(e_ij∈ Edge_mask))log(1-p_e_ij)]
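A sketch of how this objective could be computed from the encoder's node representations is shown below; the tensor names, the use of summed binary cross-entropy, and the assumption that balanced sampling happens outside the function are ours, for illustration only.

```python
import torch
import torch.nn.functional as F

def edge_prediction_loss(node_repr, candidate_pairs, masked_edges):
    # node_repr: (num_nodes, d) hidden states of the DFG nodes taken from the encoder.
    # candidate_pairs: list of (i, j) pairs in Edge_sc; masked_edges: set of pairs in Edge_mask.
    idx_i = torch.tensor([i for i, _ in candidate_pairs])
    idx_j = torch.tensor([j for _, j in candidate_pairs])
    logits = (node_repr[idx_i] * node_repr[idx_j]).sum(dim=-1)      # dot product per candidate edge
    targets = torch.tensor([float(p in masked_edges) for p in candidate_pairs])
    # sigmoid(dot product) + summed binary cross-entropy reproduces the loss above
    return F.binary_cross_entropy_with_logits(logits, targets, reduction="sum")
```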
Node Alignment.
The purpose of this task is to align the representation between source code and data flow, which is similar to data flow edge prediction. Instead of predicting the edges between nodes, we predict the edges between code tokens and nodes. The motivation is to encourage the model to align variables and source code to the data flow.
§ EXPERIMENTS
To evaluate our method, we designed experiments to answer the following research questions (RQs):
* RQ1: How well does SourceP perform in detecting smart Ponzi schemes by relying only on source code features, and how do its precision, recall, and F-score compare to current state-of-the-art methods?
* RQ2: How sustainable is SourceP in detecting smart Ponzi schemes?
* RQ3: How do SourceP's pre-training tasks and data flow affect detection results?
* RQ4: How well does SourceP generalize when detecting smart Ponzi schemes?
§.§ Experiments Setting
Datasets.
We use the Ponzi Contract Dataset provided by the XBlock platform <cit.>, which crawls 6,498 smart contracts from Etherscan <cit.>; their source code was manually read and classified with reference to the methods used in previous studies <cit.>, with 318 smart contracts manually labeled as Ponzi schemes and the rest as non-Ponzi contracts. We retain the source code in the dataset, along with the corresponding index idx and the label, where a label of 1 denotes a smart Ponzi scheme and 0 denotes a non-Ponzi contract.
Evaluation metrics.
For the experiments in this paper, we use the standard precision, recall, and F-score to evaluate the performance of the model.
Parameter settings.
In the fine-tuning step, we set the code length to 256, the data flow length to 64, the training batch size to 1, the evaluation batch size to 32, and the learning rate to 2e-5, and we use the Adam optimizer to update the model parameters. Depending on the experiment, the number of epochs is set to 3, 5, or 10, and the classification threshold to 0.5 or 0.15.
Implementation details.
All the experiments were run on a computer with two Intel(R) Xeon(R) Silver 4314 CPUs at 2.4GHz, one GPU at NVIDIA A40 or two GPUs at NVIDIA A30, and 256GB of Memory.
§.§ Results Summary
RQ1: Performance comparison with state-of-the-art methods.
We compare the detection performance of SourceP with existing state-of-the-art methods on the same dataset and with the same split, following the comparison protocol of Zheng et al. <cit.>. Specifically, all contracts are ordered by the block height at the time of smart contract creation; the training set consists of the 1st to the 250th smart Ponzi schemes and the non-Ponzi smart contracts in between, while the test set consists of the 251st to the last (341st) smart Ponzi schemes and the remaining non-Ponzi smart contracts. The training set thus contains 5,990 smart contracts in total, and the test set 508. Compared with a random split, this division better reflects the model's ability to detect emerging smart Ponzi schemes when only data on earlier ones is available. The compared models include Ridge-NC <cit.>, a ridge classifier trained with N-gram count features; SVM-NC <cit.>, an SVM trained with N-gram count features; XGBoost-TF-IDF <cit.>, an XGBoost model trained with TF-IDF features; MulCas <cit.>, a multi-view cascade model; and SadPonzi <cit.>, a semantic-aware system for detecting smart Ponzi schemes. The first three approaches use features extracted from the opcode of a smart contract, MulCas additionally incorporates developer features, and SadPonzi detects Ponzi schemes based on the bytecode of a smart contract. The comparison results of the different methods are listed in Table <ref>. As the results show, SourceP achieves the best performance on all three metrics. In particular, SourceP gains a 19.8% improvement in recall and an 11.8% improvement in F-score over the state-of-the-art method while also improving precision. Since the ratio of positive to negative samples is about 1:20, it is expected that the model tends to classify minority samples as the majority class, resulting in a precision score that is relatively higher than the recall score.
RQ2: Sustainability of the model compared to other state-of-the-art methods.
Although SourceP achieves excellent performance in detecting the latest smart Ponzi schemes, a problem known as model aging has attracted widespread attention <cit.>; in particular, there are large differences between early smart Ponzi schemes and the latest ones <cit.>. To verify the sustainability of SourceP, in this experiment we divide the dataset into six parts (P0 to P5) according to the block height at which the Ponzi schemes were created, following the method of Zheng et al. <cit.>, with one part for every 50 smart Ponzi schemes: the 50 smart Ponzi schemes with the lowest block heights and the non-Ponzi contracts among them form P0, the smart Ponzi schemes ranked 51 to 100 and the non-Ponzi contracts among them form P1, and so on for P2, P3, P4, and P5, where P5 contains the 251st to the last (314th) smart Ponzi schemes and the non-Ponzi contracts among them. The P5 setting corresponds to the experiment in RQ1. The detection task is to predict the next part using the previous parts, e.g., P0 and P1 to predict P2, and P0, P1, and P2 to predict P3. Because deployed smart contracts cannot be modified, a lower block height at creation means an earlier creation time; this setup therefore uses earlier smart Ponzi schemes to predict future ones, which allows us to verify the sustainability of SourceP. Our comparison models include SadPonzi <cit.> and MulCas <cit.>, and the results are shown in Table <ref>. SourceP obtains the highest precision and F-score in every part of the experiment and achieves the highest recall on P3 and P5; the F-score even improves by 39% when detecting the P3 part. Since new smart Ponzi schemes are deployed on top of ERC-20 token trading contracts and their rewards are reflected in the increased value of the Ponzi tokens, SadPonzi and MulCas suffer performance degradation when detecting this new type of smart Ponzi scheme. This experiment demonstrates that SourceP achieves the best sustainability in detecting smart Ponzi schemes.
RQ3: Ablation experiments. We conducted ablation experiments to explore the contributions of the pre-training tasks and the data flow to smart Ponzi scheme detection, by removing the two pre-training tasks and the data flow respectively and then repeating the task of RQ1. The results are shown in Table <ref>, where -w/o is an abbreviation for "without". The results show that removing the two pre-training tasks or discarding the data flow degrades the performance of the model to different degrees, indicating that each component contributes to the improvement.
RQ4: Generalization ability of SourceP.
In order to verify the generalization ability of the model, we divide the dataset randomly in this experiment. To prevent overfitting, we also hold out a validation set separate from the training set, so the ratio of training, validation, and test sets is 7:1:2. Fan et al. <cit.> and Chen et al. <cit.> also performed smart Ponzi contract detection with randomly partitioned datasets; their algorithms use opcode features and account features of smart contracts, including SVM <cit.>, LSTM <cit.>, XGBOOST <cit.>, RF <cit.>, and AI-SPSD <cit.>. Since none of these algorithms is open source for detecting smart Ponzi schemes, we reproduced them as faithfully as possible based on the details they provide. We conducted 10 runs and report the average results of detecting smart Ponzi schemes compared with the other algorithms in Table <ref>. Because we use a larger dataset, the detection performance of several previous algorithms degrades. As the table shows, SourceP still achieves the best recall, precision, and F-score under the random split, demonstrating that SourceP has strong performance and generalization ability in detecting smart Ponzi schemes.
§ RELATED WORK
§.§ Ponzi schemes on the blockchain
The Ponzi scheme is a classic financial fraud <cit.>. With the development of the Internet, online “High-Yield Investment Programs" (HYIPs) became a typical form of Ponzi scheme <cit.>. As blockchain technology grew increasingly popular, unscrupulous individuals began to deploy such HYIPs on the blockchain, and more and more Ponzi schemes and Internet scams emerged there <cit.>. Chen et al. <cit.> first proposed machine-learning-based identification of Ponzi schemes in Ethereum smart contracts, extracting features from user accounts and the operation codes of smart contracts and then building a classification model to detect potential Ponzi schemes implemented as smart contracts. Fan et al. <cit.>, building on CatBoost <cit.>, propose the AI-SPSD model to detect newly deployed smart Ponzi schemes in a timely manner at the runtime opcode level. Chen et al. <cit.> propose SadPonzi, a prototype system that identifies smart Ponzi schemes from Ethereum smart contract bytecode. Zheng et al. <cit.> propose a multi-view cascade model (MulCas) to identify Ponzi schemes. Ponzi schemes on blockchain platforms are not limited to Ethereum; some works have also focused on detecting Ponzi schemes on Bitcoin <cit.>.
§.§ Smart Contract Analysis
Many smart contract analysis studies have addressed smart contract security vulnerabilities in Ethereum. For example, Oyente <cit.>, Osiris <cit.>, Mythril <cit.>, Maian <cit.>, and Manticore <cit.> use symbolic execution for vulnerability detection; Securify <cit.> and Zeus <cit.> use formal verification; Slither <cit.> and SmartCheck <cit.> use static analysis; and ContractFuzzer <cit.> and ReGuard <cit.> use fuzz testing. Thomas et al. <cit.> provide an empirical review of these automated analysis tools for smart contracts. Pinna et al. <cit.> conduct a comprehensive empirical study of smart contracts deployed on the Ethereum blockchain to provide an overview of their characteristics. In addition, there are empirical studies on the code smells of smart contracts <cit.>, on the gas usage of Ethereum smart contracts <cit.>, and on code cloning behavior <cit.>.
§.§ Pre-Training Models for Programming Languages
The emergence of pre-training models (PTMs) has brought NLP into a new era <cit.>. Models such as BERT <cit.> and GPT <cit.> have recently achieved great success and become milestones in the field of artificial intelligence <cit.>. Several works have also explored applying pre-training models to programming languages. RoBERTa <cit.> is pre-trained on a text corpus with a Masked Language Model (MLM) objective, and RoBERTa (code) is pre-trained on code only. CuBERT <cit.> is the first work to apply such pre-training to source code. CodeBERT <cit.> is pre-trained on code-text pairs with MLM and replaced token detection objectives and is a representative pre-training model for multilingual code representation. The major difference between GraphCodeBERT and CodeBERT is the inclusion of data flow information <cit.>. UniXcoder <cit.> uses a masked attention matrix with prefix adapters to control the behavior of the model and enhances the code representation with cross-modal content such as ASTs and code comments. PLBART <cit.> applies BART <cit.> to programming languages, combining the advantages of BERT's bidirectional encoder and GPT's unidirectional left-to-right decoder. CodeT5 <cit.> is a unified pre-training Transformer model that makes better use of code syntax information. Large-scale pre-training models have also evolved rapidly for code tasks, such as AlphaCode <cit.>, which uses an encoder-decoder architecture, Code-GPT <cit.>, which uses a 12-layer transformer decoder, and GPT-C <cit.>, which is designed for code completion.
§ CONCLUSION AND FUTURE WORK
In this paper, we propose a method called SourceP to detect smart Ponzi schemes on Ethereum. To the best of our knowledge, this is the first detection method that uses only the source code of smart Ponzi schemes as features, and the first to introduce pre-training models and data flow into smart Ponzi scheme detection. We experimentally demonstrate that SourceP achieves better performance and sustainability than existing state-of-the-art methods for detecting smart Ponzi schemes on Ethereum. We also design ablation experiments to examine the contributions of the pre-training models and the data flow in SourceP. Finally, we experimentally demonstrate that SourceP possesses good generalization capability. We explore the feasibility of using variable dependencies in source code to detect smart Ponzi schemes, avoiding some of the drawbacks of traditional detection methods. We reveal the potential of pre-training models for smart Ponzi scheme detection and release the dataset and source code we used to aid future research in this direction. We believe that detecting smart Ponzi schemes as soon as possible after they are deployed can effectively reduce financial losses from Ponzi schemes on Ethereum and maintain a healthy ecology of the blockchain community.
In our future work, the first step is to expand the dataset: labeled smart Ponzi schemes are still scarce, and a larger dataset would greatly improve the model. Moreover, as discussed in this paper, new types of smart Ponzi schemes are more difficult to detect, so more data on them is urgently needed. Given the excellent performance of pre-training models and data flow in smart Ponzi scheme detection, we will also explore applying the method to other blockchain security tasks to advance blockchain security technology.
IEEEtran
|
http://arxiv.org/abs/2306.08656v1
|
20230614175202
|
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
|
[
"Jiapeng Wu",
"Atiyeh Ashari Ghomi",
"David Glukhov",
"Jesse C. Cresswell",
"Franziska Boenisch",
"Nicolas Papernot"
] |
cs.LG
|
[
"cs.LG",
"cs.CR"
] |
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot
=================================================================================================================================================================================================
Machine learning models are susceptible to a variety of attacks that can erode trust in their deployment.
These threats include attacks against the privacy of training data and adversarial examples that jeopardize model accuracy.
Differential privacy and randomized smoothing are effective defenses that provide certifiable guarantees for each of these threats, however, it is not well understood how implementing either defense impacts the other. In this work, we argue that it is possible to achieve both privacy guarantees and certified robustness simultaneously. We provide a framework called DP-CERT for integrating certified robustness through randomized smoothing into differentially private model training. For instance, compared to differentially private stochastic gradient descent on CIFAR10, DP-CERT leads to a 12-fold increase in certified accuracy and a 10-fold increase in the average certified radius at the expense of a drop in accuracy of 1.2%. Through in-depth per-sample metric analysis, we show that the certified radius correlates with the local Lipschitz constant and smoothness of the loss surface. This provides a new way to diagnose when private models will fail to be robust.
§ INTRODUCTION
Machine learning (ML) models are becoming increasingly trusted in critical settings despite an incomplete understanding of their properties. This raises questions about the trustworthiness of those models, encompassing aspects such as privacy, robustness, and more. Society at large might expect all of these properties to hold simultaneously as ML's influence on everyday life expands, but each aspect is challenging enough that scientists and practitioners still mostly grapple with them individually. Relatively little research has been done on the intersectionality of trustworthy ML requirements, since each aspect seems to push us in orthogonal research directions.
We aim to reconcile two key objectives of trustworthy ML, namely privacy and robustness.
Privacy in the context of ML manifests as the requirement that a model does not leak information about the data it was trained on <cit.>, such as revealing whether or not certain data points were included in the training dataset <cit.> or what characteristics they exhibit <cit.>.
In our study, robustness refers to the requirement that a model's prediction should not change when its test inputs are perturbed, even in the worst case when perturbations are chosen adversarially <cit.>.
The current gold standard for providing privacy guarantees is differential privacy (DP) <cit.>.
In ML, DP produces mathematically rigorous privacy guarantees by limiting the impact of each individual training data point on the final model. This is achieved by clipping per-sample gradients, and adding a well-calibrated amount of noise to all model updates. Clipping serves to bound the sensitivity of the training algorithm, while the addition of noise ensures that training will be more likely to output similar models whether any of the individual data points are added to or removed from the training dataset.
However, clipping and adding noise can impede the convergence of models <cit.> and yield decision boundaries that are less smooth <cit.>, negatively impacting robustness <cit.>.
These findings call for integrating robustness measures into private training, yet this remains challenging because most methods to increase robustness use random or adversarial augmentations of training data points, which both conceptually and practically do not align well with DP training. Conceptually, augmenting an input increases the sensitivity of private training to it, and thereby provides additional avenues for information leakage. From a practical viewpoint, since gradients are computed on a per-example basis for DP, augmentations drastically increase the time and memory costs of training.
To bridge the gap between robust and private ML model training, we evaluate the certified robustness (CR) of private models and improve it by integrating state-of-the-art techniques <cit.> with DP training. CR provides probabilistic guarantees that perturbations of a certain magnitude will not change a model's prediction, regardless of what attack strategy (known or yet unknown) is used to modify the test inputs, and thereby provides future-proof robustness guarantees. A common approach for certifying robustness is randomized smoothing, where a classifier's outputs are averaged over a distribution surrounding the test point <cit.>.
While DP and CR are the most promising standards for providing future-proof privacy and robustness guarantees respectively, their intersection has seen little attention. Recent works <cit.> propose adding noise or adversarial examples while training to improve CR guarantees, but lack the flexibility to incorporate state-of-the-art methods for non-private training <cit.>, and usually rely on training additional network components.
We aim to provide CR guarantees within the standard DP training framework, overcoming several challenges in doing so.
Present Work. We study the possible pitfalls of combining DP and CR in a systematic manner. Through our analysis and ablation studies combining randomized smoothing techniques with DP training, we show that standard DP training of ML models is insufficient to provide strong CR results. We propose DP-CERT, an adaptable framework for integrating CR into standard DP training which effectively incorporates augmentations while managing the additional privacy risks.
Compared to private training without augmentations, DP-CERT achieves better robustness on MNIST, Fashion-MNIST, and CIFAR10, and even surpasses the state-of-the-art for robustness on the latter dataset under the same privacy guarantee.
Finally, we analyze CR on a per data point basis rather than averaged across test datasets. Using the gradient norm, Hessian spectral norm, and local Lipschitz constant, we find that the certifiable radius has a negative log-linear correlation with these quantities, and compare their distributions across training methods.
We conclude with concrete recommendations of best practices for the community to achieve CR and DP simultaneously.
§ PRELIMINARIES
Problem Setup.
Consider a classification task with Y classes from a dataset D = {(x_i,y_i)}_i=1^n, where x_i∈ℝ^d and y_i∈{1,..., Y} denote the i-th input and label. Let f_θ: ℝ^d →{1,..., Y} be a neural network with parameters θ, and F_θ denote the soft classifier which outputs the probability distribution, such that f_θ(x) = argmax_y ∈{1,..., Y} F_θ(x)_y, where F_θ(x)_y denotes the model probability of x being a member of class y.
Differential Privacy and DPSGD.
We rely on the rigorous framework of differential privacy (DP) <cit.> to obtain models with privacy guarantees.
DP ensures that a model's weights at the end of training will be similar in distribution whether or not a particular data point was included in the training set. More formally, let D and D' be two potential training datasets for a model f_θ that differ in only one data point. The training mechanism M guarantees (ε, δ)-DP if for all possible sets of outcomes S of the training process, it holds that Pr[M(D) ∈ S] ≤ e^εPr[M(D') ∈ S] + δ.
The parameter ε specifies the privacy level, with smaller ε yielding higher privacy, while δ quantifies the probability of the algorithm violating the ε privacy guarantee.
To obtain a differentially private variant of stochastic gradient descent (SGD), two modifications need to be made <cit.>. First, the individual gradients of each data point are clipped to a norm C to limit the sensitivity of the model update caused by each data point. Second, choosing a noise level ρ, noise from 𝒩(0, ρ^2C^2𝐈) is added to the aggregated gradients to prevent the changes to the model from revealing too much information about individual data points.
We detail the resulting algorithm DPSGD (<ref>) and a more thorough introduction to DP in Appendix <ref>.
Certified Robustness.Adversarial examples are a well-studied phenomenon in ML, in which an input to a model is perturbed in ways that do not alter its semantics yet cause the model to misclassify the perturbed input <cit.>.
Formally, for a given labeled datapoint (x,y) and classifier f, an (ℓ_p,ζ)-adversary aims to create an adversarial example x' such that ‖ x'-x‖_p < ζ and f(x') ≠ y.
Despite much research, the most common defense against adversarial examples remains adversarial training <cit.>. While adversarial training improves robustness to known algorithms for finding adversarial examples, it does not guarantee that a model will be robust to all adversarial examples (e.g., those crafted with other attack algorithms). This motivates the development of techniques that can provide certifiable guarantees of robustness to adversarial examples by providing a lower bound r on the distance between a correctly classified input and any adversarial example that may be misclassified <cit.>. This lower bound is also known as the certification radius.
Randomized Smoothing. One popular approach for establishing certified robustness (CR) guarantees is through probabilistic robustness verification which, with high probability, verifies that no adversarial examples exist within a certain radius of the original input <cit.>. The most commonly studied method for providing a probabilistic robustness verification is through smoothing a classifier <cit.> by averaging the class predictions of f using a smoothing distribution μ,
ĝ(x) = argmax_c ∈ [Y]∫_ζ∈supp(μ)𝕀[f(x+ζ), c]μ(ζ)dζ,
where 𝕀[a, b] = 1 if a=b and 0 otherwise <cit.>. As computing the integral in Equation (<ref>) is intractable, Monte Carlo sampling is used. We denote the approximation of ĝ given by Monte Carlo sampling as g. One can certify at different radii through the choice of smoothing distribution μ. Smoothed classifiers are evaluated in terms of their certified accuracy—the fraction of samples correctly classified when certifying robustness at a given radius r.
A tight ℓ_2 radius was obtained by <cit.> when using isotropic Gaussian noise μ = 𝒩(x,σ^2𝐈), where σ is a hyperparameter that controls a robustness/accuracy tradeoff. In particular, <cit.> proved that for any base classifier f, the Gaussian smoothed classifier g is robust around an input x with radius r = σ/2(Φ^-1(p_A) - Φ^-1(p_B)), where p_A and p_B denote the probabilities of c_A and c_B, the most and second-most probable classes returned by g(x), and Φ^-1 is the inverse of the standard Gaussian CDF. In fact, the exact probabilities p_A and p_B are not needed: one can instead use a lower bound on p_A and an upper bound on p_B, approximated by Monte Carlo sampling.
The output of the smoothed classifier g(x) is approximated by aggregating the predictions of a base classifier f(x + η) for η∼𝒩(0,σ^2𝐈). As a high dimensional standard Gaussian assigns almost no mass near its mean 0, ensuring that g(x) is accurate at large certification radii requires the base classifier f to be accurate on Gaussian perturbed data <cit.>.
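For concreteness, a minimal Monte Carlo version of the smoothed prediction and the certified radius of <cit.> might look as follows; it uses a single round of sampling with a Clopper-Pearson lower bound and the simplification p_B ≤ 1 - p_A, and omits the abstention logic and two-stage sampling of the full CERTIFY procedure. The sampling budget and confidence level are illustrative.

```python
import torch
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

@torch.no_grad()
def smoothed_predict_and_certify(f, x, sigma, num_classes, n_samples=1000, alpha=0.001):
    # f: base classifier returning logits; x: a single input tensor, e.g. shape (C, H, W).
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        counts[f(noisy.unsqueeze(0)).argmax(dim=1).item()] += 1
    c_hat = counts.argmax().item()
    # one-sided Clopper-Pearson lower confidence bound on p_A
    p_a_lower = proportion_confint(counts[c_hat].item(), n_samples,
                                   alpha=2 * alpha, method="beta")[0]
    if p_a_lower <= 0.5:
        return c_hat, 0.0                       # cannot certify any positive radius
    # with p_B bounded by 1 - p_A, the radius reduces to sigma * Phi^{-1}(p_A_lower)
    return c_hat, sigma * norm.ppf(p_a_lower)
```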
§ METHOD
Training machine learning models to be both differentially private and certifiably robust poses several challenges. The gradient clipping and noise addition used in DPSGD harms the convergence rate of training <cit.>, while restrictive privacy budgets may further require stopping training prior to convergence. Robustness on the other hand suffers for models that are not converged, as having large gradients at test points makes finding adversarial examples easier <cit.>.
Another challenge surfaces around the use of adversarial training <cit.> or augmentations of datapoints <cit.> along with DPSGD. As shown by <cit.>, data augmentations used for training can enhance a model's CR, however, it is crucial to ensure that augmented data points do not leak private information about the original. Previous works on the combination of DP and CR have proposed adding noise or adversarial examples during training, but deviate from the standard DPSGD template to address the privacy risks <cit.>. These approaches add trainable model components increasing the overall complexity <cit.>, or lack the flexibility to incorporate the latest advancements in adversarial training methods <cit.>. For a more detailed description of these related works and comparison to our method, please see Appendix <ref>.
We aim to make CR feasible within the standard training procedure of DPSGD, with state-of-the-art convergence and proper accounting for additional privacy risks by introducing the DP-CERT framework. In this section, we describe DP-CERT, how it effectively manages training with augmented samples while preserving privacy, and how it enables the integration of recent advancements in adversarial training and regularizers to enhance certifiable robustness <cit.>. Our training framework consists of three stages, summarized in Figure <ref>: augmentation multiplicity as the foundational stage, plus regularization and adversarial training as two optional stages. After the model is trained, randomized smoothing is used at inference time. We present four instantiations of the framework: DP-Gaussian, DP-SmoothAdv, DP-Stability, and DP-MACER, employing different techniques at each stage.
Augmentation Multiplicity. For each data point (x_i, y_i), we obtain K augmented data points (x_i^j, y_i), where j ∈{1, ..., K} and x_i^j is the j-th augmented data point. For notational convenience, we use x_i^0 to denote the original data point x_i. As shown by <cit.>, training with Gaussian data augmentation can enhance a model's certified robustness.
When not using adversarial training, we define x_i^j = x_i+η_j, η_j ∼𝒩(0,σ^2𝐈) for j ≠ 0.
An important component of our DP-CERT is how we handle training with augmented data points.
We adopt augmentation multiplicity, introduced in <cit.> and previously unused in studies of CR for DP, which involves averaging the gradients of multiple augmentations of the same training sample before clipping. Since all downstream impact to the model weights from sample x_i is contained in this averaged gradient, clipping it provides a finite sensitivity as required for the Sampled Gaussian Mechanism used in DPSGD <cit.>, and no additional privacy cost is incurred. The model updates can be expressed as follows
θ^t+1=θ^t-λ_t[1/B∑_i∈ B_tclip_C(1/(K+1)∑_j=0^K∇_θ^tL_CE(x_i^j, y_i))+ρ C/Bξ] .
θ^t denotes the model parameters at iteration t, λ_t is the learning rate, B is the batch size, C is the clipping bound, K is the number of augmentations, ρ is the noise multiplier, ξ∼𝒩(0, I), and ∇_ θ^tL_CE(x_i^j, y_i) is the gradient with respect to data point (x_i^j, y_i). Note that j starts from 0, which means we include the original samples along with the augmented ones in model training.
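A naive per-example sketch of this update is given below for clarity; an efficient implementation would instead use vectorized per-sample gradients (e.g., via Opacus or functorch), and the function signature and the epsilon added for numerical stability are our own choices.

```python
import torch
import torch.nn.functional as F

def dp_step_with_augmentations(model, opt, xs, ys, K, sigma_aug, clip_C, noise_mult):
    # xs: (B, ...) one lot of inputs, ys: (B,) labels.
    # For each sample, average the gradients over the original view plus K Gaussian
    # augmentations, clip the averaged gradient to norm C, sum over the lot,
    # add N(0, rho^2 C^2 I) noise, and take one SGD step.
    params = list(model.parameters())
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        views = [x] + [x + sigma_aug * torch.randn_like(x) for _ in range(K)]
        per_sample = [torch.zeros_like(p) for p in params]
        for v in views:
            loss = F.cross_entropy(model(v.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            per_sample = [a + g / len(views) for a, g in zip(per_sample, grads)]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in per_sample))
        scale = torch.clamp(clip_C / (norm + 1e-12), max=1.0)      # clip_C(.)
        summed = [s + g * scale for s, g in zip(summed, per_sample)]
    opt.zero_grad()
    for p, s in zip(params, summed):
        p.grad = (s + noise_mult * clip_C * torch.randn_like(p)) / len(xs)
    opt.step()
```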
Regularization. We propose adapting stability and consistency regularization to private training in order to minimize the distance between the output probability of the original and augmented examples, hereby improving the robustness to input noise. Stability training <cit.> adds a smoothed cross-entropy loss as regularization. Inspired by TRADES <cit.>, we instead use the Kullback–Leibler (KL) divergence with a hyperparameter γ controlling the strength of the regularization as:
L_stability(x_i, y_i) = ∑_j L_CE(x_i^j, y_i) + γ D_KL(F_θ(x_i) || F_θ(x_i^j)) .
Consistency regularization <cit.> is a similar technique that instead minimizes the KL divergence between F̂_θ(x_i) and F_θ(x_i), where F̂_θ(x) = 1/K∑_j F_θ(x_i^j) is the average output probability of all smoothed samples. The loss can be expressed as
L_consistency(x_i, y_i) = ∑_j L_CE(x_i^j, y_i) + γ D_KL(F̂_θ(x_i) || F_θ(x_i^j)).
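The stability variant above might be implemented per example roughly as follows, with the consistency version obtained by replacing the clean output distribution with the average F̂_θ(x) over augmentations; the loop-based form and hyperparameter handling are illustrative only.

```python
import torch
import torch.nn.functional as F

def stability_loss(model, x, y, K, sigma, gamma):
    # x: a single example of shape (1, ...), y: its label of shape (1,).
    log_p_clean = F.log_softmax(model(x), dim=-1)
    total = 0.0
    for _ in range(K):
        logits_aug = model(x + sigma * torch.randn_like(x))
        ce = F.cross_entropy(logits_aug, y)
        # KL( F(x) || F(x^j) ), with both arguments passed as log-probabilities
        kl = F.kl_div(F.log_softmax(logits_aug, dim=-1), log_p_clean,
                      reduction="batchmean", log_target=True)
        total = total + ce + gamma * kl
    return total
```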
Additionally, we propose integrating MACER <cit.>, an alternative training modification to directly optimize the certified accuracy at larger robustness radii without requiring the costly process of adversarial training.
MACER achieves this by decomposing the error of a smoothed classifier into a classification error term and a robustness error term, the latter reflecting whether or not the smoothed classifier was able to certify robustness for a given radius.
Adversarial Training.
To achieve better certified accuracy, we incorporate adversarial training by deploying existing attacks to create adversarial examples. Specifically, we integrate SmoothAdv <cit.> into private training, which, given original data (x,y), optimizes
max_‖ x'-x‖_2 ≤ϵ(-log𝔼_η∼𝒩(0, σ^2I)[F_θ(x' + η)_y]),
to find an x' ϵ-close to x that maximizes the cross entropy between g_θ(x') and label y. Using Monte Carlo sampling, Objective (<ref>) can be optimized by iteratively computing the approximate gradient
∇_x'(-log(1/K∑_j=1^KF_θ(x'+η_j)_y)).
where η_1, ...,η_K ∼𝒩(0, σ^2𝐈). The approximate gradient is then used to update x', with the final x' used as examples within augmentation multiplicity.
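A sketch of this attack as a projected-gradient loop on the smoothed cross-entropy is shown below; the number of steps, step size, and projection details are illustrative rather than the exact SmoothAdv settings.

```python
import torch
import torch.nn.functional as F

def smooth_adv_attack(model, x, y, sigma, epsilon, steps=4, step_size=None, K=4):
    # x: (1, ...) input, y: (1,) label; returns a perturbed input within l2 distance epsilon.
    step_size = step_size if step_size is not None else 2.0 * epsilon / steps
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        noisy = x_adv.repeat(K, *([1] * (x.dim() - 1))) + sigma * torch.randn(
            (K,) + tuple(x.shape[1:]), device=x.device)
        probs = F.softmax(model(noisy), dim=-1)                   # F(x' + eta_j)
        loss = -torch.log(probs[:, y.item()].mean() + 1e-12)      # objective to maximize
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step_size * grad / (grad.norm() + 1e-12)       # gradient ascent
        delta = x_adv - x
        delta = delta * torch.clamp(epsilon / (delta.norm() + 1e-12), max=1.0)  # l2 projection
        x_adv = x + delta
    return x_adv.detach()
```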
§.§ Metrics for Interpreting Robustness
To elicit some insights into why certain training methods may produce better-performing models than others, we investigate several per-data point metrics associated with robustness, the input gradient norm, input Hessian spectral norm, and local-Lipschitz constant, and study their relationships with CR. The first two metrics measure the local smoothness of the loss landscape with respect to the input space. Taylor's approximation can be used to show a direct link between these two metrics and the worst-case change in loss from small input perturbations. Due to this connection, prior works directly regularized them in order to train more robust models <cit.>.
Gradients and Hessians are highly local quantities that are only connected to robustness through Taylor's approximation at small radii around the input data point. Consequently, they may not be informative at larger radii used to certify robustness. Thus, we also compare models using an empirical estimate of the average local Lipschitz constant of the model's penultimate layer. By viewing the network as a feature extractor composed with a linear classifier, using the penultimate layer captures the worst-case sensitivity of the feature extractor to perturbations of the data. This metric was initially proposed by <cit.> to investigate adversarial robustness and is given by
1/n∑_i=1^n max_x_i' ∈ B_∞(x_i,ζ)‖ f(x_i)-f(x_i')‖_1/‖ x_i-x_i'‖_∞ ,
where the maximum is approximated in the same manner as is used for adversarial example generation, typically projected gradient descent (PGD) <cit.>.
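Sketching the estimator: for each input, a PGD-style loop maximizes the ratio above over the ℓ_∞ ball and the results are averaged. The `features` callable (the penultimate-layer map), the radius, and the step schedule are assumptions for illustration.

```python
import torch

def local_lipschitz_estimate(features, xs, zeta=8.0 / 255, steps=10, step_size=2.0 / 255):
    # features: maps an input batch to penultimate-layer activations; xs: (n, ...) inputs.
    total = 0.0
    for x in xs:
        x = x.unsqueeze(0)
        x_pert = (x + zeta * (2 * torch.rand_like(x) - 1)).detach()   # random start in the ball
        for _ in range(steps):
            x_pert = x_pert.detach().requires_grad_(True)
            num = (features(x) - features(x_pert)).abs().sum()        # ||f(x) - f(x')||_1
            den = (x - x_pert).abs().max().clamp_min(1e-12)           # ||x - x'||_inf
            grad, = torch.autograd.grad(num / den, x_pert)
            x_pert = x_pert.detach() + step_size * grad.sign()        # PGD ascent step
            x_pert = x + (x_pert - x).clamp(-zeta, zeta)              # project onto the l_inf ball
        with torch.no_grad():
            num = (features(x) - features(x_pert)).abs().sum()
            den = (x - x_pert).abs().max().clamp_min(1e-12)
            total += (num / den).item()
    return total / len(xs)
```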
§ CONCLUSION
We achieve better certified robustness with DPSGD training through augmentations and randomized smoothing, reconciling two crucial objectives for trustworthy ML, namely privacy and robustness.
To overcome the theoretical and practical challenges that arise from the combination of both approaches, we rely on state-of-the-art DP training with augmentations that does not incur additional privacy costs. We employ various regularizations, and adversarial training methods to enhance robustness.
Our resulting DP-CERT framework is modular and supports multiple combinations of these methods.
Through our extensive experimental study, we confirm that DPSGD training alone, even with state-of-the-art convergence, does not provide satisfactory certified robustness. However, introducing a small number of computationally inexpensive augmentations into training, such as adding Gaussian noise, suffices to yield strong privacy protection and certified robustness.
By thoroughly analyzing per-sample metrics, we show that the certified radius correlates with the local Lipschitz constant and smoothness of the loss surface; this opens a new path to diagnosing when private models will fail to be robust.
To conclude, our findings yield concrete recommendations for the community to simultaneously achieve CR and DP, providing a valuable contribution towards more trustworthy ML. When training from scratch, Gaussian augmentations (not adversarial) should be used with DPSGD, and randomized smoothing applied at inference time. For fine-tuning pretrained models, adding stability regularization also helps accuracy, and leads to much lower local Lipschitz constants.
Acknowledgments.
DG, FB, and NP would like to acknowledge sponsors who support their research with financial and in-kind contributions: CIFAR
through the Canada CIFAR AI Chair, NSERC through a Discovery Grant, the Ontario Early Researcher Award, and the Sloan Foundation. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector
Institute.
abbrvnat
§ DIFFERENTIAL PRIVACY BACKGROUND
§.§ Differential Privacy and DPSGD
Differential Privacy (DP) <cit.> is a formal framework that aims to provide mathematical guarantees on the privacy of individual data points in a dataset, while allowing one to learn properties over the entire dataset. More formally a randomized mechanism M fulfills (ε, δ)-DP if
Pr[M(D) ∈ S] ≤ e^εPr[M(D') ∈ S] + δ,
where D and D' are neighboring datasets (i.e. datasets differing in only one data point), S is the set of output values of M, and ϵ and δ are privacy parameters.
ε indicates the level of obtained privacy, with a smaller value of ε indicating a stronger privacy guarantee. δ accounts for the probability that the algorithm can violate the privacy guarantee, i.e., not stay within the specified ε. A larger value of δ increases the chance of privacy leakage.
The most popular algorithm for implementing DP guarantees in ML is differentially private stochastic gradient descent (DPSGD) <cit.>. It extends the standard SGD algorithm with two additional steps, namely gradient clipping and noise addition.
While the former bounds the sensitivity of the model update, the latter implements the privacy guarantee by preventing the gradients from revealing too much information about individual data points.
More formally, let θ_t denote the model parameters at training iteration t. At each iteration t, DPSGD computes the gradient of the loss function with respect to θ_t at an individual data point x as ∇_θ L(θ_t, x). The data point's gradient is then clipped to a maximum norm C using the operation clip(∇_θ L(θ_t, x),C), which replaces the gradient with a vector of the same direction but smaller magnitude if its norm exceeds C. The clipped gradients from all data points in a batch are aggregated, then perturbed by adding random noise from 𝒩(0,ρ^2 C^2 𝐈), where ρ is the noise scale parameter. We detail DPSGD in Algorithm <ref>.
§.§ DP-PSAC
We used DP-PSAC <cit.> in addition to DPSGD to ensure differential privacy while training. Per-sample adaptive clipping (PSAC) is one of a number of approaches that try to reduce the bias from per-example clipping <cit.>, and it is motivated by maximizing the signal-to-noise ratio of gradient updates. It has shown the best performance on several datasets including MNIST, FashionMNIST and CIFAR10. To compare with DPSGD, we incorporated DP-PSAC into our experiments.
DP-PSAC is similar to DPSGD, with the exception that it employs a different clipping method,
clip_C,r(g_i,t)=C· g_i,t/(‖ g_i,t‖ + r/(‖ g_i,t‖+r)).
where g_i,t denotes the loss gradient for the i-th sample at iteration t. The motivation for this clipping method is that per-example gradients with small norms come from data points on which the model has already converged, and these gradients are often orthogonal to the mini-batch gradient. Hence, small-norm gradients should not make disproportionately large contributions to the batch gradient, as they do under per-sample normalization methods <cit.>. Compared to such approaches, PSAC reduces the influence of small-norm gradients.
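As a small illustration, the PSAC rule would replace the clip_C step of DPSGD with something like the following, where the value of r is a hyperparameter and the one used here is only an example:

```python
import torch

def psac_clip(g, C, r=1e-3):
    # Per-sample adaptive clipping: scale g by C / (||g|| + r / (||g|| + r)).
    # Unlike plain per-sample normalization, gradients with very small norms
    # are not blown up to norm C, reducing their influence on the batch gradient.
    norm = g.norm()
    return C * g / (norm + r / (norm + r))
```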
§ ADDITIONAL BACKGROUND AND RELATED WORK
§.§ Randomized Smoothing
Previous works have tackled improving the certified robustness of randomized smoothing methods in a variety of ways. The dominant approach for doing so involves modifications to training the base classifier so as to increase robustness and accuracy under Gaussian perturbations. The simplest approach involves adding noise to inputs during training <cit.>, while other works utilize regularization <cit.>, ensembling <cit.>, and adversarial training <cit.>. While these modifications have been independently studied in the context of improving the certified accuracy of randomized smoothing classifiers, we are the first work integrating these methods with private training through augmentation multiplicity. We provide additional information on two of these training modification methods mentioned in Section <ref>.
§.§.§ SmoothAdv
One of the most effective methods for improving the performance of randomized smoothing classifiers utilizes adversarial training of the classifier. The method, SmoothAdv proposed in <cit.>, was motivated by the idea that to improve certified accuracy at a larger certification radius one needs a classifier that is more robust to local perturbations, and the best known method of achieving that is through adversarial training.
Given a soft classifier F: ℝ^d → P(Y) where P(Y) is the set of probability distributions over Y, its smoothed soft classifier G is defined as:
G(x)=(F * 𝒩(0, σ^2I))(x) = 𝔼_δ∼𝒩(0, σ^2I)[F(x+δ)].
The goal of SmoothAdv is to find a point x̂ that maximizes the loss of G in an l_2 ball around x for the cross entropy loss. They use a projected gradient descent variant to approximately find x̂, and define J(x')=l_CE(G(x'),y) to compute
∇_x'J(x')=∇_x'(-log𝔼_δ∼𝒩(0, σ^2I)[F(x'+δ)_y]).
Since the expectation in Equation <ref> is difficult to compute exactly, a Monte Carlo approximation is used by sampling noise δ_1, ...,δ_m ∼𝒩(0, σ^2I) to approximately compute ∇_x'J(x'),
∇_x'J(x')≈∇_x'(-log(1/m∑_i=1^m[F(x'+δ_i)_y])).
Finally, x' is updated by taking a step in the direction of ∇_x'J(x'), and the final x' is used to train the classifier.
§.§.§ MACER
Similar to the Stability training method from Equation <ref>, MACER <cit.> also modifies the loss for optimization so that the final model has higher certified accuracy at a larger certified radius. In contrast to SmoothAdv, regularizing the model to be more robust in this sense does not require generation of adversarial examples and instead can be optimized directly. The method achieves this by decomposing the error of the smoothed classifier into a classification error term and a robustness error term. The former captures the error from the smoothed classifier misclassifying a given datapoint and the latter captures the error of a certified radius being too small.
Robustness error, much like hard label classification error, cannot be optimized directly. To address this, MACER proposes a surrogate loss to minimize the robustness error term—a hinge loss on the data (x,y) for which g_θ(x) = y,
max{0, γ - (Φ^-1(f̂_θ(x)_y) - Φ^-1(f̂_θ(x)_ŷ≠ y))},
where f̂_θ(x) denotes the average of the softmax probabilities on Gaussian perturbations of x, f̂_θ(x)_y denotes the softmax probability f̂_θ(x) assigns to the true class y, and f̂_θ(x)_ŷ≠ y denotes the maximum softmax probability over classes other than the true class. This loss term is added to the cross-entropy loss of the soft smoothed classifier as a regularization term.
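A batch-level sketch of this surrogate term is given below; the clamping of probabilities, the number of noise samples, and the margin parameter γ are illustrative choices, and the full MACER objective also adds the cross-entropy term on the smoothed classifier.

```python
import torch
import torch.nn.functional as F

def macer_robustness_loss(model, x, y, sigma, K=16, gamma=8.0, eps=1e-4):
    # x: (B, ...), y: (B,). Average the softmax over K Gaussian perturbations per example.
    B = x.shape[0]
    noisy = x.repeat_interleave(K, dim=0) + sigma * torch.randn(
        (B * K,) + tuple(x.shape[1:]), device=x.device)
    probs = F.softmax(model(noisy), dim=-1).view(B, K, -1).mean(dim=1)   # \hat f_theta(x)
    probs = probs.clamp(eps, 1 - eps)                                    # keep Phi^{-1} finite
    p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
    p_other = probs.scatter(1, y.unsqueeze(1), 0.0).max(dim=1).values    # top non-true class
    correct = (p_true > p_other).float()                                 # only correctly classified samples
    std_normal = torch.distributions.Normal(0.0, 1.0)
    margin = std_normal.icdf(p_true) - std_normal.icdf(p_other)
    hinge = torch.clamp(gamma - margin, min=0.0)
    return (hinge * correct).sum() / B
```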
§.§ DPSGD and Robustness
The adversarial robustness of differentially private models has been studied in several prior works. Tursynbek et al. <cit.> demonstrated that models trained with DPSGD are sometimes more vulnerable to input perturbations. Boenisch et al. <cit.> further consolidate this claim with more experiments, and showed that improper choices of hyperparameters can lead to gradient masking. Zhang and Bu <cit.> find that the success of adversarially training robust models with DPSGD depends greatly on choices of hyperparameters, namely smaller clipping thresholds and learning rates, differing from those that produce the most accurate models. Furthermore, they found that pretraining models helps further mitigate the privacy-robustness-accuracy tradeoff. These works reveal interesting adversarial robustness characteristics of DP models, however, they do not endeavor to improve the robustness of DP models.
DP-ADV <cit.> proposes combining adversarial training with DPSGD. They achieve this by replacing the original example with an adversarially crafted example, obtained with an FGSM or PGD attack. Their work is orthogonal to ours in that they focus on robustness to adversarial attacks, whereas we focus on certified robustness. Additionally, they do not perform any data augmentation, which has been shown to be effective against adversarial attacks <cit.>.
There are a few prior works that have studied certified robustness with differential privacy guarantees. Phan et al. <cit.> first introduced a framework called Secure-SGD, which aims to achieve both certified robustness and differential privacy simultaneously. They use a PixelDP approach, proposed by Lecuyer et al. <cit.>, to attain certified robustness and introduced the Heterogeneous Gaussian Mechanism, which involves adding heterogeneous Gaussian noise instead of element-wise Gaussian noise to the gradient.
Another work <cit.> introduced the StoBatch algorithm to guarantee DP and certified robustness. First, it employs an Autoencoder (AE) <cit.> and a functional mechanism (objective perturbation <cit.>) to reconstruct input examples while preserving DP. Subsequently, this reconstructed data is used to train a deep neural network. They implement adversarial training <cit.> to achieve both robustness and DP for the neural network.
Tang et al. <cit.> propose transforming input gradients with perturbation during training, and introduced the Multivariate Gaussian Mechanism. This mechanism allows them to achieve the same DP guarantee with less noise added to the gradient. They follow the architecture of denoised smoothing, adding differentially private noise to a pre-trained classifier.
Compared with existing work, DP-CERT 1) presents a simple and effective augmented training scheme, allowing practitioners to introduce different adversarial training techniques through noising, regularization, and adversarially crafted examples, 2) does not rely on a denoiser at inference time, reducing inference latency, and 3) can be used for training both randomly initialized and pre-trained networks.
§ EXPERIMENTAL DETAILS
§.§ Dataset Statistics
We conduct experiments on MNIST, Fashion-MNIST and CIFAR10. The MNIST database of handwritten digits has a training set of 60,000 examples, and a test set of 10,000 examples, as does Fashion-MNIST. Each example in MNIST and Fashion-MNIST is a 28×28 grayscale image, associated with one label from 10 classes.
The CIFAR-10 dataset consists of 60,000 RGB images from 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images of size 32×32×3.
§.§ Code and Implementation
Our code will be released upon publication. We give credit to the original repository for the implementation of SmoothAdv, MACER, stability, and consistency regularization.[<https://github.com/jh-jeong/smoothing-consistency>] For CERTIFY[<https://github.com/locuslab/smoothing>] and local Lipschitz constant[<https://github.com/yangarbiter/robust-local-lipschitz>] evaluations, we use the code provided by the original authors.
§ ADDITIONAL RESULTS
§.§ Additional Comparative Study
Figure <ref> shows the approximate certified accuracy as the certified radius is increased on MNIST and Fashion-MNIST, corresponding to Table <ref> in the main text, for different values of the noise σ used during training. All variants of DP-CERT greatly outperform the baseline methods which do not specifically encourage robustness. Similar plots for CIFAR10 were shown in Figure <ref> in the main text. We also compare implementations of DP-CERT to the prior approaches TransDenoiser, SecureSGD, and StoBatch on CIFAR10 in Figure <ref>. All variants of DP-CERT achieve state-of-the-art certified accuracy on CIFAR10 for differentially private models, with a much smaller pre-trained model compared to <cit.>.
§.§ Additional Ablation Study
To extend Figure <ref> from the main text, we show the complete ablation study results in Figures <ref> and <ref> on MNIST and Fashion-MNIST under σ∈{0.25, 0.5, 1.0}.
Figure <ref> shows the effects of adding consistency regularization, or changing the DP clipping method to PSAC. Neither approach makes significant changes to the certified accuracy.
Figure <ref> shows the effects of changing the multiplicity of augmentations. Again, there is little difference in certified accuracy when using more than two augmentations, so we advocate using the smallest amount which is least expensive computationally. We emphasize that using no augmentations is significantly worse than using two or more – the case without augmentations is simply DPSGD which was compared in Table <ref> and Figures <ref> and <ref>. One of our main conclusions is that adding a small number of Gaussian augmentations to DPSGD is sufficient to greatly improve certified robustness.
§.§ Additional Per-sample Metric Analysis
Figures <ref> through <ref> show the distributions of our three proposed metrics for interpreting robustness, the input gradient norms, input Hessian spectral norms, and local Lipschitz constants. We use various training methods on MNIST and Fashion-MNIST under σ∈{0.25, 0.5, 1.0}. Echoing the analysis of RQ1 in Section <ref>, DPSGD produces a bimodal distribution for the Hessian and gradient norms, while Regular training exhibits a log-normal distribution and smaller tails for large metric values. PSAC shifts the distributions to be closer to those of Regular training by reducing clipping bias. DP-CERT methods, on the other hand, shift the distribution towards smaller metric values, resulting in higher certified accuracy. An exception is DP-Stability, which has significantly higher average gradient and Hessian norms, but without the mode at very high values, and with lower local Lipschitz constants than the other three variants.
Figure <ref> shows the distribution of certified radii for baseline training methods and an instance of DP-CERT for the same settings as Figure <ref>. Whereas Regular, DPSGD, and PSAC training all have a large spike of samples that cannot be certified at any level, DP-Gaussian achieves certified radii above 1.0 for most samples, with the mode even higher at 1.6.
While our main focus has been on certified robustness, we briefly compare to adversarial robustness against common attacks. Figure <ref> shows the adversarial accuracy under a l_∞-FGSM attack, with the attack strength in {0.0005, 0.01, 0.1, 0.5, 1}. Consistent with the ranking of the average local Lipschitz constant from Figures <ref> through <ref>, DP-Stability consistently outperforms other approaches, while DP-Gaussian, DP-SmoothAdv, and DP-MACER all achieve similar adversarial accuracy above that of the unprotected baselines.
Finally, we study the metric distributions for individual samples that can or cannot be certified above a threshold τ, similar to Figure <ref> in the main text. Figures <ref> and <ref> respectively show the input gradient norm and input Hessian spectral norm distributions for the baselines and proposed methods, with different threshold values τ∈{0.25, 0.5, 1.0}. In each subfigure the top row shows samples that can be certified at radius larger than τ, and the bottom row shows samples that cannot. We only show the plots for MNIST under σ=0.5 for brevity. We note that the shapes of the distributions between gradients and Hessians resemble each other closely. The analysis of RQ2 in Section <ref> also applies here; for example, in Figure <ref>, the samples with certified radii below the threshold have slightly higher average input gradient norm. As the threshold τ increases, more examples with higher input gradient norm end up below the certified radius threshold. We find that the samples on which the models are least robust tend to be samples where the gradient and Hessian norms are largest. From this observation we expect that training methods that reduce the prevalence of large gradient and Hessian norms should be the most robust, which is indeed confirmed by DP-Stability which does not have the mode at very large values in Figures <ref> through <ref>, and the best adversarial robustness in Figure <ref>.
|
http://arxiv.org/abs/2306.03715v1
|
20230606142334
|
Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability
|
[
"Jianing Zhu",
"Hengzhuang Li",
"Jiangchao Yao",
"Tongliang Liu",
"Jianliang Xu",
"Bo Han"
] |
cs.LG
|
[
"cs.LG"
] |
Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

Jianing Zhu¹, Hengzhuang Li¹, Jiangchao Yao²·³, Tongliang Liu⁴·⁵, Jianliang Xu¹, Bo Han¹

¹Department of Computer Science, Hong Kong Baptist University  ²CMIC, Shanghai Jiao Tong University  ³Shanghai AI Laboratory  ⁴Mohamed bin Zayed University of Artificial Intelligence  ⁵Sydney AI Centre, The University of Sydney

Correspondence to: Bo Han <[email protected]>, Jiangchao Yao <[email protected]>
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip the models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we generally discover the existence of an intermediate stage of a model trained on in-distribution (ID) data having higher OOD detection performance than that of its final stage across different settings, and further identify one critical data-level attribution to be learning with the atypical samples. Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data. Our method utilizes a mask to figure out the memorized atypical samples, and then finetune the model or prune it with the introduced mask to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at: <https://github.com/tmlr-group/Unleashing-Mask>.
§ INTRODUCTION
Out-of-distribution (OOD) detection has drawn increasing attention when deploying machine learning models into the open-world scenarios <cit.>. Since the test samples can naturally arise from a label-different distribution, identifying OOD inputs from in-distribution (ID) data is important, especially for those safety-critical applications like autonomous driving and medical intelligence. Previous studies focus on designing a series of scoring functions <cit.> for OOD uncertainty estimation or fine-tuning with auxiliary outlier data to better distinguish the OOD inputs <cit.>.
Despite the promising results achieved by previous methods <cit.>, limited attention is paid to considering whether the given well-trained model is the most appropriate basis for OOD detection. In general, models deployed for various applications have different original targets (e.g., multi-class classification <cit.>) instead of OOD detection <cit.>. However, most representative score functions, e.g., MSP <cit.>, ODIN <cit.>, and Energy <cit.>, uniformly leverage the given models for OOD detection <cit.>. The above target-oriented discrepancy naturally motivates the following critical question: does the given well-trained model have the optimal OOD discriminative capability? If not, how can we find a more appropriate counterpart for OOD detection?
In this work, we start by revealing an interesting empirical observation, i.e., there always exists a historical training stage where the model has a higher OOD detection performance than the final well-trained one (as shown in Figure <ref>), spanning among different OOD/ID datasets <cit.> under different learning rate schedules <cit.> and model structures <cit.>. It shows the inconsistency between gaining better OOD discriminative capability <cit.> and pursuing better performance on ID data during training.
Through in-depth analysis from various perspectives (as illustrated in Figure 2), we identify one possible data-level attribution: the memorization of atypical samples (relative to others at the semantic level) that are hard for the model to generalize from. Seeking zero training error on those samples makes the model more confident on unseen OOD inputs.
The above analysis inspires us to propose a new method, namely Unleashing Mask (UM), which excavates the overlaid detection capability of a well-trained given model by alleviating its memorization of those atypical samples (as illustrated in Figure <ref>) of ID data. In general, we aim to backtrack to its previous stage with better OOD discriminative capability. To achieve this target, two essential issues arise: (1) the model that is well-trained on ID data has already memorized some atypical samples; (2) how can we make the given model forget those memorized atypical samples? Accordingly, our proposed UM contains two parts that utilize different insights to address these two problems. First, as atypical samples are more sensitive to changes in the model parameters, we initialize a mask with a specific masking rate to mine these samples through the constructed parameter discrepancy. Second, with the loss reference estimated by the mask, we conduct constrained gradient ascent for model forgetting (i.e., Eq. (<ref>)), which encourages the model to finally stabilize around the optimal stage. To avoid severely sacrificing the original task performance on ID data, we further propose UM Adopts Pruning (UMAP), which tunes the introduced mask with the newly designed objective.
We conduct extensive experiments (in Section <ref> and Appendixes <ref> to <ref>) to present the working mechanism of our proposed methods. We have verified the effectiveness with a series of OOD detection benchmarks mainly on two common ID datasets, i.e., CIFAR-10 and CIFAR-100. Under the various evaluations, our UM, as well as UMAP, can indeed excavate the better OOD discriminative capability of the well-trained given models and the averaged FPR95 can be reduced by a significant margin. Finally, a range of ablation studies, verification on the ImageNet pretrained model, and further discussions from both empirical and theoretical views are provided. Our main contributions are as follows,
* Conceptually, we explore the OOD detection performance via a new perspective, i.e., backtracking the initial model training phase without regularizing by any auxiliary outliers, different from most previous works that start with the well-trained model on ID data.
* Empirically, we reveal the potential OOD discriminative capability of the well-trained model, and figure out one data-level attribution of concealing it during original training is memorizing the atypical samples.
* Technically, we propose a novel Unleashing Mask (UM) and its practical variant UMAP, which utilizes the newly designed forgetting objective with ID data to excavate the intrinsic OOD detection capability.
* Experimentally, we conduct extensive explorations to verify the overall effectiveness of our method in improving OOD detection performance, and perform various ablations to provide a thorough understanding.
§ PRELIMINARIES
We consider multi-class classification as the original training task <cit.>, where 𝒳⊂ℝ^d denotes the input space and 𝒴={1,…, C} denotes the label space. In practice, a reliable classifier should be able to figure out OOD inputs, which can be considered a binary classification problem. Given 𝒫, the distribution over 𝒳×𝒴, we consider 𝒟_in as the marginal distribution of 𝒫 for 𝒳, namely, the distribution of ID data. At test time, the environment can present a distribution 𝒟_out over 𝒳 of OOD data. In general, the OOD distribution 𝒟_out is defined as an irrelevant distribution whose label set has no intersection with 𝒴 <cit.> and thus should not be predicted by the model. A decision can be made with the threshold λ:
D_λ(x;f) =
 ID,   if S(x) ≥ λ,
 OOD,  if S(x) < λ.
Building upon the model f∈ℋ:𝒳→ℝ^C trained on ID data with logit outputs, the goal is to utilize a scoring function S:𝒳→ℝ to distinguish the inputs of 𝒟_in from those of 𝒟_out via S(x). Typically, if the score is larger than the threshold λ, the associated input x is classified as ID, and vice versa. We consider several representative scoring functions designed for OOD detection, e.g., MSP <cit.>, ODIN <cit.>, and Energy <cit.>. More detailed definitions and implementations are provided in Appendix <ref>.
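For concreteness, a minimal sketch of this detection pipeline is given below, assuming a PyTorch classifier f that outputs logits; the function names and the threshold value are illustrative, and the threshold would in practice be calibrated on held-out ID data (e.g., at 95% TPR).

import torch
import torch.nn.functional as F

def msp_score(logits):
    # Maximum softmax probability: larger values indicate ID.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def neg_energy_score(logits, T=1.0):
    # Negative free energy, T * logsumexp(f(x)/T): larger values indicate ID.
    return T * torch.logsumexp(logits / T, dim=-1)

@torch.no_grad()
def detect_id(f, x, score_fn=msp_score, threshold=0.5):
    # Implements D_lambda(x; f): returns True where S(x) >= lambda (predicted ID).
    return score_fn(f(x)) >= threshold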
To mitigate the issue of over-confident predictions for some OOD data <cit.>, recent works <cit.> utilize an auxiliary unlabeled dataset to regularize the model behavior. Among them, one representative baseline is Outlier Exposure (OE) <cit.>. OE can further improve the detection performance by fine-tuning the model f(·) on a surrogate OOD distribution 𝒟^s_out, and its corresponding learning objective is defined as follows,
ℒ_f = 𝔼_𝒟_in[ℓ_CE(f(x),y)] + λ𝔼_𝒟^s_out[ℓ_OE(f(x))],
where λ is the balancing parameter, ℓ_CE(·) is the Cross-Entropy (CE) loss, and ℓ_OE(·) is the Kullback-Leibler divergence to the uniform distribution, which can be written as ℓ_OE(f(x)) = -∑_k log softmax_k(f(x)) / C, where softmax_k(·) denotes the k-th element of a softmax output. The OE loss ℓ_OE(·) is designed for model regularization, making the model learn from surrogate OOD inputs to return low-confidence predictions <cit.>.
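As a reference point for the fine-tuning baselines used later, a minimal sketch of this OE objective is shown below, assuming labeled ID batches and unlabeled surrogate-outlier batches; λ = 0.5 matches the value reported in the implementation details of the Appendix.

import torch.nn.functional as F

def oe_objective(f, x_in, y_in, x_out, lam=0.5):
    # Standard CE on ID data plus cross-entropy to the uniform distribution on outliers.
    ce = F.cross_entropy(f(x_in), y_in)
    # -(1/C) * sum_k log softmax_k(f(x)), averaged over the outlier batch.
    oe = -F.log_softmax(f(x_out), dim=-1).mean()
    return ce + lam * oe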
Although previous works show promising results via designing scoring functions or regularizing models with different auxiliary outlier data, few of them investigated or excavated the original discriminative capability of the well-trained model using ID data. In this work, we introduce the layer-wise mask m <cit.> to mine the atypical samples that are memorized by the model. Accordingly, the decision can be rewritten as D(x;m⊙ f), and the output of a masked model is defined as m⊙ f(x).
§ PROPOSED METHOD: UNLEASHING MASK
In this section, we introduce our new method, i.e., Unleashing Mask (UM), to reveal the potential OOD discriminative capability of the well-trained model. First, we present and discuss the important observation that inspires our methods (Section <ref>). Second, we provide the insights behind the two critical parts of our UM (Section <ref>). Lastly, we introduce the overall framework and its learning objective, as well as a practical variant of UM, i.e., UMAP (Section <ref>).
§.§ Overlaid OOD Detection Capability
First, we present the phenomenon of the inconsistency between pursuing better OOD discriminative capability and smaller training errors during the original task.
Empirically, as shown in Figure <ref>, we trace the OOD detection performance during model training over multiple runs of the experiments. Across three different OOD datasets in Figure <ref>, we observe the existence of better detection performance at intermediate stages, measured by the FPR95 metric based on the Energy <cit.> score. The generality of this phenomenon is also demonstrated under different learning rate schedules, model structures, and ID datasets in Figures <ref> and <ref>. Since no auxiliary outliers are involved, it motivates us to explore the underlying mechanism of the training process with ID data.
We further delve into the learning dynamics from various perspectives in Figure <ref>, and reveal a critical data-level attribution for the OOD discriminative capability. In Figure <ref>, we find that the training loss has reached a reasonably small value[Note that it is not the conventional overfitting <cit.> as the testing loss is still decreasing. In Section <ref> and Appendix <ref>, we provide both an empirical comparison with some targeted strategies and a conceptual comparison with them.] at Epoch 60, where the detection performance achieves a satisfactory level. However, if we further minimize the training loss, the trend of the FPR95 curve shows almost the opposite direction to both the training and testing loss or accuracy (see Figures <ref> and <ref>). The comparison of the ID/OOD distributions is presented in Figure <ref>. To be specific, the statistics of the two distributions indicate that the gap between the ID and OOD data narrows as their overlap grows along with the training. After Epoch 60, although the model becomes more confident on ID data, which satisfies part of the calibration target <cit.>, its predictions on the OOD data also become more confident, which is unexpected. Using the margin value defined in logit space (see Eq. (<ref>)), we gather the statistics with the Energy score in Figure <ref>. The misclassified samples are found to be close to the decision boundary and have a high uncertainty level in the model prediction. Accordingly, we extract those samples that were learned by the model at this period. As shown in Figures <ref>, <ref> and <ref>, the misclassified samples learned after Epoch 60 present much more atypical semantic features, which results in more diverse feature embeddings and may impair OOD detection. As deep neural networks tend to first learn the data with typical features <cit.>, we attribute the inconsistent trend to memorizing those atypical data at the later stage.
§.§ Unleashing the Potential Discriminative Power
In general, the models developed for the original classification tasks always seek better performance (e.g., higher testing accuracy and lower training loss) in practice. However, the inconsistent trend revealed above provides the possibility to unleash the potential detection power by only considering the ID data used in training. To this end, two important issues need to be addressed: (1) the well-trained model may have already memorized some atypical samples that cannot be easily figured out; (2) how can we make the given model forget those atypical samples?
Atypical mining with constructed discrepancy.
As shown in Figures <ref> and <ref>, the training statistics provide limited information to accurately differentiate the stage that learns on typical or atypical data. We thus construct a parameter discrepancy to mine the atypical samples from a well-trained given model, in light of the learning dynamics <cit.> of deep neural networks and the model uncertainty representation <cit.>. Specifically, we employ a randomly initialized layer-wise mask that is applied to all layers, which is consistent with the mask generation in the conventional pruning pipeline <cit.>. In Figure <ref>, we provide empirical evidence showing that we can figure out atypical samples with a certain mask ratio δ, through which we can gradually mine the model stage that misclassifies atypical samples. We provide more discussion about the underlying intuition of masking in Appendix <ref>.
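To make the construction concrete, a sketch of such a randomly masked copy is given below. It follows the layer-wise quantile procedure described in Appendix <ref>, where the mask ratio denotes the fraction of weights kept per layer; the function name is illustrative rather than the exact implementation.

import copy
import torch

@torch.no_grad()
def masked_copy(model, mask_ratio=0.995):
    # Build a fixed masked copy of the well-trained model: per layer, weights whose
    # randomly sampled scores exceed the mask_ratio-quantile are zeroed, i.e., a small
    # fraction (1 - mask_ratio) of the weights is knocked out.
    masked = copy.deepcopy(model)
    for module in masked.modules():
        weight = getattr(module, "weight", None)
        if isinstance(weight, torch.Tensor):
            scores = torch.rand_like(weight)
            threshold = torch.quantile(scores.flatten(), mask_ratio)
            weight.data.mul_((scores <= threshold).float())
    return masked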
Model forgetting with gradient ascent.
As the training loss reaches zero at the final stage of the given model, we need an extra optimization signal to forget those memorized atypical samples. Considering the consistent trend before the potential optimal stage (e.g., before Epoch 60 in Figure <ref>), the optimization signal also needs to keep the model update from being so greedy that it drops the discriminative features useful for OOD detection. Starting with the well-trained given model, we can employ gradient ascent <cit.> to forget the targeted samples, while the tuning phase should also prevent further updates once it reaches the expected stage. As for another implementation choice, e.g., retraining the model from scratch for our targets, we discuss it in Appendix <ref>.
§.§ Method Realization
Based on previous insights, we present our overall framework and the learning objective of the proposed UM and UMAP for OOD detection. Lastly, we discuss their compatibility with either the fundamental scoring functions or the outlier exposure approaches utilizing auxiliary outliers.
Framework. As illustrated in Figure <ref>, our framework consists of two critical components for uncovering the intrinsic OOD detection capability: (1) the initialized mask with a specific masking rate for constructing the output discrepancy with the original model; (2) the subsequent adjustment for alleviating the memorization of atypical samples. The overall workflow starts with estimating the loss value of misclassifying those atypical samples and then conducts tuning on the model or the masked output to forget them.
Forgetting via Unleashing Mask (UM). Based on previous insights, we introduce the forgetting objective as,
min ℒ_UM = min_f |ℓ_CE(f) - ℓ̄_CE(m_δ⊙ f^*)| + ℓ̄_CE(m_δ⊙ f^*),
where m_δ is the layer-wise mask with the masking rate δ, ℓ_CE is the CE loss, ℓ̄_CE is the averaged CE loss over the ID training data, |·| indicates the absolute value, and m_δ⊙ f^* denotes the masked output of the fixed pre-trained model, which is used to estimate the loss constraint for the forgetting objective. The value of ℓ̄_CE(m_δ⊙ f^*) remains constant during the whole fine-tuning process. Concretely, the well-trained model will start to optimize itself again if it has memorized the atypical samples and achieves an almost zero loss value: we provide a positive gradient signal when the current loss value is lower than the estimated one, and vice versa. The model is expected to finally stabilize around the stage that forgets those atypical samples. To be more specific, a mini-batch of ID samples is forwarded to the (pre-trained) model and the loss in Eq. (<ref>) is computed automatically. Based on our introduced layer-wise mask, the atypical samples are easier to induce large loss values than the rest, and will be forced to be wrongly classified in the end-to-end optimization, so that atypical samples are forgotten without being explicitly identified.
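A minimal sketch of this objective is given below, assuming the constant ℓ̄_CE(m_δ⊙ f^*) has been estimated once from a masked copy of the given model (e.g., via the masked_copy sketch above); the helper names are illustrative.

import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_loss_constraint(masked_model, loader):
    # Averaged CE loss of the fixed masked model over the ID training data.
    total, count = 0.0, 0
    for x, y in loader:
        total += F.cross_entropy(masked_model(x), y, reduction="sum").item()
        count += y.numel()
    return total / count

def um_loss(model, x, y, loss_constraint):
    # |CE(f) - constant| + constant: acts as constrained gradient ascent, pushing the
    # CE loss back up whenever it drops below the estimated misclassification level.
    ce = F.cross_entropy(model(x), y)
    return (ce - loss_constraint).abs() + loss_constraint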
Unleashing Mask Adopts Pruning (UMAP). Considering the potential negative effect on the original task performance when tuning for forgetting, we further propose a variant, UM Adopts Pruning (UMAP), which tunes based on the masked output (e.g., replacing ℓ_CE(f) with ℓ_CE(m̂_p⊙ f) in Eq. (<ref>)) using a functionally different mask m̂_p with its pruning rate p as follows,
min ℒ_UMAP = min_m̂_p∈[0,1]^n |ℓ_CE(m̂_p⊙ f) - ℓ̄_CE(m_δ⊙ f^*)| + ℓ̄_CE(m_δ⊙ f^*).
Different from the objective of UM (i.e., Eq. (<ref>)), which minimizes the loss value over the model parameters, the objective of UMAP minimizes the loss over the mask m̂_p to achieve the target of forgetting atypical samples. UMAP provides an extra mask to restore the detection capability without affecting the model parameters used for inference on the original task, indicating that UMAP is a more practical choice in real-world applications (as empirically verified in our experiments, e.g., Table <ref>). We present the algorithms of UM (in Algorithm <ref>) and UMAP (in Algorithm <ref>) in Appendix <ref>.
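For intuition, one possible realization of such a learnable mask over frozen pre-trained weights is sketched below in the edge-popup style referenced in Appendix <ref>; it is only an illustrative single-layer sketch (straight-through gradients to per-weight scores), not the exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    # Frozen pre-trained weights; a learnable score per weight decides which weights
    # survive at the given prune rate p, with straight-through gradients to the scores.
    def __init__(self, linear, prune_rate=0.5):
        super().__init__()
        self.register_buffer("weight", linear.weight.detach().clone())
        self.register_buffer("bias", None if linear.bias is None else linear.bias.detach().clone())
        self.scores = nn.Parameter(torch.rand_like(self.weight))
        self.prune_rate = prune_rate

    def forward(self, x):
        flat = self.scores.flatten()
        k = max(int(self.prune_rate * flat.numel()), 1)
        threshold = flat.kthvalue(k).values                 # drop the k lowest-scored weights
        hard_mask = (self.scores > threshold).float()
        mask = hard_mask + self.scores - self.scores.detach()  # straight-through estimator
        return F.linear(x, self.weight * mask, self.bias)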
Compatible with other methods. As we explore the original OOD detection capability of the well-trained model, it is orthogonal and compatible with those promising methods that equip the given model with better detection ability. To be specific, through our proposed methods, we reveal the overlaid OOD detection capability by tuning the original model toward its intermediate training stage.
The discriminative feature learned at that stage can be utilized by different scoring functions <cit.>, like ODIN <cit.> adopted in Figure <ref>. For those methods <cit.> utilizing the auxiliary outliers to regularize the model, our finetuned model obtained by UM and UMAP can also serve as their starting point or adjustment. As our method does not require any auxiliary outlier data to be involved in training, adjusting the model using ID data during its developing phase is practical.
§ EXPERIMENTS
In this section, we present the performance comparison of the proposed method in the OOD detection scenario. Specifically, we verify the effectiveness of our UM and UMAP with two mainstreams of OOD detection approaches: (i) fundamental scoring function methods; (ii) outlier exposure methods involving auxiliary samples. To better understand our proposed method, we further conduct various explorations on the ablation study and provide the corresponding discussion on each sub-aspect considered in our work. More details and additional results are presented in Appendix <ref>.
§.§ Experimental Setups
Datasets. Following the common benchmarks used in previous work <cit.>, we adopt CIFAR-10 and CIFAR-100 <cit.> as our major ID datasets, and we also adopt ImageNet <cit.> for performance exploration. We use a series of different image datasets as the OOD datasets, e.g., Textures <cit.>, Places365 <cit.>, SUN <cit.>, LSUN <cit.>, iNaturalist <cit.> and SVHN <cit.>. We also use the other ID dataset as an OOD dataset when training on a specific ID dataset, given that none of them shares the same classes; e.g., we treat CIFAR-100 as the OOD dataset when training on CIFAR-10 for comparison. We utilize the ImageNet-1k <cit.> training set as the auxiliary dataset for all of our experiments about fine-tuning with auxiliary outliers (e.g., OE/Energy/POEM), which is detailed in Appendix <ref>. This choice follows previous literature <cit.> that considers the dataset's availability and the absence of any overlap with the ID datasets.
Evaluation metrics. We employ the following three common metrics to evaluate the performance of OOD detection: (i) Area Under the Receiver Operating Characteristic curve (AUROC) <cit.> can be interpreted as the probability for a positive sample to have a higher discriminating score than a negative sample <cit.>; (ii) Area Under the Precision-Recall curve (AUPR) <cit.> is an ideal metric to adjust the extreme difference between positive and negative base rates; (iii) False Positive Rate (FPR) at 95% True Positive Rate (TPR) <cit.> indicates the probability for a negative sample to be misclassified as positive when the true positive rate is at 95%. We also include in-distribution testing accuracy (ID-ACC) to reflect the preservation level of the performance for the original classification task on ID data.
OOD detection baselines. We compare the proposed method with several competitive baselines in the two directions. Specifically, we adopt Maximum Softmax Probability (MSP) <cit.>, ODIN <cit.>, Mahalanobis score <cit.>, and Energy score <cit.> as scoring function baselines; We adopt OE <cit.>, Energy-bounded learning <cit.>, and POEM <cit.> as baselines with outliers. For all scoring function methods, we assume the accessibility of well-trained models. For all methods involving outliers, we constrain all major experiments to a finetuning scenario, which is more practical in real cases. Different from training a dual-task model at the very beginning, equipping deployed models with OOD detection ability is a much more common circumstance, considering the millions of existing deep learning systems. We leave more implementation details in Appendix <ref>.
§.§ Performance Comparison
In this part, we present the performance comparison with some representative baseline methods to demonstrate the effectiveness of our UM and UMAP.
In each category of Table <ref>, we choose one with the best detection performance to adopt UM or UMAP and check the three evaluation metrics of OOD detection and the ID-ACC.
In Table <ref>, we summarize the results using different methods. For the scoring-based methods, our UM can further improve the overall detection performance by alleviating the memorization of atypical ID data, while the ID-ACC stays comparable with the baseline. For the more complex CIFAR-100 dataset, our UMAP can be adopted as a practical way to empower the detection performance while avoiding a severe impact on the original performance on ID data. As for the methods of the second category (i.e., involving auxiliary outliers 𝒟_aux sampled from ImageNet), since we consider a practical workflow, i.e., fine-tuning the given model, OE achieves the best performance on the task. Due to their special optimization characteristics, Energy (w. 𝒟_aux) and POEM focus more on the energy loss for differentiating OOD data while not preserving ID-ACC well. Without sacrificing much performance on ID data, OE with our UM can still achieve better detection performance. In Table <ref>, the fine-grained detection performance on each OOD test set demonstrates the general effectiveness of UM and UMAP. Note that Mahalanobis can sometimes achieve the best performance on a specific OOD test set (e.g., Textures), probably because Mahalanobis is prone to overfitting on texture features during fine-tuning with Textures. In contrast, according to Table <ref>, Mahalanobis achieves the worst results on the other five datasets. We leave more results (e.g., the complete comparison in Table <ref>; more fine-grained results in Tables <ref> and <ref>; another model structure in Tables <ref>, <ref> and <ref>) to the Appendix, which verifies the significant improvement (up to an 18% reduction in averaged FPR95) across various setups and also on a large-scale ID dataset (i.e., ImageNet <cit.> in Table <ref>).
§.§ Ablation and Further Analysis
In this part, we conduct further explorations and analysis to provide a thorough understanding of our UM and UMAP. Moreover, we also provide additional experimental results about further explorations on OOD detection in Appendix <ref>.
Practicality of the considered setting and the implementation choice. Following the previous work <cit.>, we consider the same setting that starts from a given well-trained model in our major explorations, which is practical but can be extended to another implementation choice, i.e., retraining the whole model. In Figure <ref>, we show the effectiveness of UM/UMAP under different choices. It is worth noting that UM with fine-tuning has shown the advantage of being cost-effective in convergence compared with training from scratch; we leave more discussion and comparison to Appendix <ref>.
Specificity and applicability of the excavated OOD discriminative capability. As mentioned before, the intrinsic OOD discriminative capability is distinguishable from conventional overfitting. We empirically compare UM/UMAP with dropout (DR), weight decay (WD), and early stopping in Figure <ref>; UM gains a lower FPR95 from the newly designed forgetting objective. In Figure <ref>, we present the applicability of the excavated OOD detection capability using different score functions, which implies that the resulting model stage better meets the requirement of uncertainty estimation.
Effects of the mask on mining atypical samples. In Figure <ref>, we compare UM with different mask ratios for mining the atypical samples, seeking the intermediate model stage that wrongly classifies them. The results show that knocking off a reasonably small fraction of the original model (e.g., mask ratios from 0.995 to 0.97) can help us achieve the target. A more detailed analysis of the mask ratio and a discussion about the underlying intuition of atypical mining are provided in Appendixes <ref> and <ref>.
Exploration on UMAP and vanilla model pruning. Although the large constraint on training loss can help reveal the OOD detection performance, the ID-ACC may be undermined under such circumstances. To mitigate this issue, we further adopt pruning in UMAP to learn a mask instead of tuning the model parameters directly. In Figure <ref>, we explore various prune rates p and demonstrate their effectiveness. Specifically, our UMAP can achieve a lower FPR95 than vanilla pruning with the original objective.
The prune rate can be selected from a wide range (e.g., p ∈ [0.3, 0.9]) to guarantee a fast convergence and effectiveness. We also provide additional discussion on UMAP in Appendix <ref>.
Sample visualization of the atypical samples identified by our mask.
In Figure <ref>, we visualize the misclassified samples using the ImageNet <cit.> dataset with the pre-trained model by adopting different mask ratios. We can find that masking the model constructs the parameter discrepancy, which helps us to identify some ID samples with atypical semantic information (e.g., those samples in the bottom line compared with the above in each class). It demonstrates the rationality of our intuition to adopt masking. We leave more visualization results in Appendix <ref>.
Theoretical insights on ID data property. Similar to prior works <cit.>, here we present the major results based on the sample complexity analysis adopted in POEM <cit.>. Due to limited space, please refer to Appendix <ref> for the complete analysis and Figure <ref> for more conceptual understanding.
Given a simple Gaussian mixture model in binary classification with the hypothesis class ℋ=sign(θ^Tx), θ∈ℝ^d, there exist constants α, δ^* and ϵ such that
μ^Tθ^*_n_1,n_2 / (σ||θ^*_n_1,n_2||) ≥ [||μ||^2 - σ^1/2||μ||^3/2 - σ^2(|α-δ^*|+ϵ)/2] / [2√(σ^2/n(d+1/σ) + ||μ||^2)].
Since FPR(θ^*_n_1,n_2)=erf(μ^Tθ^*_n_1,n_2/(σ||θ^*_n_1,n_2||)) is monotonically decreasing, and the lower bound of μ^Tθ^*_n_1,n_2/(σ||θ^*_n_1,n_2||) increases as the constraint |α-δ^*| (which corresponds to the illustrated distance from the outlier boundary in the right-most panel of Figure <ref>) decreases in our methods, the upper bound of FPR(θ^*_n_1,n_2) decreases. One insight is that learning more atypical ID samples requires more high-quality auxiliary outliers (near the ID data) to shape the OOD detection capability.
Additional experimental results of explorations.
Except for the major performance comparisons and the previous ablations, we also provide further discussion and analysis from different views in Appendix <ref>, including the practicality of the considered setting, the effects of the mask on mining atypical samples, discussion of UMAP with vanilla pruning, additional comparisons with more advanced methods and completed results of our proposed UM and UMAP.
§ RELATED WORK
OOD Detection without auxiliary data. <cit.> formally shed light on out-of-distribution detection, proposing to use the softmax prediction probability as a baseline, which was later demonstrated to be unsuitable for OOD detection <cit.>. Subsequent works <cit.> keep focusing on designing post-hoc metrics to distinguish ID samples from OOD samples: ODIN <cit.> introduces small perturbations into the input images to facilitate the separation of softmax scores, the Mahalanobis distance-based confidence score <cit.> exploits the feature space by fitting class-conditional Gaussian distributions, and the energy-based score <cit.> aligns better with the probability density. Besides directly designing new score functions, many other works enhance OOD detection from various aspects; e.g., LogitNorm <cit.> produces confidence scores by training with a constant vector norm on the logits, and DICE <cit.> reduces the variance of the output distribution by leveraging model sparsification.
OOD Detection with auxiliary data. Another promising direction toward OOD detection involves auxiliary outliers for model regularization. On the one hand, some works generate virtual outliers: <cit.> uses generative adversarial networks to generate boundary samples, and VOS <cit.> regularizes the decision boundary by adaptively sampling virtual outliers from the low-likelihood region. On the other hand, other works exploit information from natural outliers: outlier exposure was introduced by <cit.>, given that diverse data are available in enormous quantities; <cit.> train an additional "head" and maximize the discrepancy of the decision boundaries of the two heads to detect OOD samples; energy-bounded learning <cit.> fine-tunes the neural network to widen the energy gap by adding an energy loss term to the objective. Some other works also highlight the sampling strategy: ATOM <cit.> greedily utilizes informative auxiliary data to tighten the decision boundary for OOD detection, and POEM <cit.> adopts Thompson sampling to contour the decision boundary precisely. The performance of training with outliers is usually superior to that without outliers, as shown in many other works <cit.>.
§ CONCLUSION
In this work, we explore the intrinsic OOD discriminative capability of a well-trained model from a unique data-level attribution. Without involving any auxiliary outliers in training, we reveal the inconsistent trend between minimizing original training loss and gaining OOD detection capability. We further identify the potential attribution to be the memorization on atypical samples. To excavate the overlaid capability, we propose the novel Unleashing Mask (UM) and its practical variant UMAP. Through this, we construct model-level discrepancy that figures out the memorized atypical samples and utilizes the constrained gradient ascent to encourage forgetting. It better utilizes the well-trained given model via backtracking or sub-structure pruning. We hope our work could provide new insights for revisiting the model development in OOD detection, and draw more attention toward the data-level attribution. Future work can be extended to a more systematical ID/OOD data investigation with other topics like data pruning or few-shot finetuning.
§ ACKNOWLEDGEMENTS
JNZ and BH were supported by NSFC Young Scientists Fund No. 62006202, Guangdong Basic and Applied Basic Research Foundation No. 2022A1515011652, CAAI-Huawei MindSpore Open Fund, and HKBU CSD Departmental Incentive Grant. JCY was supported by the National Key R&D Program of China (No. 2022ZD0160703), STCSM (No. 22511106101, No. 22511105700, No. 21DZ1100100), 111 plan (No. BP0719010). JLX was supported by RGC grants 12202221 and C2004-21GF.
§ APPENDIX
§ REPRODUCIBILITY STATEMENT
We provide the link of our source codes to ensure the reproducibility of our experimental results: <https://github.com/tmlr-group/Unleashing-Mask>. Below we summarize critical aspects to facilitate reproducible results:
* Datasets. The datasets we used are all publicly accessible, which is introduced in Section <ref>. For methods involving auxiliary outliers, we strictly follow previous works <cit.> to avoid overlap between the auxiliary dataset (ImageNet-1k) <cit.> and any other OOD datasets.
* Assumption. We set our experiments to a post-hoc scenario <cit.> where a well-trained model is available, and some parts of training samples are also available for subsequent fine-tuning <cit.>.
* Environment. All experiments are conducted with multiple runs on NVIDIA Tesla V100-SXM2-32GB GPUs with Python 3.6 and PyTorch 1.8.
§ DETAILS ABOUT CONSIDERED BASELINES AND METRICS
In this section, we provide the details about the baselines for the scoring functions and fine-tuning with auxiliary outliers, as well as the corresponding hyper-parameters and other related metrics that are considered in our work.
Maximum Softmax Probability (MSP). <cit.> proposes to use maximum softmax probability to discriminate ID and OOD samples. The score is defined as follows,
S_MSP(x; f) = max_c P(y = c | x; f) = max_c softmax_c(f(x))
where f represents the given well-trained model and c is one of the classes 𝒴={1,…, C}. The larger softmax score indicates the larger probability for a sample to be ID data, reflecting the model's confidence on the sample.
ODIN. <cit.> designed the ODIN score, leveraging the temperature scaling and tiny perturbations to widen the gap between the distributions of ID and OOD samples. The ODIN score is defined as follows,
S_ODIN(x; f) = max_c P(y = c | x̃; f) = max_c softmax_c(f(x̃)/T)
where x̃ represents the perturbed samples (controled by ϵ), T represents the temperature. For fair comparison, we adopt the suggested hyperparameters <cit.>: ϵ = 1.4× 10^-3, T = 1.0 × 10^4.
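A minimal sketch of ODIN with these hyperparameters is given below, assuming a differentiable PyTorch classifier f; preprocessing details such as input normalization are omitted, and the function name is illustrative.

import torch
import torch.nn.functional as F

def odin_score(f, x, T=1.0e4, eps=1.4e-3):
    # Perturb the input in the direction that increases the temperature-scaled
    # maximum softmax probability, then score the perturbed input with its MSP.
    x = x.clone().requires_grad_(True)
    logits = f(x) / T
    # CE to the predicted label equals -log softmax of the predicted class.
    loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    loss.backward()
    x_tilde = (x - eps * x.grad.sign()).detach()
    with torch.no_grad():
        return F.softmax(f(x_tilde) / T, dim=-1).max(dim=-1).values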
Mahalanobis. <cit.> introduces a Mahalanobis distance-based confidence score, exploiting the feature space of the neural networks by inspecting the class conditional Gaussian distributions. The Mahalanobis distance score is defined as follows,
S_Mahalanobis(x; f) = max_c - (f(x) - μ̂_c)^T Σ̂^-1(f(x) - μ̂_c)
where μ̂_c represents the estimated mean of multivariate Gaussian distribution of class c, Σ̂ represents the estimated tied covariance of the C class-conditional Gaussian distributions.
Energy. <cit.> proposes to use the Energy of the predicted logits to distinguish the ID and OOD samples. The Energy score is defined as follows,
S_Energy(x; f) = -T log∑_c = 1^C e^f(x)_c / T
where T represents the temperature parameter. As theoretically illustrated in <cit.>, a lower Energy score indicates a higher probability for a sample to be ID. Following <cit.>, we fix the T to 1.0 throughout all experiments.
Outlier Exposure (OE). <cit.> initiates a promising approach towards OOD detection by involving outliers to force apart the distributions of ID and OOD samples. In the experiments, we use the cross-entropy from f(x_out) to the uniform distribution as ℒ_OE <cit.>,
ℒ_f = 𝔼_𝒟_in[ℓ_CE(f(x),y)] + λ𝔼_𝒟^s_out[log∑_c = 1^C e^f(x)_c - 1/C∑_c = 1^C f(x)_c]
Energy (w. 𝒟_aux). In addition to using the Energy as a post-hoc score to distinguish ID and OOD samples, <cit.> proposes an energy-bounded learning objective to further separate the two distributions, defined as follows,
ℒ_OE = 𝔼_𝒟^s_in(max(0, S_Energy(x,f) - m_in))^2 + 𝔼_𝒟^s_out(max(0, m_out - S_Energy(x,f)))^2
We keep the thresholds the same as in <cit.>: m_in = -25.0, m_out = -7.0.
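A minimal sketch of this energy-bounded fine-tuning loss is shown below; the weighting factor lam on the regularization term is an illustrative assumption.

import torch
import torch.nn.functional as F

def energy(logits, T=1.0):
    # Free energy E(x; f) = -T * logsumexp(f(x)/T); lower values indicate ID.
    return -T * torch.logsumexp(logits / T, dim=-1)

def energy_bounded_loss(f, x_in, y_in, x_out, m_in=-25.0, m_out=-7.0, lam=0.1):
    # CE on ID data plus squared hinge penalties pushing ID energies below m_in
    # and auxiliary-outlier energies above m_out.
    logits_in, logits_out = f(x_in), f(x_out)
    ce = F.cross_entropy(logits_in, y_in)
    reg_in = F.relu(energy(logits_in) - m_in).pow(2).mean()
    reg_out = F.relu(m_out - energy(logits_out)).pow(2).mean()
    return ce + lam * (reg_in + reg_out)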
POEM. <cit.> explores a Thompson sampling strategy <cit.> to make the most of outliers for learning a tight decision boundary. Given that POEM is orthogonal to other OE methods, we use Energy (w. 𝒟_aux) as its backbone, which is the same as Eq. (<ref>) in <cit.>. The details of Thompson sampling can be found in <cit.>.
FPR and TPR. Suppose we have a binary classification task (in this paper, predicting whether an image is an ID or OOD sample). There are two possible outputs: a positive result (the model predicts an image to be an ID sample) and a negative result (the model predicts an image to be an OOD sample). Since we have two possible labels and two possible outputs, we can form a confusion matrix with four outcomes: true positives (TP, ID predicted as ID), false negatives (FN, ID predicted as OOD), false positives (FP, OOD predicted as ID), and true negatives (TN, OOD predicted as OOD).
The false positive rate (FPR) is calculated as:
FPR = FP / (FP + TN).
The true positive rate (TPR) is calculated as:
TPR = TP / (TP + FN).
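Combining the two quantities, the FPR-at-95%-TPR metric used throughout the paper can be computed as in the following sketch, assuming score arrays where larger values indicate ID.

import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    # Threshold chosen so that 95% of ID samples score above it (TPR = 95%),
    # then report the fraction of OOD samples that still exceed the threshold.
    threshold = np.percentile(scores_id, 5)
    return float(np.mean(np.asarray(scores_ood) >= threshold))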
Margin value. Let f(x): ℝ^d →ℝ^k be a model that outputs k logits, following previous works <cit.>, the margin value of an example (x,y) used in our Figure <ref> is defined as,
S_margin(x,y) = f(x)_y-max_j≠ yf(x)_j
§ THEORETICAL INSIGHTS ON ID DATA PROPERTY
In this section, we provide a detailed discussion and theoretical analysis to explain the revealed observation and the benefits of our proposed method on ID data property. Specifically, we present the analysis based on the view of sample complexity adopted in POEM <cit.>. To better demonstrate the conceptual extension, we also provide an intuitive illustration based on a comparison with POEM's previous focus on auxiliary outlier sampling in Figure <ref> (extended version of Figure <ref>). Briefly, we focus on the ID data property which is not discussed in the previous analytical framework.
Preliminary setup and notations. As the original training task (e.g., the multi-class classification task on CIFAR-10) does not involve any outlier data, it is hard to analyze the related property for OOD detection. Here we introduce Assumption <ref> about the virtual 𝒟_aux to help complete the analytical framework.
To sum up, we consider a binary classification task here for distinguishing ID and OOD data. Following the prior works <cit.>, we assume the extracted feature approximately follows a Gaussian mixture model (GMM) with the equal class priors as 1/2𝒩(μ,σ^2ℐ)+1/2𝒩(-μ,σ^2ℐ). To be specific, 𝒟_in=𝒩(μ, σ^2ℐ) and 𝒟_aux=𝒩(-μ, σ^2ℐ). Considering the hypothesis class as ℋ=sign(θ^Tx), θ∈ℝ^d. The classifier outputs 1 if x∼𝒟_in and outputs -1 if x∼𝒟_aux.
First, we introduce the assumption about the virtual 𝒟_aux. Considering the representation power of deep neural networks, the assumption can be valid. It is empirically supported by the evidence in Figure <ref>, as the part of the real 𝒟_out can be viewed as the virtual 𝒟_aux. Second, to better link our method to the analysis, we introduce another assumption (i.e., Assumption <ref>) about the ID training status. It can be verified by the relative degree of distinguishability indicated by a fixed threshold in Figure <ref>, namely that the model becomes more confident on 𝒟_out along with the training.
[Virtual 𝒟_aux]
Given the well-trained model in the original classification task on the ID distribution 𝒟_in, and considering the binary classification for OOD detection, we can assume the existence of a virtual 𝒟_aux such that the OOD discriminative capacity of the current model can be viewed as resulting from learning on the virtual 𝒟_aux in the outlier exposure manner.
[ID Training Status w.r.t. Masking]
Considering the model training phase of the original multi-class classification task on the ID distribution 𝒟_in, and tuning with a specific mask ratio serving as the sample selection, we assume that the data points x∼ virtual 𝒟_aux satisfy the extended constraint based on the boundary scores -|f_outlier(x)| defined in POEM <cit.>: ∑^n_i=1|f_outlier(x_i)| ≤ (|α-δ^*|+ϵ)n, where f_outlier is a function parameterized by some unknown ground-truth weights that maps the high-dimensional input x into a scalar. Generally, the constraint |α-δ^*|, which results from the masked ID data, represents the discrepancy between the virtual 𝒟_aux and the true 𝒟_out.
Given the above, we can naturally get the following extended lemma based on that adopted in POEM <cit.>.
Assume the data points x∼ virtual 𝒟_aux satisfy the following constraint for resulting in the following varied boundary margin: ∑_i=1^n|2x_i^Tμ|≤ n σ^2 (|α-δ^*|+ϵ).
Given the Gaussian mixture model described in the previous setup, we can obtain the following expression by Bayes' rule of ℙ(outlier|x),
ℙ(outlier|x) = ℙ(x|outlier) ℙ(outlier)/ℙ(x) = 1/1+e^-1/2σ^2(d_outlier(x)-d_in(x)),
where d_outlier(x))=(x+μ)^⊤(x+μ), d_in(x)=(x-μ)^⊤(x-μ), and ℙ(outlier|x)=1/1+e^-f_outlier(x) according to its definition. Then we have:
-f_outlier = -1/2σ^2 (d_outlier(x)-d_in(x)),
-|f_outlier| = -1/2σ^2|(x-μ)^⊤(x-μ) -(x+μ)^⊤(x+μ)|=-2/σ^2|x^⊤μ|.
Therefore, we can get the constraint as: ∑_i=1^n|2x_i^Tμ|≤ n σ^2 (|α-δ^*|+ϵ).
With the previous assumption and lemma that incorporate our masking in the variable δ^*, we present the analysis as below.
Complexity analysis anchored on 𝒟_in. With the above lemma and the assumptions of virtual 𝒟_aux (as illustrated in Figure <ref>), we can derive the results to understand the benefits from the revealed observation and our UM and UMAP.
Consider the given classifier defined as θ^*_n_1,n_2=1/(n_1+n_2)(∑_i=1^n_1 x_i^1-∑_i=1^n_2 x_i^2), assume each x_i^1 is drawn i.i.d. from 𝒟_in and each x_i^2 is drawn i.i.d. from 𝒟_aux, and assume the signal/noise ratio is ||μ||/σ=r_0≫ 1 and the dimensionality/sample size ratio is d/n=r_1, as well as the existence of some constant α<1. By decomposition, we can rewrite θ^*_n_1,n_2=μ+n_1/(n_1+n_2)θ_1+n_2/(n_1+n_2)θ_2 with the following θ_1 and θ_2:
θ_1 = 1/n_1(∑_i=1^n_1x_i^1)-μ, θ_2 = 1/n_2(-∑_i=1^n_2x_i^2)-μ,
Since θ_1∼𝒩(0, σ^2/n_1ℐ), we have that ||θ_1||^2∼σ^2/n_1𝒳_d^2 and μ^Tθ_1/||μ||∼𝒩(0, σ^2/n_1) to form the standard concentration bounds as:
ℙ(||θ_1||^2≥σ^2/n_1(d+1/σ))≤ e^-d/8σ^2, ℙ(|μ^Tθ_1|/||μ||≥(σ||μ||)^1/2)≤ 2e^-n_1||μ||/2σ
Anchored on 𝒟_in, the distribution of θ_2 can be treated as a truncated version of the distribution of θ_1, since the x_i^2 drawn i.i.d. from the virtual 𝒟_aux are under the relative constraint with 𝒟_in. Without loss of generality, we replace n_1 with n and obtain the following inequality with a finite positive constant a:
ℙ(||θ_2||^2≥σ^2/n_1(d+1/σ))≤ ae^-d/8σ^2
According to Lemma <ref>, we have that |μ^Tθ_2|≤||μ||^2+σ^2(|α-δ^*|+ϵ)/2. Now, ||θ_1||^2≤σ^2/n(d+1/σ), ||θ_2||^2≤σ^2/n(d+1/σ), and |μ^Tθ_1|/||μ||≤(σ||μ||)^1/2 hold simultaneously, and recalling the decomposition we derive the following,
||θ^*_n_1,n_2||^2 = ||μ+n_1/n_1+n_2θ_1+n_2/n_1+n_2θ_2||^2 ≤σ^2/n(d+1/σ) + ||μ||^2,
and
|μ^Tθ^*_n_1,n_2|≥1/2(||μ||^2-σ^1/2||μ||^3/2-σ^2(|α-δ^*|+ϵ)/2).
With the above inequalities derived in Eq. (<ref>) and Eq. (<ref>), we have the following bound with probability at least 1-(1+a)e^-r_1n/8σ^2-2e^-n_1||μ||/2σ,
μ^Tθ^*_n_1,n_2 / (σ||θ^*_n_1,n_2||) ≥ [||μ||^2-σ^1/2||μ||^3/2-σ^2(|α-δ^*|+ϵ)/2] / [2√(σ^2/n(d+1/σ) + ||μ||^2)].
Since FPR(θ^*_n_1,n_2)=erf(μ^Tθ^*_n_1,n_2/(σ||θ^*_n_1,n_2||)) is monotonically decreasing, and the lower bound of μ^Tθ^*_n_1,n_2/(σ||θ^*_n_1,n_2||) increases as the constraint from the virtual 𝒟_aux changes accordingly under our UM and UMAP, the upper bound of FPR(θ^*_n_1,n_2) decreases.
From the above analysis, one insight we can draw is learning more atypical ID data may need more high-quality auxiliary outliers to shape the near-the-boundary behavior of the model, which can further enhance the OOD discriminative capability.
§ DISCUSSION ABOUT THE "CONFLICT" AGAINST PREVIOUS EMPIRICAL OBSERVATION
In this section, we address what may initially appear to be a contradiction between our observation and previous empirical studies <cit.>; in fact, there is no contradiction. This work demonstrates that, during training, there exists an intermediate stage where the model's OOD detection performance is superior to that of the final stage, even though the model has not yet achieved its best ID-ACC. Some previous studies <cit.> suggest that a good closed-set classifier tends to have higher OOD detection performance, which may seem to contradict our claim. However, this is not the case, and we provide the following explanations.
First, the previous empirical observation <cit.> of a high correlation between a good closed-set classifier (e.g., high ID-ACC in <cit.>) and OOD detection performance is based on inter-model comparisons, such as comparing different model architectures. This is consistent with our results in Table <ref>: even the previous model stages backtracked via our UM show similar results, confirming that a better classifier (e.g., DenseNet-101 in Table <ref>) tends to achieve better OOD detection performance.
Second, our observation is based on intra-model comparisons, which compare different training stages of a single model. Our results in Figure <ref> across various training settings confirm this observation. Additionally, Table <ref> shows that when we backtrack the model through UM, we obtain lower ID-ACC but better OOD detection performance. However, if we compare different models, DenseNet-101 with higher ID-ACC still outperforms Wide-ResNet, as previously mentioned.
To summarize, our observation provides an orthogonal view to exploring the relationship between ID-ACC and OOD detection performance. On the one hand, we attribute this observation to the model's memorization of atypical samples, as further demonstrated by our experiments (e.g., in Figure <ref>). On the other hand, we believe that this observation reveals other characteristics of a "good classifier" beyond ID-ACC, e.g., higher OOD detection capability.
§ DISCUSSION WITH CONVENTIONAL OVERFITTING
In this section, we provide a comprehensive comparison of our observation and conventional overfitting in deep learning.
First, we refer to the concept of conventional overfitting <cit.>, i.e., the model "overfits" the training data but fails to generalize and perform well on test data that is unseen during training. The common empirical reflection of overfitting is that the training error decreases while the test error increases at the same time, which enlarges the generalization gap of the model. This is empirically confirmed not to be the case in our observation, as shown in Figures <ref> and <ref>. To be specific, for the original classification task, no conventional overfitting is observed, as the test performance still improves at the later training stage, which is the general pursuit of the model development phase on the original tasks <cit.>.
Then, when we consider the OOD detection performance of the well-trained model, our unique observation concerns the inconsistency between gaining better OOD detection capability and pursuing better performance on the original classification task for the in-distribution (ID) data. It is worth noting that the training task here is not the binary classification of OOD detection, but the classification task on ID data. This falls outside the rigorous concept of conventional overfitting and, to the best of our knowledge, has received limited focus and discussion from the data-level perspective in the previous OOD detection literature <cit.>. Considering the practical scenario in which a target-level discrepancy exists, our revealed observation may encourage us to revisit the detection capability of the well-trained model.
Third, we also provide an empirical comparison with some strategies targeted for mitigating overfitting. In our experiments, for all the baseline models including that used in Figure <ref>, we have adopted those strategies <cit.> (e.g., drop-out, weight decay) to reduce overfitting.
The results are summarized in Tables <ref>, <ref>, <ref> and <ref>. According to the experiments, most conventional methods proposed to prevent overfitting show limited benefits in gaining better OOD detection performance, since they have a different underlying target from UM/UMAP. Moreover, most of them incur a larger sacrifice in the performance of the original task and may not be compatible or practical in the current general setting, i.e., starting from a well-trained model.
In contrast, our proposed UMAP can be a more practical and flexible way to restore detection performance.
Given the aforementioned concept discrepancy, we can see that "memorization of the atypical samples" is not "memorization in overfitting". Those atypical samples are empirically beneficial in improving the performance of the original classification task, as shown in Figure <ref>. However, this part of the knowledge is not necessary, and is even harmful, for the OOD detection task, as the detection performance of the model drops significantly. Based on the training and test curves in our observation, the memorization in overfitting would be expected to happen later than the final stage, at which point the test performance would drop; since we have already used some strategies to prevent overfitting, it does not occur. Intuitively, the "atypical samples" identified in our work are defined relative to the OOD detection task. The memorization of "atypical samples" indicates that the model may not be able to draw general information about the ID distribution by further learning on those atypical samples through the original classification task. Since we mainly provide an understanding of the data-level attribution for the OOD discriminative capability, further analysis from theoretical views <cit.> linking conventional overfitting with OOD detection would be an interesting future direction.
§ ADDITIONAL EXPLANATION TOWARDS MINING THE ATYPICAL SAMPLES
In this section, we provide further discussion and explanation about mining the atypical samples.
First, for identifying those atypical samples using a randomly initialized layer-wise mask <cit.> on the well-trained model, the underlying intuition is to construct a parameter-level discrepancy that mines the atypical samples. It is inspired by and based on evidence from previous literature about the learning behaviors <cit.> of deep neural networks (DNNs), sparse representation <cit.>, and model uncertainty representation (like dropout <cit.>). To be specific, atypical samples tend to be learned by DNNs later than typical samples <cit.>, and are relatively more sensitive to changes in the model parameters as the model does not generalize well on them <cit.>. With the layer-wise mask, the constructed discrepancy makes the model misclassify the atypical samples and allows estimating the loss constraint for the forgetting objective, as visualized in Figure <ref>.
Second, introducing the layer-wise mask has several advantages for achieving the staged target of mining atypical samples in our proposed method, while we would also admit that the layer-wise mask may not be an irreplaceable option or may not be optimal. On the one hand, considering that the model has been trained to approach the zero error on training data, utilizing the layer-wise mask is an integrated strategy to 1) figure out the atypical samples; and 2) obtain the loss value computed by the masked output that misclassifies them. The loss constraint is later used in the forgetting objective to fine-tune the model. On the other hand, the layer-wise mask is also compatible with the proposed UMAP to generate a flexible mask for restoring the detection capability of the original model.
More discussion and visualization using CIFAR-10 and ImageNet. Third, we also adopt the unit/weight mask <cit.> and visualize the misclassified samples in Figure <ref> (we present a similar visualization for the experiments on ImageNet <cit.> in Figure <ref>). The detected samples show that conventionally pruning the network according to weight magnitudes cannot efficiently figure out whether an image is typical or atypical, while pruning randomly can do so. Intuitively, we attribute this phenomenon to the uncertain relationship <cit.> between the magnitudes and the learned patterns. Randomly masking out weights can have a harsh influence on atypical samples, which creates a discrepancy for mining them. Further investigating the specific effect of different methods that construct the parameter-level discrepancy would be an interesting sub-topic for future work. As for the value of the CE loss, although the atypical samples tend to have high CE loss values, they are already memorized and correctly classified, as indicated by the zero training error. Only using a high CE error cannot provide the loss estimation obtained when the model does not correctly classify those samples.
§ ALGORITHMIC REALIZATION OF UM AND UMAP
In this section, we provide the detailed algorithmic realizations of our proposed Unleashing Mask (UM) (i.e., in Algorithm <ref>) and Unleashing Mask Adopt Pruning (UMAP) (i.e., in Algorithm <ref>) given the well-trained model.
In general, we seek to unleash the intrinsic detection power of the well-trained model by adjusting the given model. For the first part, we need to mine the atypical samples and estimate the loss value of misclassifying them with the current model. For the second part, we need to tune or prune the model under the loss constraint for forgetting.
To estimate the loss constraint for forgetting (i.e., ℓ̄_CE(m_δ⊙ f^*) in Eq. (<ref>) with the fixed given model f^*), we randomly knock out part of the weights according to a specific mask ratio δ. To be specific, we sample a score from a Gaussian distribution for every weight and initialize a unit matrix for every layer of the model with respect to the size of the layer. We formulate the mask m_δ according to the sampled scores: we iterate through every layer (termed l ∈θ_layers) to find the per-layer threshold given by the δ-quantile of that layer's scores, and then set to zero all entries whose corresponding scores are larger than the layer's threshold.
We multiply every layer's weights element-wise with the formulated binary matrix, as if deleting part of the weights. Then, we feed a batch of training samples to the masked model and treat the mean value of the outputs' CE loss as the loss constraint. After that, we begin to fine-tune the model's weights with the loss constraint applied to the original CE loss. In our algorithms, the number of fine-tuning epochs k denotes the epochs of fine-tuning performed after obtaining the well-trained model.
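Putting the pieces together, a compact sketch of the UM procedure is given below. It reuses the illustrative helpers masked_copy, estimate_loss_constraint, and um_loss sketched earlier, and the optimizer settings are assumptions rather than the exact configuration of Algorithm <ref>.

import torch

def unleashing_mask(model, loader, mask_ratio=0.995, epochs=20, lr=1e-3):
    # Step 1: estimate the loss constraint once from a fixed, randomly masked copy
    # of the well-trained model.
    loss_constraint = estimate_loss_constraint(masked_copy(model, mask_ratio), loader)
    # Step 2: fine-tune the model itself with the forgetting objective.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            loss = um_loss(model, x, y, loss_constraint)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model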
For UMAP, the major difference from UM is that, instead of fine-tuning the weights, we generate a popup score for every weight and force the gradients to pass through the scores. In every iteration, we formulate a binary mask according to the given prune rate p, in the same way as when estimating the loss constraint. For more details, please refer to <cit.>. In Table <ref>, we summarize the complete comparison of UM and UMAP to show their effectiveness. We also provide the performance comparison when switching the ID training data to the large-scale ImageNet, and demonstrate the effectiveness of our UM and UMAP in Table <ref>. In practice, we use SVHN as a validation OOD set to tune the mask ratio; we adopt 99.6% (the corresponding estimated loss constraint is about 0.6) in this large-scale experiment to estimate the loss constraint for forgetting. Surprisingly, we also find that loss constraints smaller than the estimated one (i.e., <0.6) can also help improve the OOD detection performance, demonstrating the general effectiveness of UM/UMAP.
§ ADDITIONAL EXPERIMENT RESULTS
In this section, we provide more experiment results from different perspectives to characterize our proposed algorithms.
§.§ Additional Setups
Training details. We conduct all major experiments on DenseNet-101 <cit.> with training epochs fixed to 100. The models are trained using stochastic gradient descent <cit.> with Nesterov momentum <cit.>. We adopt Cosine Annealing <cit.> to schedule the learning rate which begins at 0.1. We set the momentum and weight decay to be 0.9 and 10^-4 respectively throughout all experiments. The size of the mini-batch is 256 for both ID samples (during training and testing) and OOD samples (during testing). The choice of mask ratio for our UM and UMAP is detailed and further discussed in Appendix <ref>.
Model architecture. For DenseNet-101, we fix the growth rate and reduction rate to 12 and 0.5, respectively, with the bottleneck block included in the backbone <cit.>. We also explore the proposed UM on WideResNet <cit.> with depth 40 and widen factor 4, termed WRN-40-4. The batch size for both ID and OOD testing samples is 256, and the batch size of auxiliary samples is 2000. The λ in Eq. (<ref>) is 0.5 to keep the OE loss comparable to the CE loss. As for outlier sampling, we randomly retrieve 50000 samples from ImageNet-1k <cit.> for OE and Energy (w. 𝒟_aux), and 50000 samples using Thompson sampling <cit.> for POEM <cit.>.
[Figure: Learning Rate Scheduler.]
Learning rate schedules. We use 4 different learning rate schedules to demonstrate the existence of the overlaid OOD detection capability. For cosine annealing, we follow the common setups in <cit.>; for the linear schedule, the learning rate remains the same for the first one-third of the epochs, decreases linearly to one-tenth of the initial rate over the middle one-third, and decreases linearly to 1% of the initial rate over the last one-third; for the multiple decay schedule, the learning rate decreases by 10% of the initial rate (0.01) every 10% of the epochs (10 epochs); for the multiple step schedule, the learning rate decreases to 10% of the current rate every 30 epochs. All learning rate schedules used in our experiments are illustrated in Figure <ref>.
§.§ Empirical verification on typical/atypical data.
In the following Tables <ref>, <ref>, <ref>, <ref>, and <ref>, we further conduct experiments to identify the negative effect of learning on those atypical samples by comparing with a counterpart that learns only on the typical samples. The results demonstrate that the degeneration in detection performance is more likely to come from learning atypical samples.
In Table <ref>, we provide the main results for the verification using typical/atypical samples. Intuitively, we intend to separate the training dataset into a typical set and an atypical set, and train respectively on these two sets to see whether it is learning atypical samples that induce the degradation in OOD detection performance during the latter training phase. Specifically, we input the training samples through the model (DenseNet-101) of the 60th epoch and get the CE loss for selection. We provide the ACC of the generated sets on the model of the 60th epoch (ACC in the tables). The extremely low ACCs of the atypical sets show that the model of the 60th epoch can hardly predict the right label, which meets our conceptual definition of atypical samples. We then finetune the model of the 60th epoch with the generated dataset and report the OOD performance. The results show learning from only those atypical data fails to gain better detection performance than its counterpart (i.e., learning from only those typical data), although it is beneficial to improve the performance of the original multi-class classification task. The experiments provide a conceptual verification of our conjecture which links our observation and the proposed method.
§.§ Empirical Efficiency of UM and UMAP
As mentioned before, UM, which adopts fine-tuning with the proposed forgetting objective, has shown the advantage of being cost-effective compared with training from scratch. For the tuning epochs, we show in Figures <ref> and <ref> that fine-tuning using UM can converge within about 20 epochs, indicating that we can apply our UM/UMAP for far fewer than 100 epochs (compared with training from scratch) to restore the better detection performance of the original well-trained model. It is intuitively reasonable that fine-tuning with the newly designed objective benefits from the well-trained model and allows faster convergence, since the two phases consider the same task with the same training data. As for the major experiments conducted in our work, fine-tuning adopts 100 epochs for better exploring and presenting its learning dynamics for research purposes; this configuration is indicated in the training details of Section <ref>.
Here, we also provide an extra comparison to directly show the relative efficiency of our proposed UM/UMAP in the following Table <ref> and Table <ref>. The results demonstrate that UM and UMAP can efficiently restore detection performance compared with the baseline. Considering the significance of the OOD awareness for those safety-critical areas, it is worthwhile to further excavate the OOD detection capability of the deployed well-trained model using our UM and UMAP.
However, there may be a concern: since both UM/UMAP and OE-based methods require an extra fine-tuning process, why should we choose UM/UMAP instead of OE-based methods, given that OE-based methods can also achieve good OOD detection performance? The intuition of UM/UMAP is to unleash the OOD detection capability of a pre-trained model with ID data only, which is orthogonal to those OE-based methods (e.g., DOE <cit.>) that improve the OOD capability of a pre-trained model with both ID data and auxiliary data. On the one hand, OE-based methods need to sample/synthesize large auxiliary OOD datasets, while UM/UMAP only needs the ID data. On the other hand, although both require additional costs to fine-tune the model, they are orthogonal and can be coupled (as discussed in Section <ref>). To further address the concern, we conduct additional experiments (i.e., the OE-based results in Table <ref>) to validate their mutual benefit in combination. According to the results, we find that UM/UMAP with DOE achieves better performance. This is because, while OE-based methods improve OOD detection by fine-tuning with both ID data and auxiliary outliers, UM/UMAP can serve as a method (using only ID data) to encourage optimization toward a model more appropriate for OOD detection.
§.§ Fine-grained Results on OOD Data
In order to further understand the effectiveness of the proposed UM and UMAP on different OOD datasets, we report the fine-grained results of our experiments on CIFAR-10 and CIFAR-100 with 6 OOD datasets (CIFAR-10/CIFAR-100, textures, Places365, SUN, LSUN, iNaturalist).
The results on the 6 OOD datasets show the general effectiveness of the proposed UM as well as UMAP.
In Table <ref>, OE + UM outperforms all the OOD baselines and further improves the OOD performance even though the original detection performance is already good. Equipped with our proposed UM and UMAP, the baselines outperform their counterparts on most of the OOD datasets; for instance, the FPR95 decreases from 1.91 to 1.42. In Table <ref>, we also take a closer look at the results on CIFAR-100 with 6 OOD datasets. Our proposed method improves almost all competitive baselines (either the scoring functions or the finetuning with auxiliary outliers) on the 6 OOD datasets. In both w. 𝒟_aux and w.o. 𝒟_aux scenarios, Unleashing Mask significantly excavates the intrinsic OOD detection capability of the model. In addition to unleashing the excellent OOD performance, UMAP also maintains the high ID-ACC by learning a binary mask instead of directly tuning the well-trained original parameters. Due to the space limit, we separate the results on the SVHN dataset into Tables <ref> and <ref> to show the relative comparison of our UM and UMAP. The results demonstrate the general effectiveness of UM/UMAP compared with the original Energy score. Besides, we find that Mahalanobis performs dramatically well and behaves as an outlier among the post-hoc baselines when SVHN is used as the OOD set in our experiments. Our conjecture about this phenomenon is that the Mahalanobis score can perform better on such specific OOD data by inspecting the class-conditional Gaussian distributions <cit.>. Nonetheless, the proposed UM/UMAP still outstrips all the baselines on most OOD datasets under various settings on average, showing their distinguishing effectiveness and practicability.
§.§ Experiments on Different Model Structure
Following <ref>, we additionally conduct critical experiments on the WRN-40-4 <cit.> backbone to demonstrate the effectiveness of the proposed UM and UMAP. In Figure <ref>, we find that during the model training phase on ID data, there also exists an overlaid OOD detection capability that can be explored in later development. In Table <ref>, we show the comparison of multiple OOD detection baselines, evaluating the OOD performance on the different OOD datasets mentioned in Section <ref>. The results again demonstrate that our proposed method indeed excavates the intrinsic detection capability and improves the performance.
As for the fine-grained results of WRN-40-4, we report results on the 6 OOD datasets respectively. When trained on CIFAR-10, UM outstrips all the scoring-function baselines on 5 OOD datasets; the exception is Textures, on which Mahalanobis performs better, while UMAP still achieves excellent OOD performance, ranking only second to UM. When trained on CIFAR-100, UM and UMAP also outperform the baselines on most OOD datasets. The fine-grained results of WRN-40-4 further demonstrate the effectiveness of the proposed UM/UMAP on other architectures. Future extensions could also take other advanced model structures for OOD detection <cit.> into consideration.
§.§ Additional Experiments on More Advanced Post-hoc and OE-based Methods
In addition to the representative methods (e.g., MSP, Energy, OE, POEM) considered in the experiments, in Table <ref> we add more advanced post-hoc and OE-based methods <cit.> for comparison to further validate the effectiveness of the proposed UM/UMAP.
§.§ Additional Verification for Intrinsic OOD Discriminative Capability
In Section <ref>, we display the overlaid OOD detection capability on CIFAR-10 using SVHN as the OOD dataset. Here, we additionally verify the previously observed trend when training DenseNet-101 on CIFAR-100 using iNaturalist as the OOD dataset. In Figure <ref>, we trace the three evaluation metrics during training on CIFAR-100 using 4 different learning rate schedules. Consistent with the original experiment, we still use iNaturalist as the OOD dataset. For all three metrics, there exists a middle stage where the model has better OOD detection capability (FPR95 is smaller (better) in the middle stage; AUROC and AUPR are higher (better) in the middle stage). Besides, we also look into the change of OOD performance on another architecture (e.g., WRN-40-4) in Figure <ref> and Figure <ref>. In Figure <ref>, we display the curves of the three metrics of WRN-40-4 when trained on CIFAR-10 with SVHN and Textures as OOD datasets. The trend that the OOD performance first improves and then converges to worse OOD performance is again reflected. In Figure <ref>, we further provide curves of the three metrics of WRN-40-4 during training on CIFAR-100 with iNaturalist, Places365, and SUN as OOD datasets. A clearly better middle stage can still be excavated in this scenario.
§.§ Ablation on UMAP, which Adopts Pruning on UM.
We conduct various experiments to see whether pruning has an impact on Unleashing Mask itself. To be specific, we expect the pruning to learn a mask on the given model without impairing the excellent OOD performance that UM brings. Figure <ref> shows that pruning over a wide range of rates (e.g., p ∈ [0.3, 0.9]) can well maintain the effectiveness of UM while possessing a good convergence trend. For simplicity, in Figure <ref> we use Prune to indicate the original pruning approach and UMAP to indicate UM with pruning on the mask using our newly designed forgetting objective. In Figure <ref>, the solid lines represent the proposed UMAP and the dashed lines represent only pruning the well-trained model at prune rates 0.2, 0.5, and 0.8. While the model's OOD performance cannot be improved (i.e., not better than the baseline) through pruning alone, using our proposed forgetting objective as the loss constraint significantly brings out better OOD performance over a wide range of mask rates (e.g., p ∈ [0.5, 0.8]). In Figure <ref>, we intuitively reflect the effect of the estimated loss constraint induced by the initialized mask, which redirects the gradients when the loss reaches that value, whereas the loss simply approaches 0 when pruning only. In Figure <ref>, we can see that the ID-ACC for both UMAP and Prune converges to approximately the same high level (92%∼94%), and we can simply remove the learned mask to recover the original ID-ACC.
§.§ Fine-grained comparison of model weights.
We display the weights of the original model, the pruned model, and the UMAP model respectively in Figure <ref>. The histograms show that the adopted pruning algorithm tends to choose weights far from 0 for the first convolution layer, as shown in Figure <ref>. However, for almost all layers (from the 2nd to the 98th), the pruning chooses weights regardless of their values, as shown in Figure <ref>. For the fully connected layer, the pruning algorithm itself keeps its behavior from the first layer, while UMAP forces the pruning algorithm to choose weights near 0, as shown in Figure <ref>, indicating that forgetting learned atypical samples does not necessarily correspond to larger or smaller weights.
§.§ The effectiveness of UM
In Figure <ref>, we present the FPR95, AUROC, and AUPR curves during training to compare the original training and our proposed UM on ID data. We observe that training with UM consistently outperforms vanilla model training, both at the final stage and at the middle stage with the best OOD detection performance indicated by the FPR95 curve. In Figure <ref>, we also adopt different mask rates for the initialized loss constraint estimation for forgetting the atypical samples. The results show that a wide range of mask ratios (i.e., from 96% to 99%) for estimating the loss constraint used in Eq. (<ref>) gains better OOD detection performance than the baseline, indicating that the mask ratio is robust to hyper-parameter selection within a certain small range. The principal intuition behind this is our revealed observation as indicated in Figures <ref>, <ref>, and <ref>. With the guidance of the general mechanism, empirically choosing the hyper-parameter using the validation set is supportable and valuable for excavating better OOD detection capability of the model, as conducted in previous literature <cit.>.
In our experiments, we empirically determine the hyper-parameter values of our proposed UM and UMAP by examining the training loss on the masked output. For CIFAR-10 as the ID dataset, the mask ratio is 97.5% and the estimated loss constraint for forgetting is 0.10, tuned until convergence; for CIFAR-100, the mask ratio is 97% and the estimated loss constraint for forgetting is 1.20, tuned until convergence. To choose the parameters of the estimated loss constraint, we use the TinyImageNet <cit.> dataset as the validation set, which is not seen during training and is not considered in our evaluation of OOD detection performance. Since the core intuition behind our method is to restore the OOD detection performance starting from the well-trained model stage, forgetting a relatively small portion (empirically around a 97% mask ratio) of atypical samples is beneficial on the two commonly benchmarked datasets. In addition, we also verify the effectiveness of UM and UMAP on the large-scale ImageNet as the ID dataset in Table <ref> and Appendix <ref>, where the loss constraint for forgetting is 0.6, estimated using a mask ratio of 99.6%. To find the optimal parameters for tuning, more advanced searching techniques such as AutoML, or validation designs based on the important observation in our work, may be employed in the future. Given the safety concerns, it is affordable and reasonable to gain significant OOD detection performance improvement by investing extra computing resources.
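For illustration only, the sketch below shows one way the forgetting objective with the estimated loss constraint could be implemented, assuming a flooding-style form that reverses the gradient direction once the CE loss reaches the constraint; the precise objective is the one given in Eq. (<ref>), and the constants follow the values reported above.

import torch.nn.functional as F

def um_forgetting_loss(logits, labels, loss_constraint):
    """Flooding-style instantiation of the forgetting objective (an assumption for
    illustration): once the CE loss reaches the estimated constraint, the gradient
    direction is reversed so the model forgets atypical memorization instead of
    further fitting it. loss_constraint is 0.10 for CIFAR-10 and 1.20 for CIFAR-100."""
    ce = F.cross_entropy(logits, labels)
    return (ce - loss_constraint).abs() + loss_constraint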
§ SUMMARIZATION OF THE PROPOSED UM/UMAP'S ADVANTAGES
Regarding the advantages of the proposed method, we summarize them as follows:
* Novelty. The proposed UM/UMAP is the first to emphasize the intrinsic OOD detection capability of a given well-trained model during its training phase, better leveraging what has been learned and drawing new insight into the relationship between the OOD detection and the original classification task. This work also shows that ID data is important for a well-trained model's OOD discriminative capability.
* Simplicity. Based on the empirical insight that atypical semantics may impair the OOD detection capability, we introduce an easy-to-adopt forgetting objective to weaken the influence of atypical samples on the OOD detection performance. Besides, to maintain the ID performance, we propose to learn a mask instead of tuning the model directly. Such a design makes UM/UMAP easy to follow and a good starting point for further adjustments. They build on extensive empirical analysis of how to unleash the optimal OOD detection capacity of a given model. Moreover, this work explores a perspective orthogonal to previous methods and shows consistent improvement when combined with them in a range of experiments.
* Compatibility & Effectiveness. UM/UMAP is orthogonal to other competitive methods and can be flexibly combined with them. Extensive experiments demonstrate that UM/UMAP can consistently improve the baselines on average on both benchmarked datasets and large-scale ImageNet (e.g., Tables <ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>; Figures <ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>).
|
http://arxiv.org/abs/2306.01979v1
|
20230603014304
|
Thickness-dependent, tunable anomalous Hall effect in hydrogen-reduced PdCoO$_2$ thin films
|
[
"Gaurab Rimal",
"Yiting Liu",
"Matthew Brahlek",
"Seongshik Oh"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics & Astronomy, Rutgers, The State University of New Jersey, Piscataway, New Jersey 08854, USA
Department of Physics & Astronomy, Rutgers, The State University of New Jersey, Piscataway, New Jersey 08854, USA
State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062, China
Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
[email protected]
Department of Physics & Astronomy, Rutgers, The State University of New Jersey, Piscataway, New Jersey 08854, USA
Center for Quantum Materials Synthesis, Rutgers, The State University of New Jersey, Piscataway, New Jersey 08854, USA
It was recently reported that hydrogen-reduced PdCoO_2 films exhibit strong perpendicular magnetic anisotropy (PMA) with a sign-tunable anomalous Hall effect (AHE). Here, we provide an extensive thickness-dependent study of this system and show that the electronic and magnetic properties depend strongly on the thickness and annealing conditions. Below a critical thickness of 25 nm, the AHE shows clear PMA with hysteresis, and its sign changes from positive to negative, and back to positive, as the annealing temperature increases from 100 ^∘C to 400 ^∘C. Beyond the critical thickness, both the PMA and the AHE hysteresis disappear, and the AHE sign remains positive regardless of the annealing parameters. Our results show that PMA may play a large role in the AHE sign-tunability and that, below the critical thickness, competition between different AHE mechanisms drives this sign change.
Thickness-dependent, tunable anomalous Hall effect in hydrogen-reduced PdCoO_2 thin films
Seongshik Oh
=========================================================================================
§ INTRODUCTION
Ferromagnetic materials exhibit the anomalous Hall effect (AHE), an extra contribution added to the ordinary Hall effect that is usually proportional to their magnetization (and thus frequently hysteretic). One of the common characteristics of any AHE is its well-defined sign: depending on how the magnetization couples with the mobile carriers, the AHE can be either positive or negative with respect to the applied external magnetic field. In clean materials, the net Berry curvature, integrated at the Fermi level of the material's band structure, determines the sign of the AHE. However, in materials with structural disorder or impurities, both the magnitude and sign of the AHE can be dominated by extrinsic factors such as defect scattering. Regardless, in most ferromagnetic materials the AHE sign is not switchable, but certain materials such as SrRuO_3 <cit.>, chiral magnets <cit.>, and magnetic topological insulators <cit.> can exhibit AHE sign reversal depending on external stimulation.
The non-magnetic, metallic delafossites PtCoO_2 and PdCoO_2 are the most conducting oxides, with PdCoO_2 having the longest mean free path of 20 μm among oxides <cit.>. Although PdCoO_2 is not magnetic, it exhibits itinerant surface magnetism <cit.>, and recent work on hydrogenated PdCoO_2 reported the emergence of bulk magnetism with strong perpendicular magnetic anisotropy (PMA). It was shown that these hydrogenated films exhibit sign change of AHE with hydrogenation parameters <cit.>, likely due to structural changes when PdCoO_2 is reduced. Here, we expand on this previous study with comprehensive experiments on over 80 hydrogenated PdCoO_2 samples of various thickness and find that as the film gets thicker, PMA gradually turns into in-plane magnetic anisotropy (IMA) and the sign-tunability of AHE also disappears.
§ EXPERIMENTAL DETAILS
We grew PdCoO_2 films on Al_2O_3 (0001) substrates using oxygen plasma assisted molecular beam epitaxy (MBE) <cit.>. The films were grown at 300 ^∘C in a background pressure of 4 × 10^-6 Torr plasma oxygen in a layer-by-layer fashion. Plasma is generated using a 13.6 MHz RF source at a power of 450 W. After the growth, hydrogenation is carried out at ambient pressure by annealing in a 10% H_2/90% Ar mixture gas at various temperatures for different duration. All thickness values correspond to the nominal PdCoO_2 thickness. Transport measurements were carried out using standard DC van der Pauw technique.
§ RESULTS AND DISCUSSION
In Figure 1, we demonstrate the effects of anneal temperature and film thickness on the magnetic and transport properties of the PdCoO_2 films. Previously, annealing time (t_A) and temperature (T_A) were used to switch the AHE sign <cit.>. Here we use T_A as the main tuning parameter. Except for the thinnest film (3.5 nm), no sign change occurs for T_A < 100 ^∘C. With slightly higher T_A (∼100 - 200 ^∘C), AHE with positive sign appears (positive AHE is defined as R_xy^sat > 0 for H> 0). With higher T_A, positive AHE abruptly switches to negative in films with thickness up to 25 nm. At even higher anneal temperatures, the AHE sign reverses back to positive. Robust PMA with well-defined coercive fields is present in films with thickness up to 25 nm. As the film gets thicker than 25 nm, the AHE shows only IMA behavior and no switching is observed. The switching behavior is generally reproducible across many samples and anneal parameters. For example, in the case of the 9 nm thick film shown in Figure <ref>, annealing at 100 ^∘C leads to AHE with positive sign and robust PMA. When this film is annealed at 200 ^∘C, the AHE sign becomes negative, the shape of the curve changes and the coercive field becomes larger. After annealing at 300 ^∘C, the R_xy value shrinks significantly, and although the sign is positive, hump-like features are present suggesting that there are two competing channels with different AHE signs <cit.>. When annealed at 400 ^∘C, the AHE sign becomes positive, and the coercivity is also enhanced compared to annealing at 100 ^∘C.
Rutherford backscattering, x-ray photoelectron spectroscopy, and x-ray diffraction show reduction of the films to a Pd-Co alloy state after hydrogenation <cit.>, which explains the emergence of the PMA and AHE behavior. However, unlike traditional PdCo alloys, hydrogenation of PdCoO_2 is unique due to intermixing of Pd and Co along with trace amounts of oxygen and hydrogen, and a novel layered structure can also be realized <cit.>. This may be why the PMA is present only in thinner films. In general, the strength and direction of magnetic anisotropy are determined by many factors such as the crystalline environment (magnetocrystalline anisotropy), strain (magnetostriction), interfaces, and shape <cit.>. In the case of Pd/Co multilayers, layer thickness and interfaces have been found to play critical roles in the development of PMA. A recent study of PdCo alloys found that the surface anisotropy contribution is dominant in thinner films, while shape and magnetoelastic anisotropy dominate thicker films, which can be used to switch between IMA and PMA <cit.>. In a similar manner, the differences in the hysteretic behavior across thickness in our films may be due to similar nanostructural changes resulting from the annealing process.
These results suggest that there may be a continuous transformation of AHE across T_A: initially a positive AHE appears, changes to negative with higher T_A, then changes back to positive with even higher T_A. There are some minor changes in the overall squareness of the curves, but the PMA is preserved. The fact that sign change is limited to thinner films suggests a potentially large role played by interfaces. To understand the sign change, we may consider two competing channels corresponding to the two signs. Initially a positive channel is largely responsible for the positive AHE. After extended annealing, the negative channel dominates and results in a net negative AHE. Finally, with more annealing, the same (or different) positive channel results in positive AHE. To understand this, we discuss how AHE varies across measurement temperatures for the different annealing conditions, i.e. across each of the different AHE signs.
In Figure <ref>(a), we show the variation of the positive and negative AHE across measurement temperatures in 9 nm thick samples. At the lowest anneal temperature of 100 ^∘C, the AHE is positive and |ρ_xy| increases with temperature. At T_A = 200 ^∘C, the AHE is negative, but |ρ_xy| decreases with temperature, which is opposite to the previous case. With an even higher anneal temperature of 400 ^∘C, the AHE switches to positive and |ρ_xy| increases with temperature. When the ρ_xy values for each T_A are compared to the lowest measured temperature, as shown in Figure <ref>(b), we find that at a field of 1 T, where ρ_xy is saturated, samples at all anneal temperatures show a similar response. This quantity, which we define as Δρ_xy^1T = ρ_xy^1T(T) - ρ_xy^1T(10 K), shows that although the overall sign of the AHE differs between the different anneal conditions, the temperature-dependent contribution to the AHE remains little changed. This suggests that there are at least two distinct mechanisms responsible for the observed AHE. One is the temperature-independent part, whose sign depends strongly on the annealing parameters, and the other is the temperature-dependent contribution, which always remains positive, grows with temperature, and depends little on the annealing conditions. It should be noted that the Curie temperature (T_C) of these films is beyond room temperature <cit.>, and the appearance of two contributions may be a result of two different scattering channels.
Sign change of the AHE was previously reported in a few materials, including elemental metals <cit.> and multilayers <cit.>, but its origin is not always clear. Sometimes, such a sign change can occur when the dominant contribution to the AHE changes between extrinsic and intrinsic mechanisms <cit.>. Disorder can also strongly influence carrier scattering and lead to changes in the AHE <cit.>, with examples including the metallic oxide SrRuO_3 <cit.>, intermetallic alloys such as MnGa <cit.>, and magnetic topological insulators <cit.>. Considering that the bands in hydrogen-reduced PdCoO_2 are derived from Pd and Co states, and since the film structures are intermediate between single-crystalline and polycrystalline, the sign change of the AHE is likely a combined result of both extrinsic (scattering) and intrinsic (Berry curvature) mechanisms. The resistivity of our films lies close to the region where the crossover between extrinsic and intrinsic mechanisms occurs <cit.>, providing another hint that the two competing channels may be due to different mechanisms. Also, the fact that thickness plays a role shows that the interfaces may make an important contribution; as evidenced from Figure <ref>, thickness and PMA may also be correlated, and the sign change may depend on the PMA. Yet, considering the structural complexity of these films, it will be difficult to pinpoint the exact origin of the sign change. Regardless, the very fact that the desired AHE sign can be reproducibly achieved with a simple annealing procedure suggests that this effect may be harnessed in this material for further studies on AHE and/or applications. Similarly, it would be interesting to examine the extrapolation of these results to similar systems.
The longitudinal resistivity (ρ_xx) values at different T_A for the hydrogen-reduced films are shown in Figure <ref>(a). In general, the hydrogenated films are more resistive than pristine PdCoO_2 <cit.>. It is also notable that the residual resistivity ratio (RRR), defined as RRR = ρ(295 K) / ρ(10 K), is substantially reduced after hydrogenation. Quantitatively, while the RRR of pristine PdCoO_2 thin films grows with thickness and reaches about 16 at 100 nm, it is relatively constant at about 1.2 ± 0.1 for all the hydrogenated films, regardless of thickness. Both the increased resistivity and the reduced RRR after hydrogenation show that the hydrogenated films have a substantially higher level of defect scattering than the pristine films. Furthermore, as shown in Figure <ref>(b), ρ_xx is usually higher for the negative AHE, suggesting that the additional scattering channel behind the higher resistivity also drives the negative AHE. Figure <ref>(c,d) compiles the relationship between thickness, anneal temperature, and AHE, providing a guide to tune the sign and magnitude of the AHE in different films by varying the anneal parameter T_A. It is notable that the negative AHE shows up only within a narrow region of the phase space.
§ CONCLUSION
In summary, we showed that below a critical thickness of about 25 nm, magnetic and electronic properties of the hydrogen-reduced PdCoO_2 films can be effectively tuned with the annealing temperature. Then, beyond the critical film thickness, PMA with sign tunable AHE changes to IMA with a fixed sign of AHE. Competitions between different scattering mechanisms, coupled with out-of-plane anisotropy, may lead to the AHE sign change.
§ ACKNOWLEDGEMENTS
This work is supported by National Science Foundation (NSF) Grant No. DMR2004125 and Army Research Office (ARO) Grant No. W911NF2010108. We thank Shriram Ramanathan for helpful discussions.
elsarticle-num
|
http://arxiv.org/abs/2306.08460v1
|
20230614120428
|
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation
|
[
"Ren Wang",
"Haoliang Sun",
"Qi Wei",
"Xiushan Nie",
"Yuling Ma",
"Yilong Yin"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation
Ren Wang, Haoliang Sun^∗, Qi Wei, Xiushan Nie, Yuling Ma, Yilong Yin^∗
Ren Wang, Haoliang Sun, Qi Wei, Yilong Yin are with the School of Software, Shandong University, Jinan, China. E-mail: {xxlifelover, hlsun.cn, 1998v7}@gmail.com, [email protected].
Xiushan Nie, Yuling Ma are with the School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China. E-mail: {niexiushan19, mayuling20}@sdjzu.edu.cn.
^∗Corresponding authors.
July 31, 2023
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Meta-learning methods typically follow a two-loop framework, where each loop potentially suffers from notorious overfitting, hindering rapid adaptation and generalization to new tasks. Existing schemes solve it by enhancing the mutual-exclusivity or diversity of training samples, but these data manipulation strategies are data-dependent and insufficiently flexible. This work alleviates overfitting in meta-learning from the perspective of gradient regularization and proposes a data-independent Meta-Gradient Augmentation (MGAug) method. The key idea is to first break the rote memories by network pruning to address memorization overfitting in the inner loop, and then the gradients of pruned sub-networks naturally form the high-quality augmentation of the meta-gradient to alleviate learner overfitting in the outer loop. Specifically, we explore three pruning strategies, including random width pruning, random parameter pruning, and a newly proposed catfish pruning that measures a Meta-Memorization Carrying Amount (MMCA) score for each parameter and prunes high-score ones to break rote memories as much as possible. The proposed MGAug is theoretically guaranteed by the generalization bound from the PAC-Bayes framework. In addition, we extend a lightweight version, called MGAug-MaxUp, as a trade-off between performance gains and resource overhead. Extensive experiments on multiple few-shot learning benchmarks validate MGAug's effectiveness and significant improvement over various meta-baselines. The code is publicly available at <https://github.com/xxLifeLover/Meta-Gradient-Augmentation>.
Meta-learning, Few-shot tasks, Regularization, Network pruning, Data augmentation.
§ INTRODUCTION
Meta-learning aims to rapidly adapt to unseen new tasks by observing the learning process over a wide range of tasks, and has been applied to various scenarios<cit.>, including few-shot learning <cit.>, continual learning <cit.>, transfer learning <cit.>, etc. Most prevalent meta-learning methods follow a unified two-loop framework <cit.>. In the outer loop, a meta-learner explores meta-knowledge from numerous tasks. Based on this meta-knowledge, base learners in the inner loop are expected to quickly fine-tune and adapt to new tasks.
As indicated in <cit.>, the two-loop framework may potentially suffer from meta-overfitting in two aspects: memorization overfitting and learner overfitting. Memorization overfitting <cit.> means that the base learner handles tasks merely based on meta-knowledge rather than task-specific fine-tuning in the inner loop. In this way, meta-knowledge degenerates into rote memorization, which hinders rapid adaptation to new tasks. Learner overfitting <cit.> occurs when meta-learner overfit to insufficient training tasks, typically manifested by a meta-learner that can adapt quickly but still fails on the new task. Intuitively, learner overfitting is similar to overfitting in traditional learning, except that the smallest training unit is changed from samples to tasks <cit.>. Those two forms of overfitting greatly degrade the generalization and robustness of meta-learning methods.
Data manipulation is a simple and efficient way to combat these overfitting issues <cit.>. There are two typical strategies: constructing mutually-exclusive tasks and conducting task-level augmentation. The previous one works on addressing memorization overfitting, where training tasks are independently assigned class labels to avoid the meta-learner handling tasks using rote memorization <cit.>. The latter aims to alleviate learner overfitting by augmenting the training tasks. Typically, TaskAug <cit.> rotates the image by a certain angle and treats it as a new class, thus increasing the diversity of the sampled tasks. To simultaneously alleviate these two forms of overfitting, the MetaMix <cit.> linearly combines features and labels of samples to increase both the mutual-exclusivity and the diversity of training tasks. However, these regularization strategies based on data manipulation are manually designed for specific data or tasks, resulting in a lack of flexibility and generality in real-world applications:
* Most strategies to increase task mutual-exclusivity are designed for classification tasks and are difficult to extend to other task scenarios <cit.>.
* Our experiments in Sec. <ref> show that mutual-exclusivity is effective but short-lived, which means that simply increasing task mutual-exclusivity is not sufficient to combat memorization overfitting.
* Unless carefully designed based on the data, task augmentation is not always effective and can even be detrimental <cit.>. For example, Meta-MaxUp <cit.> systematically explores the effectiveness of various data-based augmentation strategies in meta-learning and reveals that shot augmentation instead reduces the accuracy of the few-shot classification task.
Recent research has focused on more flexible regularization strategies to address these data-dependent limitations <cit.>. For example, to combat memorization overfitting in regression tasks, MetaAug <cit.> extends the mutually-exclusive property to the continuous sample space by perturbing labels with uniform noise. DropGrad <cit.> provides a gradient regularization strategy where meta-gradients[For brevity, meta-learner related concepts are denoted by the prefix `meta-', e.g., `meta-gradient' for the gradient of the meta-learner.] are randomly dropped to alleviate learner overfitting. Despite demonstrating promising results, it remains challenging to address both types of overfitting in a data-agnostic manner.
In this work, we improve generalization in meta-learning from the gradient regularization perspective and propose a data-independent Meta-Gradient Augmentation (MGAug). Considering that the meta-gradient required to update the meta-learner depends on the inner-loop fine-tuning of base learners, the key idea is to first solve the memorization overfitting issue in the inner loop by breaking the rote memorization state, and then yield diverse gradients as the meta-gradient augmentation to alleviate learner overfitting in the outer loop. Specifically, breaking rote memories is achieved by pruning the base learner before each inner loop. To this end, we explore three different levels of pruning strategies, named random width pruning (WP), random parameter pruning (PP), and catfish pruning (CP)[Our catfish pruning is named after the “catfish effect”, which forces sardines to reactivate by putting in catfish to avoid suffocation during transport. Here, sardines are base learner parameters that are forced to fine-tune on tasks to avoid rote states by catfish pruning.], respectively. The first two are based on random strategies and are inspired by the typical GradAug <cit.> and Dropout <cit.>, respectively, while CP is a newly proposed unstructured pruning strategy that measures a Meta-Memorization Carrying Amount (MMCA) score for each parameter and prunes those with high scores to break rote memories as much as possible.
Once the rote memorization is removed, the pruned sub-network has to re-fine-tune the remaining parameters to handle new tasks, thus alleviating memorization overfitting. With different pruning rates, sub-networks produce gradients containing diverse task information as high-quality meta-gradient augmentation to ultimately reduce learner overfitting. The proposed MGAug is theoretically guaranteed by a PAC-Bayes-based generalization bound. In addition, we implemented a lightweight version (MGAug-MaxUp) inspired by the MaxUp strategy <cit.>, as well as explored the plug-and-play property of MGAug as the trade-off between performance gains and computational resources. Extensive experimental results show that both MGAug and MGAug-MaxUp improve the generalization of various meta-learning baselines.
The main contributions are summarized as follows:
* We propose a novel data-agnostic meta-regularization via meta-gradient augmentation (MGAug), which can alleviate both memorization and learner overfitting in the two-loop meta-learning framework.
* We explore three pruning strategies to break rote memorization, including two existing random prunings and a new catfish pruning that measures a Meta-Memorization Carrying Amount (MMCA) score for each parameter.
* We deduce a PAC-Bayes-based generalization bound as the theoretical guarantee for MGAug. In addition, we implement a lightweight extended version called MGAug-MaxUp that trades off performance and overhead.
* Experiments in both mutually-exclusive and non-mutually-exclusive tasks demonstrate that MGAug can be plug-and-played into most meta-learning baselines and significantly improve their performance.
§ RELATED WORK
§.§ Regularization
Regularization techniques prevent the model from overfitting the training data and can be roughly divided into data augmentation, label regularization, and internal changes. Data augmentation <cit.> and label regularization <cit.> modify the input and labels respectively using various transformations (e.g., flipping and noise addition) to increase sample diversity. In contrast, internal variation emphasizes the diversity of parameters or connections, mostly independent of data or labels. The well-known Dropout <cit.> and its variants randomly remove some neurons to force the learner to capture more features. Shake-Shake <cit.> and ShakeDrop <cit.> are designed for a specific residual structure, giving different weights to each residual branch. The network pruning adopted in our MGAug belongs to an internal variation that prunes parameters to break rote memorization state. Another related work is GradAug <cit.>, which augments gradients to enrich network attention and improve generalization in conventional learning. The main difference is that our meta-gradient augmentation is naturally derived from pruned sub-networks in the meta-framework.
§.§ Meta regularization
Meta regularization is specially designed for meta-learning to solve learner and memorization overfitting. For learner overfitting, well-designed task augmentation <cit.> remains an effective solution by increasing task diversity. Meta-MaxUp <cit.> splits the meta-framework and further explores various data augmentation combinations. In contrast, memorization overfitting occurs in the inner loop with only a few updates, invalidating most conventional regularization strategies <cit.>. Although constructing mutually-exclusive tasks <cit.> shows promise against memorization overfitting, the task-dependent property makes it difficult to extend either diversity or mutual-exclusivity to regression and reinforcement learning scenarios <cit.>. MetaPruning <cit.> ignores inner loops and improves meta-generalization through data-agnostic network pruning. Instead, we argue that redundant memories in inner loops are the key cause of memorization overfitting <cit.>. With the same intuition, MR-MAML <cit.> and TAML <cit.> develop explicit meta-regularization terms to constrain the parameter scale and the base learner behavior, respectively. Unlike them, we directly break the rote memorization fetter via proposed catfish pruning and alleviate learner overfitting using derived augmented meta-gradients, which can also be considered an enhanced DropGrad <cit.>.
§ META LEARNING
Meta-learning methods are generally trained and tested on a collection of tasks (here, we omit validation for brevity). To avoid confusion, we use the terms "support set" and "query set" to refer to the training and test samples within a single task, leaving "training set" and "testing set" for the meta-learner <cit.>. We are given a set of training tasks {𝒯_t=(D^s_t, D^q_t)} sampled from the task distribution p(𝒯), where D^s_t = (x^s_t, y^s_t) is the support set containing support samples x^s_t and corresponding labels y^s_t, and D^q_t = (x^q_t, y^q_t) is the query set containing query samples x^q_t and labels y^q_t. The goal of meta-learning is to produce a base learner that can quickly handle new tasks 𝒯_new=(D^s_new, D^q_new), i.e., fine-tune on the support data (x^s_new, y^s_new) and then accurately predict y^q_new for x^q_new. Considering the difficulty of constructing a large number of routine tasks, meta-learning algorithms are usually validated on few-shot tasks. When applied to classification scenarios, this is commonly described as an N-way K-shot task, indicating that the support set contains K samples for each of the N classes, with class labels y^s, y^q ∈{ 1, …, N }. In this way, each task consists of KN support samples, and K is usually small (typically 1 or 5).
§.§ Two-loop meta-learning framework
Most meta-learning methods can be summarized into a two-loop framework <cit.>. The outer loop first involves sampling a batch of tasks {𝒯_t}^T_t=1∼ p(𝒯), and then updating meta-parameters based on feedback derived from the inner loop over these tasks. In each inner loop, with the given meta-parameter ω and fine-tuning policy ℱ, the base parameters are updated from θ(ω) to θ_t(ω) on support samples D^s_t for the t-th task:
θ_t(ω) = ℱ(θ(ω), D^s_t).
Afterwards, the feedback is usually defined as the loss on query sample D^q_t derived from the fine-tuned θ_t(ω). Let ℒ_outer denote the loss function in the outer loop, and meta-parameters are finally optimized by minimizing empirical risk, i.e., min_ω∑_tℒ_outer(θ_t(ω), D^q_t). Without loss of generality, using the gradient descent, meta-update in the o-th outer-loop can be further formalized as
ω^o = ω^o-1 - β/T∑_t=1^T ∇_ωℒ_outer(θ_t(ω^o-1), D^q_t ),
where β is the meta-learning rate, t and T are the index and number of tasks, respectively. Note that different meta-parameter deployments θ(ω) and fine-tuning algorithms ℱ motivate different branches of meta-learning methods.
The well-known gradient-based meta-learning (GBML) <cit.> takes ω as the initialization of base parameters θ and fine-tunes them by gradient descent, such as MAML <cit.>, Meta-SGD <cit.> etc. Formally, let ℒ_inner denotes the loss in the inner loop, then ℱ≜min_θℒ_inner(θ(ω), D^s_t). Similarly, after the o-th outer loop, the initialization and the i-th update of the base learner are as follows:
θ^o, i_t = θ^o, i-1_t - α∇_θℒ_inner(θ^o, i-1_t, D^s_t), …, θ^o, 0_t = ω^o,
where α is the base learning rate. Clearly, GBML expects to meta-learn good initialization to adapt quickly to unseen tasks. An alternative metric-based meta-learning (MBML) <cit.> meta-learns feature extractors and freezes them in inner loops, i.e., θ_t(ω)=θ(ω)=ω. The query prediction is determined by the similarity between the support feature and the query feature, where the similarity is calculated by a non-parametric metric such as Euclidean distance or cosine distance. This paper focuses on these two meta-learning branches, but our MGAug can also be used for other branches and methods <cit.> derived from this two-loop framework, such as R2-D2 <cit.> and MetaOptNet <cit.>.
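To make the two-loop notation above concrete, the following is a minimal first-order (FoMAML-style) sketch of the inner-loop fine-tuning and the meta-update; it uses a single task batch, ignores the second-order term, and all names and hyper-parameter values are illustrative.

import copy
import torch

def inner_loop(meta_model, support_x, support_y, loss_fn, alpha=0.01, steps=5):
    """Fine-tune a copy of the meta-initialization on the support set (inner loop)."""
    base = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(base.parameters(), lr=alpha)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(base(support_x), support_y).backward()
        opt.step()
    return base

def meta_step(meta_model, meta_opt, tasks, loss_fn, alpha=0.01):
    """First-order meta-update: average the query gradients of the fine-tuned base
    learners over a batch of tasks and apply them to the meta-parameters."""
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        base = inner_loop(meta_model, support_x, support_y, loss_fn, alpha)
        query_loss = loss_fn(base(query_x), query_y)
        grads = torch.autograd.grad(query_loss, base.parameters())
        for p, g in zip(meta_model.parameters(), grads):   # first-order approximation
            p.grad = g / len(tasks) if p.grad is None else p.grad + g / len(tasks)
    meta_opt.step()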
§.§ Two types of meta-overfitting
As mentioned, meta-overfitting consists of memorization and learner overfitting, where memorization overfitting is specific to the two-loop framework <cit.>. We explain it by decomposing the meta-update into four steps in Fig. <ref>: Step 1, initialize base parameters θ(ω) based on ω; Step 2, fine-tune θ(ω) to θ_t(ω) on the support set of the t-th task; Step 3, predict the query sample based on θ_t(ω) and calculate loss values; Step 4, update ω once based on the average query error over a batch of tasks. Obviously, the query error is the key feedback for the update of meta-parameters and should be inferred jointly by meta-knowledge (i.e., ω) and task-specific fine-tuning (see Fig. <ref> (a)). Memorization overfitting occurs as ω is trained enough to directly memorize query predictions while ignoring fine-tuning (see Fig. <ref> (b)), implying a degradation of rapid adaptability. Just as regular learners overfit training samples, another learner overfitting means that meta-learners may also overfit training tasks and fail to generalize to unseen novel tasks. Such learner overfitting can be naturally mitigated by task-level variants of regular regularizers <cit.>.
§ META-GRADIENT AUGMENTATION
This work aims to simultaneously mitigate these two types of meta-overfitting in a data-independent manner. The overall idea is to overcome memorization overfitting via network pruning and then alleviate learner overfitting with the resulting augmented meta-gradients. Fig. <ref> shows the illustration of our MGAug with the proposed catfish pruning. In the inner loop, we first allow the base learner to fine-tune once normally and then compute the regular meta-gradient. To break the rote memorization state, catfish pruning removes the parameters that carry the most meta-memorization to force the base learner to re-fine-tune to the support set (Fig. <ref> (c)). Each pruning is like throwing a catfish into sardines (the parameters of the base learner), resulting in different sub-networks and fine-tuning results. The higher the pruning rate, the more severe the memory breakage, and the more fine-tuning is required. After several independent pruning and fine-tuning stages, we obtain a set of augmented meta-gradients containing diverse task information, which are ultimately used to update the meta-learner as an augmentation of the regular meta-gradient. Taking GBML as an example, the rest of this section details the inner and outer loops with our MGAug.
§.§ Inner-Loop with Network Pruning
The inner loop proceeds after the o-th outer loop, where the base parameters are initialized by the latest meta-learned parameters, i.e., θ(ω) ≜θ^o, 0 = ω^o. Let ℱ_ρ denote the pruning criterion. We obtain the pruned sub-network parameters θ^o, 0_ρ = ℱ_ρ(θ^o, 0, ρ) with a given pruning rate ρ and rewrite the i-th update of the base learner on the t-th task in (<ref>) as
θ^o, i_ρ, t = θ^o, i-1_ρ, t - α∇ℒ_inner(θ^o, i-1_ρ, t, D^s_t).
where α is the learning rate in the inner loop. For the pruning criterion ℱ_ρ, we explore three specific strategies whose memorization-breaking intensity gradually increases at the same pruning rate. For brevity, we omit the superscripts of the inner and outer loops below.
§.§.§ Random width pruning (WP)
WP is a structural pruning strategy that prunes the neurons in each layer of the network to meet the given pruning rate. Without loss of generality, we use the l-th convolutional layer parameter θ_(l)∈ℛ^d_in× d_out× k × k for illustration, where k represents the convolution kernel size and d_in and d_out represent the number of input and output channels, respectively. For example, d_in is the channel number of input images in the first layer, and d_out is the number of corresponding convolution kernels. The number of trainable parameters in this layer is n_(l) = d_out× k × k. With the pruning rate ρ∈ [0,1], the parameter of the corresponding layer in sub-networks is θ_ρ, (l), where | θ_ρ, (l)| = n_(l)(1-ρ). Inspired by <cit.>, we sampled the first (1-ρ) × 100% from the entire model as the sub-network, that is, θ_ρ, (l)∈ℛ^(1-ρ) d_in× (1-ρ) d_out× k × k.
§.§.§ Random parameter pruning (PP)
In contrast, PP is an unstructured pruning strategy, where each parameter may be removed individually. Specifically, we introduce an indicator mask m ∈ℛ^n consistent with the shape of the base parameters θ, where n is the number of parameters and each entry of m is randomly selected from the set {0, 1}. A position with a value of 0 in m indicates that the corresponding parameter is pruned; otherwise it is retained. Afterwards, the pruned parameters can be expressed as θ_ρ = m ⊙θ, where ‖ m ‖_0 ≤ n(1-ρ) and ⊙ is the Hadamard product. Compared to WP, PP achieves internal changes at the parameter level, enabling more flexible memorization breaking.
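The sketch below illustrates how the two random strategies (WP and PP) could generate sub-network masks; it assumes PyTorch's (d_out, d_in, k, k) layout for convolution weights, and the helper names are illustrative.

import torch

def width_mask(conv_weight, rho):
    """WP: keep the first (1 - rho) fraction of output and input channels of a
    convolution weight with layout (d_out, d_in, k, k)."""
    d_out, d_in = conv_weight.shape[:2]
    keep_out = max(1, int((1 - rho) * d_out))
    keep_in = max(1, int((1 - rho) * d_in))
    mask = torch.zeros_like(conv_weight)
    mask[:keep_out, :keep_in] = 1.0
    return mask

def parameter_mask(weight, rho):
    """PP: independently keep each parameter with probability (1 - rho)."""
    return (torch.rand_like(weight) > rho).float()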
§.§.§ Catfish pruning (CP)
We further propose a task-oriented pruning criterion that enables stronger memorization breaking based on the current task and parameter state. Like PP, CP also needs an indicator mask m. The difference is that the mask value in CP reflects the amount of memories contained in each parameter rather than being randomly generated. To this end, we define the meta-memorization carrying amount and design a memorization-breaking pruning criterion.
(Meta-Memorization Carrying Amount). Let θ∈ℝ^n denote the base learner parameter and e_(j) be the indicator vector for the j-th parameter θ_(j), whose value is zero everywhere except that index j is one. Keeping everything else constant, we measure the query loss difference in the t-th task before and after pruning parameter θ_(j) to get the following Meta-Memorization Carrying Amount (MMCA):
MMCA_t, (j) ≜Δℒ_(j)(θ; D^q_t)
= ℒ(1⊙θ; D^q_t) - ℒ((1-e_(j)) ⊙θ; D^q_t),
where 1 is the all-ones vector of dimension n and ⊙ denotes the Hadamard product.
MMCA essentially measures the sensitivity of parameter θ_(j) in solving task t. It is reasonable to represent the amount of memorization carried out here since the base parameters in each epoch are initialized by the meta parameters derived from the previous epoch. However, computing MMCA directly for each discrete parameter is prohibitively expensive as it requires n + 1 forward passes (n is the number of parameters). Therefore, by relaxing the binary constraint on the indicator variable, we obtain an approximation of MMCA.
For any task 𝒯_t=(D^s_t, D^q_t) , the change in the loss function on D^q_t before and after removing the j-th parameter θ_(j) can be approximated by
Δℒ_(j)(θ; D^q_t) ≈∂ℒ(θ, D^q_t) /∂θ_(j)×θ_(j).
We defer the proof to Appendix<ref>. Similar approximation strategies <cit.> have also been used to measure the impact of a data point or connection on the loss. The key difference is that we leverage the fact that the query loss depends on meta-knowledge. Based on this MMCA score, we further compute the value of the binary mask m by the designed memorization-breaking pruning criterion. Similar to PP, the parameters θ_(l) in the l-th layer are then pruned by θ_ρ, (l) = m_(l)⊙θ_(l), where || m_(l) ||_0 / n_(l)≤ρ, where n_(l) = |θ_(l)| is the number of parameters in the l-th layer.
(Memorization-breaking pruning criterion). Given a MMCA score mask, the parameters corresponding to the high score positions are removed to break the memorization state as much as possible, i.e.,
m_t, (j,l) = 0 if |MMCA_t, (j,l)| is among the top-ρ% largest values in the l-th layer, and m_t, (j,l) = 1 otherwise,
where (j,l) refers to the j-th parameter in the l-th layer.
In this way, we obtain a layer-level binary mask and can further compute the sub-network predictions via forward propagation. During backward propagation, we apply the same mask to the gradients so that the pruned parameters are no longer updated while the others are trained normally.
Our catfish pruning breaks rote memorization to combat memorization overfitting and generates augmented meta-gradients containing diversity task information, with two differences from connection-sensitivity-based network pruning methods <cit.>. One is that catfish pruning prunes in the inner loop, so the criterion is based on the query set rather than the regular training samples <cit.>. Another essential difference is that normal pruning usually removes insensitive parameters for accuracy preservation, which is even the exact opposite of our memorization-breaking criterion <cit.>. We highlight this difference in Fig. <ref> by visualizing the MMCA distribution of the last layer parameters in the Conv-4 backbone with 20% pruning rate.
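A sketch of how the MMCA approximation in Proposition <ref> and the memorization-breaking criterion in Definition <ref> could be implemented is given below; the per-layer top-ρ selection follows the definition above, and the helper names (mmca_scores, catfish_masks) are illustrative rather than taken from our released code.

import torch

def mmca_scores(base_model, query_x, query_y, loss_fn):
    """Approximate MMCA for every parameter as |dL_query/dθ · θ|,
    following the first-order approximation above."""
    loss = loss_fn(base_model(query_x), query_y)
    named = [(n, p) for n, p in base_model.named_parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, [p for _, p in named])
    return {n: (g * p).abs().detach() for (n, p), g in zip(named, grads)}

def catfish_masks(scores, rho):
    """Memorization-breaking criterion: within each layer, prune (set to 0) the
    top-rho fraction of parameters by MMCA score and keep the rest."""
    masks = {}
    for name, score in scores.items():
        k = int(rho * score.numel())
        mask = torch.ones_like(score)
        if k > 0:
            top = torch.topk(score.flatten(), k).indices   # highest-score positions
            mask.view(-1)[top] = 0.0
        masks[name] = mask
    return masks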
§.§ Outer-loop with Augmented Meta-Gradients
We now obtain two group gradients with respect to the meta-parameters w^o: one is the (normal) meta-gradients obtained by backward-propagation on the full network, and the other is derived from several sub-networks. The former retains full meta-knowledge to speed the training process and avoid underfitting at the early learning stage. The latter is the meta-gradient augmentation resulting from structural perturbations. The meta-parameters are finally updated by accumulating these two-group gradients and formalized as
ω^o+1 = ω^o - β/T ∑_t=1^T ( ∇_ω ℒ_outer(θ_t(ω), D^q_t) + ∑_u=1^U ∇_ω ℒ_outer(θ_u, t(ω), D^q_t) ),
where U is a hyper-parameter representing the number of sub-networks in each task. Since the meta-learner is trained with shared initialization parameters, it naturally shares diversity attention across all sub-networks, which is the key to combating learner overfitting. The entire procedure for the two-loop framework with MGAug is provided in Algorithm <ref>.
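As a rough, first-order illustration of the procedure in Algorithm <ref>, the sketch below fine-tunes the full network and U pruned sub-networks on each task and accumulates their query gradients into the meta-parameters; it reuses the mmca_scores and catfish_masks helpers sketched above, and the pruning rates and all other names are illustrative.

import copy
import torch

def masked_inner_loop(meta_model, masks, task, loss_fn, alpha=0.01, steps=5):
    """Fine-tune a (possibly pruned) copy of the meta-initialization on the support set.
    Pruned positions are zeroed before fine-tuning and their gradients are masked so
    that they stay frozen during the inner loop."""
    support_x, support_y, _, _ = task
    sub = copy.deepcopy(meta_model)
    for name, p in sub.named_parameters():
        if name in masks:
            p.data.mul_(masks[name])                      # zero out pruned weights
    opt = torch.optim.SGD(sub.parameters(), lr=alpha)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(sub(support_x), support_y).backward()
        for name, p in sub.named_parameters():
            if name in masks and p.grad is not None:
                p.grad.mul_(masks[name])                  # keep pruned weights frozen
        opt.step()
    return sub

def mgaug_meta_step(meta_model, meta_opt, tasks, loss_fn, rhos=(0.1, 0.3), alpha=0.01):
    """One outer-loop step: accumulate the full-network meta-gradient and U = len(rhos)
    augmented meta-gradients from catfish-pruned sub-networks (first-order sketch)."""
    meta_opt.zero_grad()
    for task in tasks:
        _, _, query_x, query_y = task
        scores = mmca_scores(meta_model, query_x, query_y, loss_fn)
        networks = [masked_inner_loop(meta_model, {}, task, loss_fn, alpha)]   # full net
        networks += [masked_inner_loop(meta_model, catfish_masks(scores, rho),
                                       task, loss_fn, alpha) for rho in rhos]
        for net in networks:
            grads = torch.autograd.grad(loss_fn(net(query_x), query_y), net.parameters())
            for p, g in zip(meta_model.parameters(), grads):
                p.grad = g / len(tasks) if p.grad is None else p.grad + g / len(tasks)
    meta_opt.step()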
MGAug is designed for the two-loop meta-framework and is quite different from the Dropout-style approaches <cit.>. The former prunes base parameters before each inner loop in order to break the rote memorization state, while the latter directly prunes meta-parameters in the outer loop, essentially to alleviate learner overfitting.
The augmented meta-gradients derived from CP in MGAug are based on the task response rather than directly changing training tasks, which is also essentially different from the stochastic gradient noise strategy <cit.>. The advantages of similar self-guided augmentation have been validated in traditional learning <cit.>.
§.§ MGAug-MaxUp: A Lightweight Version
As an alternative, we further extend a lightweight version, denoted MGAug-MaxUp. Inspired by MaxUp <cit.>, MGAug-MaxUp updates the meta-learner by back-propagating only through the network with the largest query loss instead of all networks. In this way, the update of the outer loop becomes:
ω^o+1 = ω^o - β/T ∑_t=1^T ∇_ω max( ℒ_outer(θ_t(ω), D^q_t), {ℒ_outer(θ_u, t(ω), D^q_t)}_u=1^U ),
where max(·) is the maximum function, which does not itself introduce additional back-propagation during training: only the gradient of the worst copy is propagated. As a trade-off between performance gains and computational costs, MGAug-MaxUp can be easily deployed in resource-limited scenarios.
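Relative to the MGAug sketch above, only the gradient-accumulation step changes; a minimal illustration of the worst-copy selection is given below, with illustrative names.

import torch

def maxup_accumulate(meta_model, networks, query_x, query_y, loss_fn, num_tasks):
    """MGAug-MaxUp: among the full network and its pruned copies (built as in the
    MGAug sketch above), back-propagate only the largest query loss."""
    losses = [loss_fn(net(query_x), query_y) for net in networks]
    worst = max(range(len(losses)), key=lambda i: losses[i].item())
    grads = torch.autograd.grad(losses[worst], networks[worst].parameters())
    for p, g in zip(meta_model.parameters(), grads):
        p.grad = g / num_tasks if p.grad is None else p.grad + g / num_tasks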
§.§ A PAC-Bayes Generalization Bound
We provide a PAC-Bayes-based generalization bound <cit.> for the two-loop meta-learning framework with inner-loop pruning, which can theoretically guarantee the performance of our MGAug. We simplify the analysis by pruning a sub-network for each task (i.e., U=1). Following the meta-learning PAC-Bayes framework <cit.>, let 𝒫 and 𝒬 be the hyper-prior and hyper-posterior of the meta-learner, and assume that loss function is bounded to the interval [0,1]. For a given pruning rate ρ∈[0,1] and initial parameters Θ_i of the base learner on task i, we take the pruned parameters Q_ρ,Θ_i as posterior distribution and the corresponding Q_ρ,0∼𝒬 as the prior distribution.
(Meta-learning PAC-Bayes bound with inner-loop pruning). Let er(𝒬) and êr̂(Q_ρ,Θ, 𝒯) be the expected and empirical errors in meta-learning, and let m_i denote the number of samples in the i-th task. Then for any δ∈ (0,1] the following inequality holds uniformly for all hyper-posterior distributions 𝒬 with probability at least 1-δ,
er(𝒬) ≤ 1/T ∑_i=1^T 𝔼_Q_ρ,0∼𝒬[ êr̂_i(Q_ρ,Θ_i, 𝒯_i) ] + √( ( D(𝒬‖𝒫) + log(2T/δ) ) / ( 2(T-1) ) ) + 1/T ∑_i=1^T √( ( D(𝒬‖𝒫) + log(2Tm_i/δ) + (1-ρ)/2 ‖Θ_i‖^2 ) / ( 2(m_i-1) ) ).
The expected error is bounded by the empirical multi-task error plus two complexity terms. The first is the average of task-complexity terms for observed tasks, and the second is the environment-complexity term <cit.>. MGAug prunes parameters in the inner loop, reducing the complexity cost of base learners by a factor of 1-ρ and further reducing the task-complexity terms. On the other hand, complexity terms are independent on the pruning criterion. A good criterion can improve generalization by minimizing the increase in empirical error <cit.>. Appendix<ref> provides proof of Theorem <ref>.
§ EXPERIMENTS
This section presents extensive experimental results of MGAug and its lightweight versions on multiple public benchmarks. The remainder of the experiments are organized as follows: Subsection <ref> lists the basic experimental setups, including datasets, backbones, hyper-parameters, etc. Subsection <ref> shows the generalization improvement brought by MGAug and the comparison with state-of-the-art methods over various meta-learning instances. Subsection <ref> explores the reasons why MGAug works well by analyzing its behaviors from different perspectives. Subsections <ref> and <ref> investigate the robustness of MGAug for different hyper-parameters and scenarios, respectively.
§.§ Experimental Settings
§.§.§ Datasets
We conduct experiments with two widely-used datasets: mini-ImageNet <cit.> and CUB <cit.>. Mini-ImageNet consists of 100 classes of natural images sampled from ImageNet <cit.>, with 600 images per class. It is split into non-overlapping 64, 16, and 20 classes for training, validation, and testing.
The CUB contains 200 species of birds and 11,788 images in total. We randomly select 100 classes as the training set, and the others are equally divided for validation and testing.
Experiments involve 5-way 1-shot and 5-shot tasks in both mutually exclusive (ME) and non-mutually-exclusive (NME) settings following the previous work <cit.>, where each task contains 15 query samples.
§.§.§ Backbones
We use three backbones of different depths: Conv-4, ResNet-10, and ResNet-18. The Conv-4 contains four convolution blocks, each of which concatenates convolution, BatchNorm, nonlinear activation (ReLU), and max pooling layers. The ResNet-10 is a simplified ResNet-18 <cit.> where only one residual building block is used in each layer. Following previous works <cit.>, images are resized to 84 × 84 before feeding the Conv backbone and to 224 × 224 before feeding the ResNet backbones, and are randomly rescaled to [84, 64, 48] and [224, 192, 160, 140], respectively, as the data augmentation for sub-networks.
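A minimal sketch of this per-sub-network scale augmentation, assuming standard torchvision transforms; whether a rescaled image is fed at the reduced resolution or resized back to the nominal input size is an implementation choice not specified here.

```python
import random
from torchvision import transforms

# Candidate input scales from the paper's setting.
CONV_SCALES = [84, 64, 48]
RESNET_SCALES = [224, 192, 160, 140]

def random_scale_transform(scales):
    """Resize task images to a randomly chosen scale so that different
    sub-networks see differently scaled views of the same task."""
    size = random.choice(scales)
    return transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
    ])

# Example: one ResNet sub-network's view of the task.
tfm = random_scale_transform(RESNET_SCALES)
```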
§.§.§ Baselines
We choose MAML <cit.> and Prototypical Network <cit.> (abbreviated as ProtoNet) as instance baselines. The former belongs to the GBML branch, and the latter is a classic MBML method. For MAML, we implement a first-order approximation FoMAML for efficiency <cit.>. We further take the transformations designed in Baseline++ <cit.> as the data regularization baseline and mark it with `Aug'. In the following experiments, we mark the pruning strategy with “-XX” and make MGAug-CP as the default setting, abbreviated as MGAug.
§.§.§ Implementation details
For 1-shot tasks, we respectively train 4800 and 1600 epochs for GBML and MBML methods, and each epoch includes 100 episodes. For 5-shot tasks, the number of epochs is halved. All results are average results over 600 episodes with confidence intervals of radius one standard error. Following the training procedure of <cit.>, all methods are trained from scratch and use the Adam optimizer with an initial learning rate of 10^-3.
§.§ Comparison with existing meta-regularization strategies
§.§.§ MBML-based strategies
Table <ref> and <ref> list the results of ProtoNet baseline with different meta-regularization methods on CUB and mini-ImageNet, respectively, where the best results are marked in bold and the second with an underline. In addition to Aug <cit.>, we also compare two state-of-the-art regularization methods designed for the MBML branch, including TaskAug <cit.>, Meta-MaxUp <cit.>. Results show that memorization breaking and augmented diversity gradients greatly improve classification accuracy. For example, in the 5-way 1-shot + ResNet-10 scenario, MGAug improves the accuracy by 6.96% and 6.67% on CUB and mini-ImageNet, respectively.
§.§.§ GBML-based strategies
Table <ref> and <ref> list results of FoMAML baseline with different meta-regularization methods on CUB and mini-ImageNet, respectively, where the best results are marked in bold and the second with an underline. We compare four state-of-the-art regularization methods designed for the GBML branch, including MR <cit.>, TAML <cit.>, MetaMix <cit.> and GradDrop <cit.>.
The former two design explicit regularization terms to address memorization and task bias issues in fast adaptation, respectively. While the latter two are typical methods of data and gradient regularization, where MetaMix mixes the input and its features using the MixUp strategy and GradDrop randomly drops meta-gradients to increase its diversity. Compared to random-based strategies, gradient diversity in MGAug is learned by different sub-networks on the same task, which leads to self-guided augmentation and higher classification accuracy.
§.§.§ Loss and accuracy curves
Besides accuracy, we plot the loss and accuracy curves in Fig. <ref> to observe meta-overfitting in FoMAML baseline, Aug, and MGAug. Among them, the FoMAML baseline has the lowest loss and the highest accuracy during training, especially in CUB, while it performs the worst over validation epochs. This inversion is powerful evidence of overfitting. In contrast, Aug and MGAug do not significantly overfit the training task and generalize better on unseen tasks. Another interesting trend is the trade-off between training loss and validation accuracy. The training loss of MGAug is always lower than Aug, but it yields more accurate predictions. This phenomenon means that MGAug learns more generalizable meta-knowledge during training. In other words, even well-designed data transformations may potentially inhibit the representation capability of the network.
§.§ Behavioral Analysis of MGAug
§.§.§ Rote memorization breaking
We observe the behavior of the base learner to investigate the memorization breaking in the inner loop. To this end, we visualize the gap of fine-tuning accuracy between the full-network and sub-networks with different pruning rates via a modified hat graph <cit.>. Fig. <ref> shows the average accuracy of FoMAML with MGAug using ResNet-10 on 100 tasks sampled from CUB. Results for the full-network are indicated by the dashed line with asterisks. The histogram reflects the gap between the accuracy of sub-networks and the full-network, i.e., the upward bar indicates higher accuracy than the full-network and vice versa. The trend in Fig. <ref> reflects whether fine-tuning relies on rote memorization or rapid adaptation of meta-knowledge.
* NME tasks suffer from severe memorization overfitting. Observing Fig. <ref> (c) and (d), due to label mutual exclusion, each ME task cannot be handled solely on memorization, i.e., the accuracy at step-0 is similar to random classification and improves rapidly after fine-tuning. In contrast, for NME tasks, there is rote memorization about training samples in meta-learned parameters that resulted in 28% classification accuracy without fine-tuning (at step-0) and further limited the fine-tuning performance.
* The ME setting is short-lived for solving memorization issues. Although the ME setting avoids the reliance on memorization at step-0, the accuracy increases sharply after only one step and remains almost unchanged until step 5 (fine-tuning five steps by default <cit.>). This trend means that memorization is almost recovered with just one iteration and still prevents subsequent fine-tuning.
* Our MGAug breaks rote memorization. Unlike constructing ME tasks, MGAug directly breaks memorization and inhibits its recovery. An intuitive phenomenon is that accuracy slowly increases during fine-tuning, even with only a 0.1% pruning rate. Following the same ME setting, Fig. <ref> visualizes fine-tuning curves to verify whether fine-tuning is reactivated with broken memorization, which is the key to overcoming memorization overfitting. Further, we plotted the curve fine-tuned from random initialization as a baseline with no memory at all. Clearly, the trend of MGAug is closer to random initialization than to constructing ME tasks, indicating that the memorization issue is significantly alleviated and fine-tuning is reactivated. Also, the accuracy of MGAug improves more quickly than random initialization, implying a faster adaptation of meta-knowledge to new tasks.
* CP has stronger breaking capability than WP and PP. Although all three effectively hinder memory recovery, CP is clearly the most effective, followed by PP and finally WP, as seen in step-1 in Fig. <ref> (a), (b), and (c).
§.§.§ Augmented meta-gradients
We empirically infer that the effectiveness of the meta-gradient augmentation derived from pruned sub-networks is twofold. One is the improvement from resolving memorization overfitting, which has been verified in the previous section. The other is the diversity of attention introduced by sub-networks with different pruning rates, even for the same task. To verify this, Fig. <ref> visualizes the attention regions of different sub-networks using Grad-CAM <cit.> and lists representative examples. Interestingly, attention changes seem to occur more often in samples containing insignificant objects (bottom). Conversely, for the salient ones (top), the learner is more confident in the predictions.
§.§.§ Plug-and-play property
In addition, MGAug can also improve meta-generalization in a flexible plug-and-play way. Fig. <ref> shows the results of training with MGAug starting from epochs 0, 400, 800, and 1200 on 5-way 5-shot CUB tasks. Both train and test curves show that MGAug consistently improves ProtoNet baseline performance and avoids meta-overfitting, even if it is only used for the last 400 epochs.
§.§.§ The comparison of training costs
The additional overhead required by MGAug is related to the number U of sub-networks, more precisely U times the cost of the vanilla model. Although it can be accelerated by parallel computing the sub-network, we still designed the lightweight MGAug-MaxUp to trade off performance and overhead. Table <ref> lists the time for meta-training once on a single 5-way 5-shot task under the same environment. It can be seen that MGAug-MaxUp performs similarly to the vanilla methods with almost no additional computational cost.
§.§ Robustness experiments on hyper-parameters
This subsection aims to evaluate the robustness of MGAug to hyper-parameters. Specifically, the hyper-parameters mainly involve the number of sub-networks and the range of pruning rates. The following experiments are all performed on the 5-way 5-shot CUB tasks using the Conv-4 backbone.
§.§.§ The number of sub-networks
Fig. <ref> (left) shows the results of ProtoNet with MGAug using different numbers of sub-networks (from one to five). Clearly, MGAug outperforms the baseline for all hyper-parameter settings. Even pruning one sub-network for each task still improves classification accuracy, which validates the theoretical analysis in Theorem <ref>. In other experiments, three and one sub-network empirically default to the GBML and MBML branches, respectively.
§.§.§ The range of pruning rates
Fig. <ref> (right) shows the results of four range pruning rates, where the largest pruning rates are 2%, 5%, 10% and 20%, respectively. As the pruning rate increases, the accuracy gradually decreases. This is because the proposed memorization-breaking criterion preferentially removes parameters carrying meta-memorization. When the memorization is completely erased, meta-learning degenerates into ordinary few-shot learning. Nevertheless, even with the 20% setting, MGAug still improves the baseline, indicating its robustness to hyper-parameters.
§.§ Robustness experiments on other scenarios
To verify the flexibility and generality of MGAug, we provide experimental results in more scenarios, including more meta-baselines, backbones, and tasks. The experiments in this subsection are based on 5-way 1-shot tasks.
§.§.§ More meta-learning instances
We first supplement the results of FoMAML and ProtoNet with a Conv-4 backbone on the mini-ImageNet dataset in Table <ref>, and then integrate MGAug into more meta-baselines, including Reptile <cit.>, CAVIA <cit.>, MAML <cit.>, R2-D2 <cit.>, and MetaOptNet <cit.>. The first three are instances of the GBML, and the latter two belong to a branch called last-layer meta-learning <cit.>. The classification results listed in Table <ref> show that MGAug significantly improves the performance of these meta-learning methods compared to the data augmentation (i.e., Aug), especially in Reptile, CAVIA, and MetaOptNet.
§.§.§ Deeper backbones
Since the underlying assumption is that deep models are more susceptible to overfitting, we explore the performance of MGAug on the deeper ResNet-34 and ResNet-50 backbone <cit.>. Table <ref> lists the classification accuracy of the MGAug based on the ProtoNet baseline. Compared with shallower backbones, data augmentation improves accuracy significantly in deeper ones, while MGAug can further gain about 3% improvement.
§.§.§ Cross-domain tasks
To further evaluate how MGAug improves generalization of meta-learning methods, we conduct a cross-domain experiment in which the test set is from an unseen domain. Following the cross-domain setting in <cit.>, meta-learner is trained on mini-ImageNet and evaluated on the few-shot task constructed on CUB, or vice versa. The classification accuracy on the 5-way 1-shot task is shown in Table <ref>, where the models trained on mini-ImageNet perform better overall than those trained on CUB, benefiting from the rich categories of training samples. In both cross-domain settings, our method still significantly improves accuracy, indicating that MGAug induces the model to meta-learn more transferable features.
§ CONCLUSION
This work proposes a data-independent meta-regularization method, termed MGAug, which can alleviate both memorization and learner overfitting in the two-loop meta-learning framework. Unlike existing task augmentation and explicit regularization terms, the key idea is to first solve the rote memorization issue and restore adaptability in the inner loop via network pruning, and then alleviate learner overfitting with augmented meta-gradients derived from the pruned sub-networks. We explore two random pruning strategies and propose a novel catfish pruning that achieves the most significant memorization breaking by removing the parameters carrying the largest amount of rote memorization. We also deduce a PAC-Bayes-based generalization bound for MGAug and further implement a lightweight version balancing performance and overhead. Extensive experimental results show that MGAug significantly outperforms existing meta-learning baselines. Meanwhile, we believe that MGAug's ideas and implementation can also inspire and drive the development of gradient regularization strategies.
§ APPENDIX
§.§ Proof of Proposition 1
For the t-th task, Meta-Memorization Carrying Amount (MMCA) is defined as the difference in query loss before and after pruning parameter θ_(j), i.e.,
MMCA_t, (j) ≜Δℒ_(j)(θ; D^q_t)
= ℒ(1⊙θ; D^q_t) - ℒ((1-e_(j)) ⊙θ; D^q_t),
where 1 is the all-ones vector of dimension n and ⊙ is the Hadamard product. Here e_(j) is the indicator vector for the j-th parameter θ_(j), whose entries are zero everywhere except at index j, where the value is 1.
In essence, the calculation of MMCA_t,(j) measures the effect of the j-th initial parameter on the loss function. We additionally introduce a pruning indicator variable c∈{0,1}^n, where c_(j) indicates whether the parameter θ_(j) is preserved (c_(j)=1) or pruned (c_(j)=0). Further, the optimization objective of the base learner in the inner loop can be rewritten as min_θℒ(c ⊙θ(ω), D^s_t). Obviously, Δℒ_(j)(θ; D^q_t) can be approximated as the derivative of ℒ with respect to c_(j). But, since c is binary, ℒ is not differentiable with respect to c in this discrete setting. Therefore, by relaxing the binary constraint on the indicator variable c, the effect of parameter θ_(j) on the loss can be approximated as:
Δℒ_(j)(θ; D^q_t) ≈ ∂ℒ(c⊙θ, D^q_t)/∂ c_(j) |_c=1
= lim_δ→0 [ℒ(c⊙θ, D^q_t) - ℒ((c - δ e_(j))⊙θ, D^q_t)]/δ |_c=1.
Clearly, ∂ℒ/ ∂ c_(j) is an infinitesimal version of Δℒ_(j), that measures the rate of change of ℒ with respect to an infinitesimal change in c_(j) from 1 → 1 - δ. This can be computed efficiently in one forward-backward pass using automatic differentiation, for all j at once <cit.>.
Assume c_(j) = 1. Let a_(j) be the incoming activation that is multiplied by θ_(j), and z be the pre-activation of the neuron to which θ_(j) serves as an input, i.e., z = c_(j) a_(j)θ_(j). According to the given conditions and the chain rule, we can deduce the MMCA value by
MMCA_t,(j) ≈ ∂ℒ/∂ c_(j) = (∂ℒ/∂ z)(∂ z/∂ c_(j)) = (∂ℒ/∂ z) a_(j)θ_(j)
= (∂ℒ/∂ z)(∂ z/∂θ_(j)) θ_(j) = (∂ℒ/∂θ_(j)) θ_(j).
Therefore, the MMCA score of a parameter is essentially determined by its weight and its derivative: the weight encodes the state of the meta-memorization (knowledge), while the derivative indicates how sensitive that memorization is to the current task. Removing the parameters with large MMCA scores therefore efficiently breaks rote memorization.
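A minimal PyTorch-style sketch of catfish pruning based on this score is given below; the model/loss handles, the choice of task data used to evaluate the loss, and the way the resulting masks are applied in the inner loop are assumptions for illustration only.

```python
import torch

def catfish_pruning_masks(model, loss_fn, task_data, prune_rate=0.01):
    """Compute per-parameter masks that remove the weights with the largest
    MMCA scores, approximated by |dL/dtheta * theta| (cf. Proposition 1)."""
    inputs, targets = task_data
    loss = loss_fn(model(inputs), targets)
    params = [p for p in model.parameters()]
    grads = torch.autograd.grad(loss, params)

    scores = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    k = max(1, int(prune_rate * scores.numel()))
    threshold = scores.topk(k).values.min()

    # Mask entry 0 = pruned (score above threshold), 1 = kept.
    return [((g * p).abs() < threshold).float() for g, p in zip(grads, params)]
```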
§.§ Proof of Theorem 1
This section provides the proof of Theorem 1. Following the previous work <cit.>, the proof begins with McAllester's classical PAC-Bayes bound <cit.> for a single task and consists of two steps. In the first step, we bound the errors caused by observing insufficient samples in each task, where each task is assigned a pruned sub-network. In the second step, we bound the error caused by observing a limited number of tasks in the environment.
(McAllester's single-task bound <cit.>). Let 𝒳 be a sample space and 𝕏 some distribution over 𝒳, and let ℱ be a hypothesis space of functions over 𝒳. Define a `loss function' g(f, X):ℱ×𝒳→[0, 1], and let X_1^M:=X_1,…,X_M be a sequence of M independent random variables distributed according to 𝕏. Let π be some prior distribution over ℱ (which must not depend on the samples X_1,…,X_M). For any δ∈ (0, 1], the following bound holds uniformly for all `posterior' distributions κ over ℱ (even sample-dependent),
ℙ_X_1^M i.i.d.∼𝕏{𝔼_X∼𝕏 𝔼_f∼κ g(f,X) ≤ 1/M∑_m=1^M 𝔼_f∼κ g(f,X_m)
+ √((D(κ‖π)+log(M/δ))/(2(M-1))), ∀κ}≥ 1-δ.
§.§.§ First step
We use Theorem 2 to bound the generalization error in each of the observed tasks with a meta-learned algorithm 𝒬.
Let i∈ 1,…, T be the index of observed tasks. The samples are X_m:=z_i,j, the number of samples is M:=m_i, and sample distribution is 𝕏:=𝒟_i. The `loss function' is g(f, X):=l(h,z). We define the `prior over hypothesis' π:=(𝒫, P), in which we first sample P from 𝒫 and then sample hypothesis h from P. According to Theorem 2, the `posterior over hypothesis' can be any distribution, in particular, the bound will hold for the following family of distributions κ:=(𝒬, Q), where we first sample P from 𝒬 and then sample h from Q = Q(𝒯_i, P) with the task 𝒯_i. For deep-network-based methods, the meta-learned algorithm typically refers to the meta-parameters Θ∈ℝ^d. Given a dropout rate ρ∈ [0, 1], we can generate sub-network parameters θ∈ℝ^d by pruning based on the proposed criterion. For each coordinate θ^i, the value 0 with probability ρ (pruning the coordinate θ^i) or with probability 1-ρ setting θ^i = Θ^i + ϵ, where ϵ∼𝒩(0, 1) is an auxiliary noise vector. Let Q_ρ, Θ denote the distribution on parameter vectors defined by this pruning process, and the `prior' and `posterior' distributions can be re-marked as Q_ρ, 0 and Q_ρ, Θ.
To further clarify the formal notation of the pruning process, we consider the Boolean d-cube ℬ, which is the set of pruning mask vectors s∈ℝ^d such that s^i∈{0,1} for all 1≤ i≤ d. Following the previous work <cit.>, s∈ℬ is called the “sparsity pattern”. We let S_ρ be the distribution on sparsity patterns generated by selecting each s^i independently with the probability of s^i=0 being ρ. For a given s and θ∈ℝ^d we will write s∘θ for the Hadamard product defined by (s∘θ)^i=s^iθ^i. We then have that a draw from Q_ρ, Θ can be made by first drawing a sparsity pattern s∼ S_ρ and a noise vector ϵ∼𝒩(0, 1), and then constructing the product s∘(Θ + ϵ), i.e., 𝔼_θ∼ Q_ρ,Θ(f(θ)) = 𝔼_s∼ S_ρ, ϵ∼𝒩(0,1)(f(s∘(Θ + ϵ))).
The KL-divergence term is
D(κ‖π) = 𝔼_f∼κ log κ(f)/π(f) = 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ log [𝒬(Q_ρ,0) Q_ρ,Θ(h)]/[𝒫(Q_ρ,0) Q_ρ,0(h)]
= 𝔼_P∼𝒬 log 𝒬(P)/𝒫(P) + 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ log Q_ρ,Θ(h)/Q_ρ,0(h)
= D(𝒬‖𝒫) + 𝔼_Q_ρ,0∼𝒬 D(Q_ρ,Θ‖ Q_ρ,0)
= D(𝒬‖𝒫) + 𝔼_s∼ S_ρ 𝔼_ϵ∼𝒩(0,1) ln [S_ρ(s) e^{-1/2‖ s∘ϵ‖^2}]/[S_ρ(s) e^{-1/2‖ s∘(Θ+ϵ)‖^2}]
= D(𝒬‖𝒫) + 𝔼_s∼ S_ρ(1/2‖ s∘Θ‖^2)
= D(𝒬‖𝒫) + (1-ρ)/2·‖Θ‖^2.
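The last equality only uses the fact that each mask entry s^i∈{0,1} is kept with probability 1-ρ, so that 𝔼[s^i]=1-ρ; written out:

```latex
\mathbb{E}_{s\sim S_\rho}\Big(\tfrac{1}{2}\|s\circ\Theta\|^2\Big)
 = \tfrac{1}{2}\sum_{i=1}^{d}\mathbb{E}[s^i]\,(\Theta^i)^2
 = \tfrac{1-\rho}{2}\,\|\Theta\|^2 .
```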
Plugging in to (<ref>), for all observed tasks i=1,…,T, we obtain that for any δ_i > 0
ℙ_𝒯_i∼𝒟_i^m_i{𝔼_z∼𝒟_i 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ_i l(h,z) ≤ 1/m_i∑_j=1^m_i 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ_i l(h, z_i,j)
+ √((D(𝒬‖𝒫) + (1-ρ)/2·‖Θ_i‖^2 + log(m_i/δ_i))/(2(m_i-1))), ∀𝒬}≥ 1-δ_i,
§.§.§ Second step
Similar to the first step, we use Theorem 2 with the following substitutions to bound the environment-level generalization error. Note that this is consistent with the previous work <cit.>, and we reformulated it here for completeness of proof. Let (𝒟_i, m_i) be sampled from the task-distribution τ and 𝒯_i ∼ D_i^m_i, we denote iid samples as (𝒟_i, m_i, 𝒯_i), i=1,…,T. The `hypotheses' are f:=Q_ρ, 0 and the `loss function' is g(f, X):=h∼ Q_ρ, Θ𝔼z∼𝒟𝔼l(h,z). Let π:=𝒫 be prior distribution over hypothesis, the bound will hold uniformly for all distributions κ:=𝒬,
ℙ_(𝒟_i, m_i)∼τ, 𝒯_i∼𝒟_i^m_i, i=1,…,T{𝔼_(𝒟, m)∼τ 𝔼_S∼𝒟^m 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ 𝔼_z∼𝒟 l(h,z)
≤ 1/T∑_i=1^T 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ 𝔼_z∼𝒟_i l(h, z)
+ √((D(𝒬‖𝒫)+log(T/δ_0))/(2(T-1))), ∀𝒬}≥ 1-δ_0.
Finally, denote the expected error of the meta-learner as er(𝒬, τ) := 𝔼_(𝒟, m)∼τ 𝔼_S∼𝒟^m 𝔼_Q_ρ,0∼𝒬 𝔼_h∼ Q_ρ,Θ 𝔼_z∼𝒟 l(h,z) and the empirical error of each task as êr̂(𝒬, 𝒯) := 𝔼_h∼ Q_ρ,Θ 𝔼_z∼𝒟_i l(h, z), respectively. We bound the probability of the event that is the intersection of the events in (<ref>) and (<ref>) by using the union bound. For any δ > 0, set δ_0:=δ/2 and δ_i:=δ/2T for i=1,…,T; then the following holds,
er(𝒬) ≤ 1/T∑_i=1^T 𝔼_Q_ρ,0∼𝒬 êr̂_i(Q_ρ,Θ_i, 𝒯_i) + √((D(𝒬‖𝒫) + log(2T/δ))/(2(T-1)))
+ 1/T∑_i=1^T√((D(𝒬‖𝒫) + log(2Tm_i/δ) + (1-ρ)/2·‖Θ_i‖^2)/(2(m_i-1))),
which completes the proof.
§ ACKNOWLEDGEMENTS
This research was supported in part by the Natural Science Foundation of China (No. 62106129, 62176139, and 62177031), the Natural Science Foundation of Shandong Province (No. ZR2021QF053, ZR2021ZD15), and the China Postdoctoral Science Foundation (No. 2021TQ0195, 2021M701984).
|
http://arxiv.org/abs/2306.03780v3
|
20230606153450
|
Notes on conformal anomaly, nonlocal effective action and the metamorphosis of the running scale
|
[
"A. O. Barvinsky",
"W. Wachowski"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
[email protected]
Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia
Institute for Theoretical and Mathematical Physics, Moscow State University, Leninskie Gory, GSP-1, Moscow, 119991, Russia
[email protected]
Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia
We discuss the structure of nonlocal effective action generating the conformal anomaly in classically Weyl invariant theories in curved spacetime. By the procedure of conformal gauge fixing, selecting the metric representative on a conformal group orbit, we split the renormalized effective action into anomalous and Weyl invariant parts. A wide family of thus obtained anomalous actions is shown to include two special cases of Riegert–Fradkin–Tseytlin and Fradkin–Vilkovisky actions. Both actions are shown to be contained in the first three orders of the curvature expansion for a generic one-loop effective action obtained by covariant perturbation theory. The complementary Weyl invariant part of the action is given by the “conformization” of the full effective action—restricting its argument to the conformally invariant representative of the orbit of the conformal group. This is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the perturbation expansion for the effective action with typical nonlocal logarithmic form factors. We derive the relation between quantum stress tensors on conformally related metric backgrounds, which generalizes the known Brown-Cassidy equation to the case of nonzero Weyl tensor, and discuss applications of this relation in the cosmological model driven by conformal field theory. We also discuss the issue of renormalization group running for the cosmological and gravitational coupling constants and show that it exhibits a kind of a metamorphosis to the nonlocal form factors of the so-called partners of the cosmological and Einstein terms—nonlocal curvature squared terms of the effective action.
Notes on conformal anomaly, nonlocal effective action and the metamorphosis of the running scale
W. Wachowski
July 31, 2023
================================================================================================
To the memory of Stanley Deser
§ INTRODUCTION
The status of local Weyl anomalies is widely considered to be fully settled in the current literature. However, the issue of their relevance to concrete physical effects, as opposed to a mere criterion of consistency at the quantum level of classically Weyl invariant theories, often remains a subject of debate. The manifestation of the conformal anomaly in physical applications usually occurs within the effective action formalism, and there is a debate, extending over many years, on the structure of this action between the pioneers of the conformal anomaly and adherents of perturbation theory. The nature of this debate is a seemingly contradictory difference between the known expression for the anomaly action and the form of the nonlocal effective action obtained by Feynman diagrammatic technique.
As is well known, the one-loop conformal anomaly for classically Weyl invariant 4-dimensional theory having in Euclidean curved spacetime the covariantly renormalized effective action Γ[ g_μν] reads as <cit.>
⟨ T^μ_μ⟩ ≡ 2 g_μν/√(g) δΓ/δ g_μν = 1/16π^2(α C^2 + β E + γ□R),
E = R_μναβR^μναβ - 4R_μνR^μν + R^2,
Γ_A[ g ] = 1/64π^2∫ d^4x √(g) (α C^2 + β/2 E_4) 1/Δ_4 E_4
- 1/32π^2(γ/6+β/9) ∫ d^4x √(g)R^2,
where
E_4 ≡ E - 2/3 □R,
Δ_4 denotes the so-called Paneitz operator <cit.>
Δ_4 = □^2 + 2R^μν∇_μ∇_ν - 2/3 R□ + 1/3(∇^μR) ∇_μ
and 1/Δ_4 denotes its inverse—the notation for the operation of acting by its Green's function G(x,y) on a generic test function ψ(y), Δ_4 G(x,y)= δ(x,y), (1/Δ_4) ψ(x)= ∫ d^4y G(x,y) ψ(y).
Some time after the invention of the RFT action the attention to it was drawn by Antoniadis, Mazur and Mottola due to several applications in gravity theory <cit.>, but this caused a serious criticism <cit.> of the expression (<ref>) in view of its drastic structural difference from the renormalized effective action built within perturbation theory in powers of spacetime curvature. This expansion begins with <cit.>
Γ_ren = 1/32π^2∫ dx √(g)[-α C_μναβ ln(-□/μ^2) C^μναβ
- γ/6 R^2 ] + O(ℛ^3),
ℛ collectively denoting here the Riemann, Ricci and scalar curvatures, and does not at all resemble the form of (<ref>). This criticism was maintained by objections against the short distance behavior of stress tensor correlation functions generated by the RFT action, which were shown to contradict the conformal Ward identities for these correlators <cit.>. Another criticism was associated with objections against the double pole structure of the Green's function of the operator (<ref>), ∼ 1/□^2 <cit.>. Although these objections were disclaimed in <cit.> by explicit calculations of ⟨ TTT⟩-correlators, the question might still be hovering unsettled in the literature <cit.>.
The goal of this paper will be to discuss the status of the effective action responsible for the generation of the Weyl anomaly. To begin with we will focus on a wide variety of nonlocal anomalous actions by including the RFT action in their functional family. The idea of this construction is similar to gauge fixing applied to the ambiguity of the conformal split of the metric argument of the action functional, which was suggested rather long ago in <cit.>. The resulting class of anomaly actions will be parameterized by the conformal gauge selecting the representative on the orbit of the local conformal group. We will explicitly demonstrate that the difference between the members of this class is a Weyl invariant functional—a point of departure between various suggestions for the anomalous action. Two particular gauges will be considered, one of them exactly corresponding to the RFT action (<ref>) and another associated with the Weyl invariant nonlocal rescaling of the metric field suggested by Fradkin and Vilkovisky. This rescaling, which is directly applicable in asymptotically flat spacetimes, was designed as a remedy against the trace anomaly <cit.>—the analogue of the Yamabe problem of a local Weyl transformation to the metric with a vanishing scalar curvature.
Then we show how the Fradkin–Vilkovisky version of the anomaly action arises in the first three orders of the covariant curvature expansion for a generic one-loop effective action. We discuss the associated mechanism of partial summation of scalar curvature terms of this expansion <cit.> along with the double pole problem for the Green function of the Paneitz operator (<ref>).
Lack of uniqueness of the anomaly action defined only up to a Weyl invariant functional raises, of course, the question of its incompleteness in concrete applications. This also poses the question of whether the RFT action or its modifications within the above class provides an optimal description of the physical problem in question. For example, it is well known that in two dimensions the stress tensor trace anomaly and the associated nonlocal Polyakov action are fully responsible for the Hawking radiation of the two dimensional black holes <cit.>. On the contrary, in higher dimensions the anomaly action is insufficient to describe this phenomenon. Still there is a strong belief <cit.> that at distances of the horizon scale gravity theory is essentially modified due to large infrared effects of the conformal mode described by the action (<ref>). These effects might dominate macroscopic physics at such scales, like for instance the near black hole horizon behavior of quantum stress tensor <cit.>, the contribution to the scalar sector of gravitational waves <cit.> or dynamical vacuum energy in effective theory of gravity <cit.>. Though it is not entirely clear how complete is the setup in these problems, there are physical situations when the conformal mode really runs the whole show, and we consider as a direct application of (<ref>) two examples of such a situation. These are the calculation of the metric stress tensor in a generic conformally flat spacetime <cit.> and the Friedmann metric cosmology driven by the trace anomaly of conformal invariant fields <cit.>, the latter playing important role in the model of initial conditions for inflationary cosmology <cit.>.
A related issue in the problem of nonlocal effective action is the question of renormalization group (RG) running of the cosmological and gravitational constants. Though the issue of running scale and its relation to the cosmological constant problem has already become a byword in current literature, it becomes increasingly clearer that this running should not be interpreted in the usual sense of RG theory <cit.>. The notion of “scale” is so ambiguous in physics that its running nature actually looses universality when addressing various physical setups, like for example associating cosmological inflation with RG running <cit.>. Serious arguments against running nature of the cosmological and gravitational couplings in <cit.> have led to the notion of cosmological constant partners <cit.> interpreted in <cit.> in terms of separation of scales or decoupling of heavy modes <cit.>. Still, it is customary to have nontrivial solutions of RG equations in renormalizable gravity models <cit.> with running scale dependent and G. Therefore a natural question arises how these solutions have to be interpreted when the tadpole structure of the covariant cosmological and Einstein terms preclude them from their actual dependence on the momentum <cit.>.
So one of the goals of this paper is an attempt to clarify this issue within a special version of the notion of the “scale”. Looking forward to the final conclusion, we might formulate the suggestion for the notions of running Λ and G couplings as their conversion or metamorphosis into their nonlocal partners similar to those introduced by J. Donoghue in <cit.>. Within the perturbation scheme the cosmological and Einstein terms start manifesting themselves as nonlocal curvature squared terms very different from their original form.
The paper is organized as follows. In Sect. <ref> we decompose the quantum effective action into anomalous and Weyl invariant parts by imposing the conformal gauge for the choice of the representative on the orbit of the conformal group. This allows one to build the whole class of nonlocal anomalous actions, functionally parameterized by the choice of this gauge and including the RFT action (<ref>) and the Fradkin–Vilkovisky action suggested in <cit.>. Sect. <ref> contains the discussion of the covariant curvature expansion of <cit.> and the way how it contains the anomalous action in the lowest orders of this expansion. In particular, it is shown that the Fradkin–Vilkovisky version of this action performs a resummation of the covariant curvature series in powers of the Ricci scalar <cit.>. In Sect. <ref> we give a direct and, apparently, not very well known derivation from the RFT action of the vacuum stress-tensor behavior at the orbit of the conformal group—a good example of direct applicability of (<ref>). Here we also comment on the application of the anomalous conformal Wess-Zumino action to the a-theorem <cit.> and present the generalization of the Brown–Cassidy formula <cit.> for the stress tensor to the case of a nonzero Weyl tensor, see Eq.(<ref>). Applications of the anomaly action in conformally flat spacetime are presented in Sect. <ref>. It is shown how this action underlies the construction of the inflation scenario starting from the cosmological initial state in the form of the mircocanonical density matrix <cit.>, recently reviewed in <cit.>. Important feature of this application is the value of the Casimir vacuum energy which is also determined by the coefficients of the anomalous trace (<ref>) <cit.>.
In Sect.6 we discuss the problem of scale dependence of the gravitational and cosmological constants related to the ideas of <cit.> and <cit.>. Here we show that in the UV regime the RG analysis of the cosmological and Einstein terms strongly points out to the conversion of their scale dependence into the nonlocal form factors of their UV partners represented by curvature squared terms with dimensionless nonlocal coefficients. We call this phenomenon a metamorphosis of the running scale, which we derive by using a special scaling operator. In IR domain the same analysis leads to the low energy partners depending on mass scale of the theory. These nonlocal partners were suggested in <cit.> by J. Donoghue for the cosmological constant term and blueprinted for the Einstein term in <cit.> in the form of the long distance modification of Einstein gravity.
In the concluding section we briefly recapitulate the above observations and dwell on related potential problems and applications. We start by discussing the role of Weyl anomaly in the problem of cosmological initial conditions for the inflation scenario driven by a conformal field theory <cit.>. This scenario motivates introduction of numerous conformal higher spin (CHS) fields whose Weyl anomaly is generated only in the one-loop approximation and, thus, acquires a kind of nonperturbative status. Then we discuss the uniqueness for the nonlocal scaling operator used for the derivation of the above metamorphosis phenomenon. In particular, we show that in the curvature squared terms of the action it is nearly uniquely determined due to general covariance of the theory, though in Lorentz symmetry violating models like Hořava gravity <cit.> it may be rather ambiguous.
§ CONFORMAL GAUGE FIXING
The splitting of the renormalized effective action of a classically conformally invariant theory into the anomaly part Γ_A generating the trace anomaly (<ref>) and the Weyl invariant part Γ^conf, g_μν δΓ^conf/δ g_μν=0,
Γ_ren = Γ_A + Γ^conf,
is obviously not unique and admits the freedom
Γ_A→Γ_A+W^conf, Γ^conf→Γ^conf-W^conf,
with an arbitrary conformally invariant functional W^conf,
g_μν δ W^conf/δ g_μν=0.
The freedom in the choice of W^conf[ g_μν] arises as a functional integration constant for the first order variational equation that can be written down for Γ_A[ g_μν] or for the renormalized effective action Γ[ g_μν]≡Γ_ren[ g_μν]. At the orbit of the conformal group passing through the metric g_μν—the argument of the effective action—and parameterized by the local conformal parameter σ=σ(x),
g_μν = e^2σg̅_μν,
the renormalized action Γ_ren[ e^2σg̅ ] satisfies the equation
δΓ_ren[e^2σg̅]/δσ = √(g)/16π^2(α C^2 + β E + γ□R) |_g_μν = e^2σg̅_μν,
which can be integrated to give conformal Wess-Zumino action <cit.>
ΔΓ[ g̅, σ ] ≡ Γ_ren[ g ] - Γ_ren[ g̅ ]
= 1/16π^2∫ d^4x √(g̅){[αC̅^2 + βĒ_4] σ + 2βσΔ̅_4σ}
- 1/32π^2(γ/6 + β/9) ∫ d^4x (√(g)R^2 - √(g̅)R̅^2),
where the two metrics g_μν and g̅_μν are related by the equation (<ref>), all barred quantities are built in terms of g̅_μν and Δ̅_4 is the barred version of the fourth-order Paneitz operator (<ref>). This expression Γ_ren[ g ]-Γ_ren[ g̅ ]=Γ_A[ g ]-Γ_A[ g̅ ] can also be rewritten in the other form
Γ_A[ g ]-Γ_A[ g̅ ]
= 1/16π^2∫ d^4x √(g) { [ α C^2 + β E_4] σ - 2β σΔ_4σ }
- 1/32π^2(γ/6 + β/9) ∫ d^4x (√(g)R^2 - √(g̅)R̅^2),
if one takes into account two important properties of the Paneitz operator—Weyl invariance of its densitized form,
√(g̅) Δ̅_4 = √(g) Δ_4,
and the finite conformal transformation of E_4—the Gauss-Bonnet density modified by the √(g)□R term (<ref>),
√(g) E_4 = √(g̅) Ē_4 + 4√(g̅) Δ̅_4σ.
These two properties are consistent with each other because the last equation should obviously remain valid under the interchange of g_μν and g̅_μν accompanied by flipping the sign of σ.
There is also the third form of the Wess-Zumino action, which will be given below in Eq.(<ref>). It exists for a special renormalization converting to zero the coefficient γ of the R term in (<ref>), and underlies the proof of the so-called a-theorem for the monotonic RG flow of the coefficient a=β/16π^2 of the topological term in the trace anomaly <cit.>.
Modulo a nonvanishing conformal anomaly all points on the orbit of the conformal group (<ref>) are physically equivalent, and this typical situation of a broken local gauge invariance can be managed by introducing the gauge condition which uniquely selects g̅_μν as the representative of the equivalence class of metrics (<ref>). If we denote this gauge condition as χ[g̅]=0 then this representative should be uniquely selected by the solution of the equation for the conformal parameter σ,
χ[ g̅ ] = χ[ g e^-2σ] = 0,
this solution being a functional of the metric, Σ_χ[ g ], labelled by the gauge symbol χ,
σ = Σ_χ[ g ].
The representative of the conformal orbit g̅_μν[g] as a functional of a given metric g_μν (through which the orbit is passing) becomes Weyl invariant,
g̅_μν[ g ]≡ g_μν e^-2Σ_χ[ g ],
g_αβ δg̅_μν[ g ]/δ g_αβ = 0,
because under any local Weyl rescaling g_μν→ e^2σ g_μν the conformal parameter transforms as Σ_χ[g]→Σ_χ[g] + σ in view of the identity χ[g e^-2Σ_χ[g]] ≡ 0, so that
δ_σΣ_χ[ g ]=σ,
where δ_σ is the operator of the conformal variation
δ_σ≡ 2∫ d^4x σ(x) g_μν(x)δ/δ g_μν(x).
For the uniqueness of such conformal gauge fixing procedure (in spacetime and at least in some finite domain of the space of metrics) the Faddeev–Popov operator Q_χ=Q_χ(x,y), corresponding to the gauge χ[g], δ_ωχ(x)=∫ d^4y Q_χ(x,y) ω(y), should be nondegenerate.
Thus, the terms of (<ref>)
W^conf[ g ] = Γ_A[ g̅ ] + 1/32π^2(γ/6 + β/9)∫ d^4x √(g̅) R̅^2
taken at g̅_μν[ g ] can be considered as an irrelevant Weyl invariant integration “constant”, while the rest of the terms can be identified with the anomaly action after the substitution of σ=Σ_χ[ g ]. This set of anomaly actions Γ_A[ g ]≡Γ_χ[ g ] parameterized and labelled by conformal gauge conditions χ reads as
Γ_χ[ g ]= 1/16π^2∫ d^4x √(g) {(α C^2 + β E_4) Σ_χ
- 2β Σ_χΔ_4Σ_χ} - 1/32π^2(γ/6 + β/9)∫ d^4x √(g)R^2.
The difference between various members of this set is, of course, a Weyl invariant functional. For two arbitrary conformal gauges one has
Γ_χ_1 - Γ_χ_2 = 1/16π^2∫ d^4x √(g) (Σ_χ_1-Σ_χ_2)
×[α C^2 + β E_4 - 2βΔ_4(Σ_χ_1 + Σ_χ_2)].
Conformal variation of this expression is vanishing, because of the transformation law (<ref>) for Σ_χ_1,2, Weyl invariance of the density √(g) C^2 and the relation (<ref>) which in the infinitesimal form reads as
δ_σ[√(g) E_4] = 4√(g) Δ_4σ,
so that using all the above properties δ_σ(Γ_χ_1 - Γ_χ_2)=0.
Note that with our definition of the anomaly action (<ref>) the way it enters the full quantum action can be represented as
Γ[ g ] = Γ_χ[ g ] + Γ[ g̅ ] + 1/32π^2(γ/6 + β/9) ∫ d^4x√(g̅) R̅^2,
where g̅_μν[ g ] = e^-2Σ_χ[ g ]g_μν
§.§ Riegert–Fradkin–Tseytlin gauge
An obvious choice of the conformal gauge associated with the Gauss–Bonnet density and the Branson curvature is the Riegert–Fradkin–Tseytlin gauge
χ_RFT[ g̅ ] ≡ Ē_4 = 0.
It can be imposed for topologically simple spacetime manifolds with a vanishing bulk part of the Euler characteristics (see Eq.(<ref>) and footnote <ref> below), to which in particular belongs asymptotically flat spacetime to be mainly considered throughout the paper. The advantage of this gauge is that it is exactly solvable due to the transformation law for the Branson curvature (<ref>). Applying this gauge and using Eq. (<ref>) we obtain a linear equation on Σ_RFT which has a solution in terms of the inverse Paneitz operator
Σ_RFT = 1/4 · 1/Δ_4 E_4.
Formally substituting this expression to (<ref>) we obtain exactly the RFT action (<ref>).
This RFT action and the inverse Paneitz operator are well defined and exist in asymptotically flat spacetime under Dirichlet boundary conditions at infinity when treated within perturbation theory in powers of the curvatures whose collection is denoted below as ℛ. Indeed, in this case
1/Δ_4 = 1/□^2 + O(ℛ),
and this operator works well when it is applied to functions of the Branson curvature type ∼ E_4. Because of the double-pole nature of the operator 1/□^2 its action on generic functions may be badly defined due to infrared divergences, but when the function has a total derivative structure it generates, when acted upon by 1/□^2, a well defined multipole expansion valid in four dimensions at spacetime infinity <cit.>.[
As discussed in <cit.>, the operator 1/□^n in D-dimensional space with D<2n is ill defined unless the functions it acts upon are of the form ∂_α_1...∂_α_m j(x), m=2n-D+1, with the function j(x) having an asymptotic behavior j(x)=O(1/|x|^D), |x|→∞. This property can be explained by the fact that in the multipole expansion of (1/□)∂_α_1...∂_α_m j(x) the first few multipoles vanish, which improves the fall-off properties of the result at infinity and makes possible a repeated action by 1/□.]
But the Gauss–Bonnet density and √(g)□R are both locally a total derivative, which makes 1/Δ_4 well defined in the expression (<ref>) for Σ_RFT. This in fact implies the invertibility of the Faddeev–Popov operator in this gauge, which up to a coefficient coincides with the Paneitz operator, Q_RFT = 4Δ_4, and thus guarantees local uniqueness of the conformal gauge fixing procedure.
Moreover, the above observation serves as a repudiation of the harmful role of double poles in the RFT action that was claimed in <cit.>. Absence of infrared dangerous double poles is explicit in the lowest order of the curvature expansion for Σ_RFT which reads
Σ_RFT = -(1/6□) R + O(ℛ^2),
in view of the fact that the Gauss–Bonnet density is quadratic in the curvature, √(g)E = O(ℛ^2). Higher orders of this expansion are also safe because of the total derivative nature of √(g) E. Regarding the lowest order quadratic in curvature part, with the above approximation for Σ_RFT it equals
Γ_RFT[ g ] = -γ/192π^2∫ d^4x√(g) R^2 + O(ℛ^3),
because all the terms depending on the parameter β completely cancel out, and what remains coincides with the last quadratic term of (<ref>). This coincidence fully matches with the linear in curvature part of the trace anomaly (<ref>) (its γ-term) generated by the quadratic action (<ref>). Indeed, the conformal transformation of its nonlocal Weyl term contributes only to the O(ℛ^2) part of the anomaly due to the fact that only its form factor ln(-□/μ^2) is not Weyl invariant, and the whole γ-term of the anomaly entirely comes from the R^2-part of (<ref>).
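A short worked check of this cancellation, keeping only terms quadratic in the curvature (so that Δ_4 ≈ □^2, E_4 ≈ -(2/3)□R, and the αC^2 cross term is already cubic), with one integration by parts in the first step:

```latex
\Gamma^{(2)}_{\rm RFT}
= \frac{1}{64\pi^2}\,\frac{\beta}{2}\int d^4x\,\sqrt{g}\,
  \Big(\tfrac{2}{3}\Box R\Big)\frac{1}{\Box^2}\Big(\tfrac{2}{3}\Box R\Big)
  -\frac{1}{32\pi^2}\Big(\frac{\gamma}{6}+\frac{\beta}{9}\Big)\int d^4x\,\sqrt{g}\,R^2
= \Big(\frac{\beta}{288\pi^2}-\frac{\gamma}{192\pi^2}-\frac{\beta}{288\pi^2}\Big)\int d^4x\,\sqrt{g}\,R^2
= -\frac{\gamma}{192\pi^2}\int d^4x\,\sqrt{g}\,R^2 .
```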
§.§ Fradkin-Vilkovisky gauge
Another conformal gauge arises in context of conformal off-shell extension of Einstein gravity suggested in <cit.> and corresponds to the 4-dimensional version of the Yamabe problem. The representative of the conformal group orbit is chosen to be the metric with a vanishing scalar curvature
χ_FV[ g̅ ] = R̅,
which implies a nonlinear but still explicitly solvable equation for Σ_FV,
R[e^-2Σ_FV g_μν] = e^3Σ_FV(R - 6□) e^-Σ_FV = 0.
This solution reads
Σ_FV = -ln(1 + 1/6 · 1/(□ - R/6) R),
lim_|x|→∞ e^-Σ_FV = 1,
in terms of the inverse of the conformal second order operator □ - 1/6 R subject to zero boundary conditions at infinity. This inverse operator also admits covariant curvature expansion and in the lowest order yields the function Σ_FV coinciding with that of the RFT gauge (<ref>),
Σ_FV = Σ_RFT + O(ℛ^2),
and, therefore, generates in the quadratic order the same expression for the anomaly action
Γ_FV = Γ_RFT + O(ℛ^3).
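A one-line check of the lowest-order agreement behind these relations, expanding the logarithm in (<ref>) and using 1/(□ - R/6) = 1/□ + O(ℛ):

```latex
\Sigma_{\rm FV}
= -\ln\Big(1+\frac{1}{6}\,\frac{1}{\Box - R/6}\,R\Big)
= -\frac{1}{6}\,\frac{1}{\Box}\,R + O(\mathcal{R}^2)
= \Sigma_{\rm RFT} + O(\mathcal{R}^2).
```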
Using Eqs. (<ref>) and (<ref>) it is easy to see that the difference between RFT and FV actions is given by the exact expression
Γ_RFT - Γ_FV =
1/16π^2∫ d^4x √(g)(Σ_RFT - Σ_FV)
×[α C^2 + 2β Δ_4 (Σ_RFT - Σ_FV)],
bilinear in the local Weyl squared term and conformally invariant nonlocal functional
Σ_RFT - Σ_FV = 1/4 · 1/Δ_4 E_4
+ ln(1 + 1/6 · 1/(□ - R/6) R) = O(ℛ^2).
Therefore within perturbation theory these two actions remain coinciding even in the cubic order and become different only starting from the fourth order in the curvature.
Perturbatively both terms of (<ref>) produce similar nonlocal structures of tree-like nature, that is the terms characteristic of the tree-level approximation in field theory. Such terms are composed of the powers of inverse d'Alembertians acting on the curvature tensor structures or on the products of similar nonlocal tensor structures built according to the same pattern. However, taken separately as exact entities they have essentially different types of nonlocality. RFT action formalism involves the Green's function of the fourth order Paneitz operator, whereas the FV version of the action is based on the Green's function of the second order operator □ - 1/6 R. Both operators are conformally covariant, but the Weyl transformation of □ - 1/6 R is different from (<ref>)
□ - 1/6 R = e^-3σ (□̅ - 1/6 R̅) e^σ, g_μν = e^2σg̅_μν.
Moreover, FV action formalism involves a special logarithmic nonlinearity absent in RFT gauge fixing. The action of the Paneitz operator derivatives in (<ref>) can destroy this logarithmic structure, but the __ FVC^2-term in __ FV still contains it intact.
A further comparison of the RFT and FV actions can be done along the lines of their “naturalness”. RFT gauge (<ref>) is based on structures organically belonging to the conformal anomaly formalism in the sense that it involves the same fundamental objects—the Branson curvature _4 and the relevant Paneitz operator Δ_4 which are immanently present in the flow of the anomalous action along the conformal group orbit (<ref>). One could even interpret this gauge as the one providing the extremum of β-terms in this expression with respect to the variation of the orbit parameter σ. This interpretation is, however, erroneous because g_μν, g̅_μν and σ cannot be treated as independent variables in Eq. (<ref>).
On the contrary, FV gauge (<ref>) uses a somewhat extraneous entity—the scalar curvature—which is singled out only by the fact that it turns out to be the bearer of the metric conformal mode. As the result the advantage of FV gauge is that it does not involve higher than second order derivatives and does not produce double pole nonlocalities. Another advantage is that the equation (<ref>) disentangling the FV anomaly action from the full effective action becomes in view of R̅=0 much simpler
Γ[ g ] = Γ_FV[ g ] + Γ[ g̅ ] |_g̅_μν[ g ],
where g̅_μν[ g ] = e^-2Σ_FV[ g ] g_μν, which is obviously consistent with the fact that Γ_FV[ g̅ ]=0 because Σ_FV[ g̅ ]≡0.
As compared to the FV version, among technical disadvantages of the RFT gauge and the action is the presence of fourth order derivatives of the Paneitz operator. Due to this the RFT version turns out to be vulnerable from the viewpoint of possible generalizations. For example, a modification of the gauge (<ref>) by the additional Weyl squared term, χ__ RFT→χ__ RFT+aC^2 would not work, because the relevant modification __ RFT→__ RFT+a (2Δ_4)^-1C^2 is badly defined for the reasons described above in the footnote <ref>—the additional term should have a total derivative structure.
The generalization to spacetimes with nontrivial topology is also not straightforward, because the condition (<ref>) should not contradict nonvanishing Euler number of the manifold, which for compact manifolds without a boundary reads e_E=132π^2∫ d^4x√(g) E(x). Say, for a compact manifold of a finite volume V=∫ d^4x √(g) the gauge (<ref>) can be chosen to be
χ(g̅) = √(g̅) (E̅
-2/3R̅
-32π^2e_E/V̅),
but this leads to a nonlinear integro-differential equation for the relevant
4√(g)Δ_4 = √(g)(E-2/3 R-32π^2 e^-4/⟨ e^-4 ⟩e_E/V),
⟨ e^-4 ⟩ ≡1/V∫ d^4x √(g) e^-4,
which apparently can be solved analytically only by perturbations in e_E/V.
Unless stated otherwise, below we consider asymptotically flat spacetime with a trivial topology, whose Euler characteristics should be modified by the boundary term. For generic 4-dimensional manifolds with a smooth boundary it reads
e_E=132π^2(∫_ M d^4x√(g) E(x)+∫_∂ M d^3x√(γ)(x)),
where γ=γ_ab and γ_ab is the induced metric on ∂ M. For asymptotically flat case due to the contribution of ∂ M at infinity | x |→∞ it equals 1, so that everywhere in what follows the bulk part of the Euler characteristics is 132π^2∫ d^4x√(g) E(x)≡ e'_E=e_E-1=0.[I am grateful for this observation to M.Duff. Explicit and simple expression for the boundary term of the Euler characteristics in the 4-dimensional case can be found in <cit.>, =14R_a⊥ b⊥K^ab+16 K^a_b, where K_ab=∇_a n_b is the extrinsic curvature of the boundary, and ⊥ denotes the projection on the outward pointing normal vector n^μ. The last term in exactly reproduces the value of the Euler number e_E=1 for flat and asymptotically flat spaces <cit.>.]
§ CONFORMAL ANOMALY AND COVARIANT CURVATURE EXPANSION
Despite the diversity of nonlocal structures of RFT and FV versions of anomaly action, neither of them seem to appear in conventional perturbation theory for quantum effective action. The covariant form of this perturbation theory in curved spacetime (<ref>) was pioneered in <cit.>, but its logarithmic nonlocal formfactor did not resemble the nonlocal operators of the RFT action (<ref>). Here we show how in spite of these discrepancies the anomaly action originates from covariant perturbation theory of <cit.>.
This perturbation theory arose as a concrete implementation of the ideas of <cit.> as an expansion in powers of covariant tensors of spacetime and fibre bundle curvatures and other covariant background field objects. This expansion is completely equivalent to standard Feynman diagrammatic technique and represents its resummation converting the original perturbation series in noncovariant odjects, like matter and metric field perturbations on top of flat and empty spacetime background, into the series in powers of covariant fields strengths denoted collectively below by and including spacetime and fibre bundle curvature.
To be more specific, consider the theory with the inverse propagator on top of the nontrivial field background F̂(∇)=F^A_B(∇), hat denoting the matrix structure of the operator acting in the space of fields φ=φ^A(x) with a generic spin-tensor index A and ∇=∇_μ denoting the covariant derivative with respect to the corresponding fibre bundle connection,
F̂(∇) = □ + P̂ - 1̂/6 R, □ = g^μν∇_μ∇_ν.
This operator is characterized by the “curvatures”—metric Riemann tensor with its Ricci contractions, fibre bundle curvature R̂_μν determining the commutator of covariant derivatives, [∇_μ,∇_ν] φ
=R̂_μν φ, and the potential term P̂ (the term -1̂6 R is disentangled from the operator potential for reasons of convenience),
=(R^μ_ναβ, R_μν, R, R̂_μν, P̂).
In covariant perturbation theory the one-loop effective action gets expanded in powers of these curvatures
= 1/2 Trln F(∇) = local power div_0+_1
+ _2+_3 + O(^4),
where _n∼^n. Within dimensional regularization of 2ω-dimensional spacetime, ω→ 2, the zeroth and first order terms of the expansion represent pure power divergences (note that we consider the case of a massless theory, or the theory where the mass matrix is included in the potential term P̂ and treated by perturbations), so that these two terms are annihilated by the regularization, while the second order term is given by the expression <cit.>
^(2)_ dim reg = -Γ(2-ω)Γ(ω+1)Γ(ω-1)/2(4π)^ωΓ(2ω+2) μ^4-2ω
×∫ dx √(g) tr {R_μν(-)^ω-2R^μν1̂
-1/18(4-ω)(ω+1) R(-)^ω-2R 1̂
-2/3(2-ω)(2ω+1) P̂(-)^ω-2R
+2(4ω^2-1) P̂(-)^ω-2P̂
+(2ω+1) R̂_μν(-)^ω-2R̂^μν},
where ω=d2→ 2. Here tr denotes the matrix trace and the concrete coefficients implement the originally conjectured structure of dimensionally regularized effective action Lagrangian, (-)^ω-2, that was blueprinted in <cit.>. What is important and should be especially emphasized is that =g^μν∇_μ∇_ν means here the full covariant d'Alembertian acting on a respective scalar R, tensor R_μν or spintensor R̂_μν and P̂ objects.
For brevity we will consider the case of a single conformal scalar field with 1̂=1, P̂=0, R̂_μν=0 and the following values of the trace anomaly coefficients[The coefficients have the opposite sign to those of b=-α/16π^2 and b'=-β/16π^2 in <cit.>, because in our case the stress tensor is defined with respect to the Euclidean effective action =-i_L in contrast to the definition of T^μν=2g^-1/2δ_L/δ g_μν in the Lorentzian signature spacetime of <cit.>. Comparison with <cit.> should also take into account another sign of the stress tensor defined by the variation with respect to the contravariant metric.]
α=-1/120, β=1/360,
γ=-1/180,
for which the action (<ref>) takes the form—a particular case of (<ref>),
Γ^(2)_ren = 1/32π^2∫ dx √(g) {1/60[R_μν γ(-□) R^μν
- 1/3 R γ(-□) R] + R^2/1080}
= 1/32π^2∫ dx √(g) {1/120 C_μναβ γ(-□) C^μναβ
+ R^2/1080} + O(ℛ^3).
Here γ(-□) is the nonlocal formfactor (in the minimal subtraction scheme with ln(4π) and Euler constants absorbed in μ)
γ(-□) = ln(-□/μ^2) - 16/15,
and the transition to the last line is valid up to the higher order terms in curvature and based on the nonlocal generalization of the identity
∫ d^4x √(g) C^2 = 2∫ d^4x √(g) (R_μνR^μν - 1/3 R^2)
derived in <cit.> by integration by parts and use of the nonlocal representation of the Riemann tensor in terms of the Ricci one (see footnote <ref> below).
The first term of this action is obviously conformal invariant in quadratic order, so that the linear in curvature part of the anomaly originates from the last term which is the RFT (or FV) action (<ref>) in the quadratic approximation with γ=-1/180. Thus, the RFT or FV action is fully recovered in this approximation from perturbation theory and, as expected, turns out to be local.
§.§ Cubic order
Quadratic order of the covariant curvature expansion is, in fact, a trivial generalization of the flat space expressions for self-energy operators of Feynman diagrammatic technique, because ln(-/μ^2) is just a straightforward replacement of the typical momentum space formfactor ln(p^2/μ^2) by its position space version. At higher orders the situation becomes much more complicated and usually represented in terms of correlators of stress-tensor and other observables, written down in momentum space representation, see <cit.> for the treatment of generic conformal field theories. These correlators are, of course, contained in the effective action expanded in curvatures which, for reasons of general covariance, we prefer to consider in coordinate representation.
In this representation the effective action becomes for each order N in the curvature a sum of nonlocal monomials
∫ d^4x_1⋯ d^4x_N F(x_1,…,x_N)∇...∇(x_1)...(x_N)
with nonlocal multiple-point coefficients and covariant derivatives somehow acting on the product of curvatures at their various points. The absence of a convenient and generally covariant momentum space representation makes us work in the coordinate representation and invent a special language which simplifies the formalism and makes it manageable <cit.>. This language is based on the operator representation of nonlocal formfactors,
F(x_1,…,x_N) = (∇_1,…,∇_N)
×δ(x_1,x_2) δ(x_1,x_3)⋯δ(x_1,x_N),
where (∇_1,…,∇_N) is the operator valued function of N independent covariant derivatives such that each ∇_i is acting on its own x_i. This allows one to write the orders of perturbation theory as
^(N) = 1/2(4π)^2∫ d^4x √(g)∑_M _M(∇_1,…,∇_N)
× I_M(x_1,…, x_N) |_{x}=x,
where summation runs over all invariant monomials in curvatures of a given n-th order
I_M(x_1,…,x_N) ∼∇⋯∇(x_1)⋯(x_N)
and after the action of all independent derivatives on their arguments all these arguments {x}=(x_1,… x_N) have to be identified.
In the cubic order for the full set of curvatures (<ref>) there are 29 such invariant structures built of these curvatures and their covariant derivatives with all indices fully contracted with each other. Moreover, in view of the scalar (no free indices) nature of the formfactors and the formal identity ∇_1+∇_2+∇_3=0 (reflecting the possibility of integration by parts without surface terms, which is a counterpart to the momentum conservation in Feynman diagrams) the formfactors of ^(3) can be written down as functions of three d'Alembertians _1, _2 and _3 independently acting on three arguments of I_M(x_1,x_2,x_3). Thus, cubic order reads as
^(3) = 1/2(4π)^2∫ dx √(g)∑^29_M=1_M(_1,_2,_3)
× I_M(x_1,x_2,x_3) |_{x}=x.
The list of cubic invariants and their formfactors is presented in <cit.>. It is very long and, as its details are not necessary for our purposes, we will not fully present it here. We only give the general structure of the nonlocal formfactors of these invariants. It reads as a sum of three different groups of terms
_M(□_1,□_2,□_3) = A_M (□_1,□_2,□_3)
+∑_1≤ i<k≤ 3D^ik_M/(□_i-□_k) ln(□_i/□_k) + B_M.
Here (□_1,□_2,□_3) is the fundamental cubic formfactor corresponding to the triangular Feynman graph of massless theory with unit vertices <cit.>,
(□_1,□_2,□_3)
= ∫_α≥ 0d^3α δ(1-α_1-α_2-α_3)/[α_1α_2(-□_3) + α_1α_3(-□_2) + α_2α_3(-□_1)],
which cannot be reduced to an elementary function. The operator-valued coefficients A_M, B_M and D_M^ik are rational functions of the three □-arguments with a polynomial numerator P(□_1,□_2,□_3) and the denominator containing, together with the product □_1□_2□_3, also the powers of a special quadratic form D of these arguments,
A_M, D^ik_M, B_M ∼P(□_1,□_2,□_3)/(□_1□_2□_3 D^L), L≤ 6,
D = □_1^2+□_2^2+□_3^2 - 2□_1□_2 - 2□_1□_3 - 2□_2□_3.
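Although this parametric integral cannot be reduced to elementary functions, it is easy to evaluate numerically for Euclidean (positive) values of the arguments -□_i→ x_i, which is often all one needs to get a feel for its behavior. The following sketch (the function name and the sample values are ours, not taken from <cit.>) integrates over the α-simplex and checks the homogeneity of degree -1 in the arguments, which is manifest in the parametric representation:

    import numpy as np
    from scipy.integrate import dblquad

    def triangle_formfactor(x1, x2, x3):
        # integral over alpha_i >= 0 with delta(1 - alpha_1 - alpha_2 - alpha_3) of
        # 1 / (a1 a2 x3 + a1 a3 x2 + a2 a3 x1), with x_i standing for -Box_i > 0
        def integrand(a2, a1):
            a3 = 1.0 - a1 - a2
            return 1.0 / (a1 * a2 * x3 + a1 * a3 * x2 + a2 * a3 * x1)
        val, _ = dblquad(integrand, 0.0, 1.0, lambda a1: 0.0, lambda a1: 1.0 - a1)
        return val

    x = np.array([1.0, 2.0, 3.0])
    lam = 10.0
    print(triangle_formfactor(*x))                 # reference value
    print(lam * triangle_formfactor(*(lam * x)))   # the same number: degree -1 homogeneity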
In this cubic order of the curvature expansion the conformal anomaly (<ref>), which is quadratic in curvatures, was explicitly derived by the direct variation of the metric in <cit.>. Though this derivation has demonstrated nontrivial localization of the nonlocal terms under straightforward tracing the metric variational derivative, it still remained rather technical and not very illuminating because it has not revealed the anomalous part of the action. It turns out, however, that the transition to another basis of curvature invariants, suggested in <cit.>, explicitly disentangles this part.
§.§ Conformal resummation: Fradkin–Vilkovisky anomaly action
The recovery of the anomaly part of the action and its conformal invariant part is based on a simple idea that the latter should consist of the series of Weyl invariant structures. The construction of Weyl invariants can be done by the gauge fixing procedure of the above type—choosing the representative metric on the group orbit by imposing the conformal gauge. Obviously the set of invariants surviving after imposing this gauge will be minimal if the gauge explicitly annihilates the maximum number of invariants of the original full set. For this reason the FV gauge (<ref>) is much easier to use for the separation of the total set of invariants into the Weyl type ones and those which vanish when the gauge is enforced. As R is one of the curvatures in this set, the FV gauge is more useful for the purpose of such a separation than the RFT gauge (<ref>), which nonlinearly intertwines all the curvatures. Intuitively it is also clear because R, in contrast to C^α_βμν, is a bearer of the conformal mode.
In the purely metric sector such a separation is attained by the transition to the new curvature basis <cit.>,
( R^μ_ ναβ,R_μν,R ) → ( C^α_ βμν,R ),
via expressing Ricci tensor in terms of the Weyl tensor and the Ricci scalar[In fact, the original basis and the curvature expansion of <cit.> consisted of R_μν and R because in asymptotically flat Euclidean spacetime Riemann tensor can be expressed as nonlocal power series in the Ricci tensor,
R_αβμν=(1/□)(∇_μ∇_α R_νβ-∇_ν∇_α R_μβ) -(α↔β)+O(^2) — the corollary of the contracted Bianchi identity.]
∇^β∇^α C_αμβν=(1/2)□ R_μν-1/6∇_μ∇_ν R
-(g_μν/12)□ R + O(^2).
This equation can be solved by iterations for Ricci tensor in terms of nonlocal series in powers of two objects—Ricci scalar R and the new traceless (and up to quadratic order transverse) tensor C_μν which is itself a nonlocal derivative of Weyl,
C_μν = (2/□)∇^β∇_α C^α_ μβν.
The resulting series begins with
R_μν = C_μν + 1/3∇_μ∇_ν(1/□)R + 1/6 g_μνR + O(^2).
Effective action reexpansion implies the transition from I_M(x_1,…, x_n) to a new basis of invariants
Ĩ_M(x_1,...x_n) ∼∇...∇(x_1)...(x_n),
which can be separated in the set of monomials I_C(x_1,…, x_n) involving only C_μν and the set of monomials I_R(x_1,…, x_n) containing at least one scalar curvature factor,
I_C(x_1,...x_n) ∼ ∇...∇ C(x_1)... C(x_n),
I_R(x_1,...x_n) ∼ ∇...∇ R(x_1)C(x_2)... C(x_n),
∇...∇ R(x_1)R(x_2)C(x_3)... C(x_n), ...
Expansion in the new basis of invariants implies, of course, the transition to a new set of their relevant formfactors
_M(∇_1,...∇_n)→_C(∇_1,...∇_n), _R(∇_1,...∇_n),
and the new expansion takes the form
=W+_R,
where W is the Weyl and _R is the mixed Weyl–Ricci scalar parts of the whole expansion, which we write in abbreviated form (omitting multiple spacetime arguments and the operation of equating them)
W = 1/32π^2∫ d^4x √(g)∑_n,C_C^(n)I_C^(n),
_R = 1/32π^2∫ d^4x √(g)∑_n,R_R^(n)I_R^(n).
Note that W and its Weyl basis invariants are not Weyl invariant, because apart from Weyl tensors they contain covariant derivatives and nontrivial formfactors which do not possess conformal invariance properties.
The main statement on the conformal decomposition of the effective action of <cit.> is that
[ g ] = __ FV[ g ]
+ W[ g̅ ] |_ g̅_μν = e^-2__ FV[ g ] g_μν,
where __ FV[ g ] is exactly the FV anomaly action introduced above[One can check that the last four lines of Eq. (24) in <cit.> form the exact expression for __ FV[ g ] by taking into account that the function Z in this equation coincides with -__ FV and satisfies the equation □Z + (1/2)(∇ Z)^2 = (1/3) R.]. The conformally invariant part is obtained by the “conformization” of W, while the rest of the effective action is exhausted by the Fradkin-Vilkovisky anomaly action. The invariant meaning of this representation is that the Ricci part of the full action is not independent, but fully determined by the anomaly and Weyl parts of the action. This representation looks like the realization of Eq. (<ref>) within perturbation theory in curvatures. This result is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the flat space perturbation expansion for the effective action with typical nonlocal logarithmic form factors of the form (<ref>). Note that these form factors do not contribute to the anomaly even though their coefficients are directly related to its expression (<ref>). Rather they become Weyl invariant under the substitution of g̅_μν as their functional argument.
Validity of the representation (<ref>) was checked in the cubic order approximation for the effective action in <cit.>. The transition to the new basis of invariants in the second order leads to (see the second line of Eq. (<ref>)),
W^(2)[ g ] = 1/32π^2∫ dx √(g)1/120 C_μναβ γ(-)C^μναβ,
^(2)_R[ g ] = 1/32π^2∫ dx √(g)1/1080R^2,
whereas in the third order it results in a great simplification of the “Ricci scalar” formfactors _R^(3) as compared to the original ones—they become much simpler and, moreover, in their expressions of the form (<ref>) the coefficients A_M, D^ik_M, B_M of (<ref>) completely lose the powers of the function D in the denominator. Thus, modulo the contributions of ln(□_i/□_k)/(□_i-□_k) the formfactors _R^(3) acquire the tree-level structure. The terms with these factors get, however, completely absorbed with accuracy O(^4) by the replacement W^(2)[ g_μν ]→ W^(2)[ g̅_μν ] in view of the following relation <cit.>
W^(2)[ g ] - W^(2)[ g̅ ]
∼∫ dx √(g) C_μναβ[ln(-□)-ln(-□̄)]C^μναβ
= ∫ dx √(g) ln(□_1/□_2)/(□_1-□_2)
[□_2-□̄_2] C_1 μναβ C^μναβ_2 + O(^4),
□_2-□̄_2∼_3+ O(^2),
where the right hand side is the set of relevant cubic order terms with the above factor acting on two Weyl tensors out of three curvatures in RCC-type invariants. What remains in the sector of cubic I^(3)_R-invariants is the set of tree-like nonlocal form factors which comprise the curvature expansion of FV action up to ^3 order inclusive. This observation done in <cit.> can be formalized as the following sequence of identical transformations
[ g ] = W^(2+3)[ g ] + ^(2+3)_R[ g ] + O(^4)
= W^(2+3)[ g̅ ]
+ [ ^(2+3)_R[ g ] +(W^(2)[ g ] - W^(2)[ g̅ ]) + O(^4) ]_^(2+3)__ FV + O(^4),
where the group of the last three terms forms Fradkin–Vilkovisky anomaly action expanded with ^3-accuracy. Explicitly the cubic part of __ FV for the model of a single conformal scalar field with (<ref>) reads <cit.>
^(3)__ FV = -1/32π^2∫ dx √(g){1/19440(2/_3 - _1/_2 _3) R_1 R_2 R_3
+ 1/1620 _2_3 C_1^αβ∇_α R_2 ∇_β R_3
+1/540(4/_2 - 1/_3 - 2 _1/_2_3 - _3/_1_2) C_1^μν C_2 μν R_3
+1/135(1/_1_2 - 2/_2_3) ∇^μ C_1^να∇_ν C_2 μαR_3
- 1/135 _1 _2_3∇_α∇_β C_1^μν∇_μ∇_ν C_2^αβ R_3}|_ {x}=x,
where C_μν is the “Weyl” part (<ref>) of Ricci tensor (<ref>)
§.§ The problem of double poles and global conformal transformations
The expression (<ref>) shows that in the cubic order the anomalous effective action is free from double pole nonlocal terms. For the FV action this is obviously true to all orders of the curvature expansion, since all its tree type nonlocalities originate from the Green's function of the conformal scalar operator □-(1/6) R. However, for the RFT action double poles formally appear starting from the fourth order in the curvature because the metric variation of _χ=_ RFT in (<ref>) leads to the action of the inverse Paneitz operator upon the square of the Weyl tensor C^2 = C_μναβC^μναβ due to a formal variational rule
∫ d^4x √(g) C^2δ_ RFT = ∫ d^4x √(g) (Δ_4^-1C^2)δ(…).
This operation is not well defined, because C^2 is not a total derivative and the repeated action of 1/ upon generic test functions in four dimensions leads to IR divergent integrals—see footnote <ref>. In the cubic order of _ RFT this problem does not arise because of the extra factor in R, as it was checked in <cit.> by explicit calculations of ⟨ TTT ⟩ correlators, but one is not granted to be free from this difficulty for higher order correlators.
In fact this is a typical situation of IR divergences in two dimensions, where the kernel of 1/□ has a logarithmic dependence at infinity, and the correlators of undifferentiated conformal fields ϕ are UV divergent, while the correlators ⟨∂ϕ(x)∂ϕ(y)⋯⟩ stay well defined. Apparently, the same property in four dimensions also underlies the absence of unitarity in dipole theories with 1/□^2-type propagators recently discussed in <cit.>. The mechanism of transition from operators to their derivatives in shift symmetric theories actually helps to justify the RFT action as a source of well defined stress tensor correlators and extend the validity of results in <cit.> to all higher orders.
This follows from the observation that the Paneitz operator reads
√(g)Δ_4 = ∂_μ[√(g)(∇^μ∇_ν + 2R^μν - 2/3Rg^μν)]∂_ν
and, therefore, perturbatively on the flat space background can be represented as
√(g)Δ_4 = ^2 + V, = δ^μν∂_μ∂_ν, V = ∂_μ V^μν∂_ν,
where the perturbation V = O() has a special form—another differential operator V^μν sandwiched between two derivatives with all derivatives acting to the right (which is indicated by the arrow). Within perturbation theory in powers of V the action of the inverse operator on a generic test function ψ—scalar density—could have been understood as the expansion
ϕ = 1/√(g)Δ_4ψ = ∑_n=0^∞(-1)^n/^2(V1/^2)^nψ
= ∑_n=0^∞(-1)^n/^2(∂_μ V^μν1/^2∂_ν)^nψ,
where we deliberately permuted the factors of ∂_ν and 1/^2 using their formal commutativity in order to provide the action of 1/^2 on the total derivative function. Thus all terms of this expansion except the first one become infrared finite. The first term (1/^2)ψ, however, makes this function ϕ ill defined. On the contrary, its derivative ∂_αϕ becomes consistent if one understands the first term of the expansion as (1/^2)∂_αψ, so that the prescription for the operation of ∂_α(1/√(g)Δ_4) on a generic non-derivative type test function reads as
∂_α1/√(g)Δ_4ψ=
∑_n=0^∞(-1)^n/^2∂_α(∂_μ V^μν1/^2∂_ν)^nψ.
With this prescription the term C^2_ RFT in the RFT action becomes perturbatively well defined to all orders of expansion. Indeed, this term with _ RFT given by (<ref>) and on account of the total derivative structure √(g)(E-(2/3)□ R)=∂_α E^α can be rewritten by integration by parts as
4∫ d^4x√(g) C^2_ RFT=-∫ d^4x √(g)E^α ∂_α1/√(g)Δ_4(√(g) C^2)
with the above prescription (<ref>). This confirms a well defined nature of all multiple point correlators of stress tensor generated by RFT action.
Finally, it is worth discussing the effective action behavior under global conformal transformations with σ_0= const. Higher order curvature terms of the effective action scale as negative powers of e^σ_0 and therefore are irrelevant in the IR limit. In <cit.> this was a main argument in favor of a dominant role of the Wess–Zumino action (<ref>) in this limit because Δ[g, σ] behaves linearly in σ_0 (or logarithmically in the distance). Indeed,
Δ[g, σ + σ_0] = Δ[g, σ]
+σ_0(γ/32π^2∫ d^4x √(g) C^2 + β e'_E),
where e'_E is the Euler characteristics of the manifold modulo its boundary contribution (see footnote <ref>). Note, however, that this behavior cannot be captured within the nonlocal RFT form of the anomaly action (<ref>) because it is valid only under Dirichlet boundary conditions for the Green's function of Δ_4 (which would be violated by the σ_0-shift). In other words, the expression (<ref>) lacks the contribution of the zero mode of the Paneitz operator, which, on the contrary, explicitly features in (<ref>). For compact manifolds with possibly nontrivial topology global Weyl transformations would not contradict boundary conditions, and these transformations will obviously show up in the generalized RFT gauge (<ref>) as an ambiguity of the solution of Eq. (<ref>), σ→σ+σ_0.
§ STRESS TENSOR IN CONFORMALLY RELATED SPACETIMES
Equations (<ref>) and (<ref>) show that the anomalous action makes sense as an object specifying the difference of effective actions on conformally related metrics and other fields. Outside of this context this action, being a subject of shifting by an arbitrary conformal invariant functional W^ conf[ g ], as in Eq. (<ref>), is not very instructive because such a shift can include essential physical information on conformally invariant degrees of freedom. Anomaly action _χ, or it would be better to say, the Wess–Zumino type action (<ref>)—the generating functional of _χ—is really useful in situations when the physics of a conformally related spacetime with the metric g̅_μν is fully known. Then the effective action at g_μν can be completely recovered from the knowledge of the Weyl anomaly.
The simplest situation belongs to the class of conformally flat spacetimes when g̅_μν can be associated with flat metric for which all the metric field invariants are vanishing and [ g̅ ] is either exactly zero or calculable for quantum matter fields in flat spacetime. In particular, the fundamental observable which can then be obtained is the UV renormalized expectation value of the stress tensor of classically conformally invariant fields,
√(g) ⟨
T^αβ⟩=2
δ_ ren/δ g_αβ
provided ⟨ T̅^αβ⟩=0 or known from flat space physics. Here we derive from (<ref>) the expression for the difference of (densitized) stress tensors √(g) ⟨ T^α_β⟩ - √(g̅) ⟨ T̅^α_β⟩, which for a conformally flat spacetime coincides with a well-known Brown–Cassidy expression <cit.> and generalizes it to the case of a nonvanishing Weyl tensor.
§.§ Conformal anomaly from the divergent part of the effective action
To derive the behavior of the renormalized stress tensor on the conformal group orbit we, first, have to trace the origin of conformal anomaly as the result of subtracting UV divergences from covariantly regularized effective action, _ ren=_ reg-_∞. In dimensional regularization, _ reg=^(d), these divergences are given by
_∞ = -1/16π^2ϵ∫ d^d x √(g) a_2
= 1/16π^2ϵ∫ d^d x √(g) (α ^(4)C^2+β ^(4)E ),
where ϵ = 4-d, ^(4)C^2 and ^(4)E are the four-dimensional invariants formally continued to d-dimensions and a_2 is the relevant second Schwinger–DeWitt coefficient of the corresponding heat kernel expansion for the inverse propagator of the theory <cit.>,
a_2 = -(α ^(4)C^2 +β ^(4)E+γ□ R),
^(4)C^2 = R_μναβ^2 - 2R_μν^2 + 1/3 R^2,
^(4)E = R_μναβ^2 - 4R_μν^2 + R^2.
This structure of a_2 follows from the local conformal invariance of the pole residue of _∞ at d=4 and associated with the integrability (or conformal Wess–Zumino) condition for a conformal anomaly. It includes the topological Gauss–Bonnet density √(g)E, Weyl tensor squared and the total derivative R terms.
Conformal anomaly arises as a contribution of the conformal transformation of the one-loop counterterm (<ref>) subtracted from the regularized effective action
√(g) ⟨
T^α_α⟩=-2g_αβδ_∞/δ g_αβ,
because the regularized (but not yet renormalized by counterterm subtracting) action _ reg is assumed to be conformally invariant[Or the Weyl invariance violation of dimensionally regularized _ reg is proportional to (d-4)^2 as it happens for spin one case <cit.>, so that it does not contribute to the residue of the simple pole in dimensionality.]. The R term does not contribute to the divergences but it appears in the conformal anomaly in view of the conformal transformation of the Weyl squared term continued to d dimensions. Moreover, within the above subtraction scheme its coefficient γ in the anomaly turns out to be determined by the coefficient α of the Weyl term <cit.>.
Indeed, introduce conformally covariant Weyl tensor in d dimensions
^(d)C_μναβ = R_μναβ +2P_β[μg_ν]α - 2P_α[μg_ν]β,
^(d) C^μ_ναβ = ^(d)C̅^μ_ναβ,
which is written down in terms of the Schouten tensor
P_μν≡1/d-2(R_μν - Rg_μν/2(d-1)).
In view of the relation between the square of Weyl tensors ^(d)C^2≡^(d)C_μναβ^2 and C^2≡^(4)C^2_μναβ (both formally continued to d dimensions) <cit.>
^(4)C^2 = ^(d)C^2 - ϵ/2(E-C^2-(1/9) R^2) + O(ϵ^2)
one has
δ/δ g_μν∫ d^dx √(g) C^2 = δ/δ g_μν∫ d^dx √(g) ^(d)C^2
+ϵ/2δ/δ g_μν∫ d^4 x √(g) (C^2+(1/9) R^2) + O(ϵ^2).
Then, since the tensor ^(d)C_μναβ is conformally covariant in any dimension, g_μν(δ/δ g_μν) ∫ d^dx √(g) ^(d)C^2 = -(ϵ/2)√(g) ^(d)C^2, we have
1/ϵ g_μνδ/δ g_μν∫ d^dx √(g)C^2 = -1/2√(g)(C^2+(2/3)□ R) +O(ϵ).
Using this in (<ref>) one recovers the C^2 and the R terms in the expression for the anomaly
√(g) ⟨ T^α_α⟩ = -1/16π^2√(g) a_2,
with the parameter γ related to the coefficient α of the Weyl squared term <cit.>
γ=2/3α.
This simple expression for the trace anomaly in terms of the second Schwinger–DeWitt coefficient also follows from the zeta-function regularization <cit.>.
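As a simple consistency check of this relation against the values quoted above for the single conformal scalar field, one finds
γ = 2α/3 = (2/3)×(-1/120) = -1/180,
which is precisely the value of γ listed there.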
The Gauss–Bonnet part of the anomaly follows from the conformal variation of the ^(4)E-term in the divergent part of the action. Just like R, as the residue of the pole in _∞ the integral of √(g)^(4)E at least naively does not contribute to the stress tensor, because in four dimensions this integral is a constant Euler characteristics of the manifold. But in a covariant renormalization procedure the coefficient of 1/ϵ in _∞ cannot be treated other than as a d-dimensional object, so that ∫ d^dx√(g)^(4)E is no longer a topological invariant, and its metric variation is nontrivial. Therefore, rewriting, similarly to (<ref>), the dimensionally continued Gauss–Bonnet density in terms of ^(d)C^2,
^(4)E = R_μναβ^2 - 4R_μν^2 + R^2
= ^(d)C^2 - (2-3ϵ)(R_μν^2 - (1/3) R^2)+O(ϵ^2),
one has
1/ϵδ/δ g_αβ∫ d^dx √(g) ^(4)E = -√(g)( (1/2)W^αβ+^(3)H^αβ
+2R_μνC^μανβ) + O(ϵ),
where the two new tensors arise
^(3)H^αβ = R^αμR^β_μ - 2/3RR^αβ - 1/2g^αβR_μν^2 + 1/4g^αβR^2,
W^αβ = lim_ϵ→ 01/ϵ(4 ^(d)C^α_μνλ^(d)C^βμνλ - g^αβ ^(d)C^2 ).
The limit to d=4 for the tensor W^αβ is regular here because at d=4 there is the important identity
4 ^(4)C^α_μνλ^(4)C^βμνλ= g^αβ^(4)C^2
—it can be proven by antisymmetrization over five indices in the four-dimensional spacetime <cit.>. Tensors ^(3)H^αβ and W^αβ have the following traces
^(3)H^α_α=(1/3)R^2 -R_μν^2=(1/2)(E-C^2), W_α^α = C^2.
Thus from (<ref>) and (<ref>) we have the relation
2/ϵg_αβδ/δ g_αβ∫ d^dx √(g) ^(4)E = -√(g)^(4)E + O(ϵ),
which recovers the contribution of E-term in the conformal anomaly (<ref>) with the expression (<ref>) for a_2.
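The four-dimensional identity used above is purely algebraic: it holds pointwise for any four-index tensor with the Weyl symmetries (antisymmetry within each index pair, pair exchange symmetry, the cyclic identity and full tracelessness), which makes it straightforward to verify numerically. The sketch below is ours; it uses the flat Euclidean metric, so index positions are immaterial, builds such a tensor from a random seed by projecting onto these symmetries and subtracting the traces, and then checks the identity:

    import numpy as np
    from itertools import permutations

    def perm_sign(p):
        # parity of a permutation via inversion count
        s, p = 1, list(p)
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    s = -s
        return s

    rng = np.random.default_rng(0)
    d = 4
    g = np.eye(d)                                   # flat Euclidean metric

    # random 4-index tensor projected onto the Riemann symmetries
    T = rng.standard_normal((d,) * 4)
    A = (T - T.transpose(1, 0, 2, 3) - T.transpose(0, 1, 3, 2) + T.transpose(1, 0, 3, 2)) / 4.0
    R = (A + A.transpose(2, 3, 0, 1)) / 2.0         # antisymmetric pairs, pair-exchange symmetric
    alt = sum(perm_sign(p) * R.transpose(p) for p in permutations(range(4))) / 24.0
    R = R - alt        # subtracting the totally antisymmetric part enforces the cyclic identity
    assert np.allclose(R + R.transpose(0, 3, 1, 2) + R.transpose(0, 2, 3, 1), 0.0)

    # remove the trace parts: the d = 4 Weyl-type tensor
    ric = np.einsum('abcd,ac->bd', R, g)
    S = np.einsum('bd,bd->', ric, g)
    C = (R
         - 0.5 * (np.einsum('ac,bd->abcd', g, ric) - np.einsum('ad,bc->abcd', g, ric)
                  - np.einsum('bc,ad->abcd', g, ric) + np.einsum('bd,ac->abcd', g, ric))
         + (S / 6.0) * (np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g)))
    assert np.allclose(np.einsum('abcd,ac->bd', C, g), 0.0)   # full tracelessness

    # the identity 4 C_{a mnl} C_b^{mnl} = g_{ab} C^2 in four dimensions
    lhs = 4.0 * np.einsum('amnl,bmnl->ab', C, C)
    rhs = np.einsum('abcd,abcd->', C, C) * g
    print(np.allclose(lhs, rhs))                    # True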
§.§ Minimal form of Wess-Zumino action and a-theorem
Of course there is a big ambiguity in the above analytic continuation of the coefficients relating 4-dimensional objects to their d-dimensional counterparts. This ambiguity reduces to the renormalization by finite 4-dimensional counterterms ∫ d^4x√(g) R_μναβ^2, ∫ d^4x√(g) R_μν^2 and ∫ d^4x√(g) R^2 among which in view of the total-derivative nature of the Gauss-Bonnet density only one counterterm can additionally break Weyl invariance and change the coefficient γ of the □R term in the conformal anomaly. This is because the combination ∫ d^4x √(g)(C^2-E)=2∫ d^4x √(g)(R_μν^2-(1/3) R^2) is Weyl invariant, and such a counterterm can be chosen as the square of the curvature scalar, satisfying
g_μνδ/δ g_μν∫ d^4x√(g) R^2
= -6√(g)□ R.
Therefore this finite local counterterm can be used to alter the coefficient γ and, in particular, put it to zero by a special finite renormalization which we will denote by a subscript Ren,
_ ren[ g ]→_ Ren[ g ]
≡_ ren[ g ] + γ/192π^2∫ d^4x
√(g) R^2.
Regularization and subtraction scheme dependence of γ-coefficient manifests itself in the violation of the relation (<ref>) for the dimensionally regularized electromagnetic vector field <cit.>, but ultimately does not change the physics of the theory because of the locality of the covariant counterterm ∫ d^4x√(g) R^2, whose subtraction point should be determined from the comparison with the observable value of its coupling constant. In the cosmological example considered below the above renormalization (<ref>) corresponds to fixing the coupling constant in the Starobinsky R^2-model <cit.>.
The renormalization (<ref>) has an important consequence – with γ=0 the terms with quartic derivatives of σ, contained in the combination β/16π^2∫ d^4x (4√(g̅) σΔ̅_4σ-1/9√(g) R^2) of (<ref>), completely cancel out, and the resulting minimal Wess-Zumino action does not acquire extra higher-derivative degrees of freedom,
_ Ren[ g ]-_ Ren[ g̅ ]
=α/16π^2∫ d^4x √(g̅) C̅_μναβ^2σ
+β/16π^2∫ d^4x √(g̅) {E̅ σ-4 (R̅^μν
-(1/2)g̅^μνR̅ ) ∂_μσ ∂_νσ
-4 □̅σ
(∇̅^μσ ∇̅_μσ)
-2 (∇̅^μσ ∇̅_μσ)^2}.
This minimal version of the action for the dilaton field σ was discussed in <cit.> and used in the derivation of the a-theorem in <cit.> – monotonically decreasing coefficient a=β/16π^2 in the RG flow of the theory from UV to IR domains. This theorem is based on the sign of the last quartic interaction term for this field, related to the cross section of the forward 2→ 2 dilaton scattering which should be positive in unitary theory, its unitarity being related to the absence of higher-derivative ghosts in (<ref>).
§.§ Renormalized stress tensors
The behavior of the stress tensor on the orbit of the conformal group can be obtained by using the commutativity of the following functional variations
[ g_μν(y)δ/δ g_μν(y),
g_βγ(x)δ/δ g_αγ(x)]=0,
which allows one to write
δ/δσ(y)√(g)⟨ T^α_β(x) ⟩ = 2g_βγ(x)δ/δ g_αγ(x)δ_ ren/δσ(y)|_g_μν = e^2σg̅_μν
= g_βγ(x)δ/δ g_αγ(x)√(g)(y)⟨ T^μ_μ(y)⟩|_g_μν = e^2σg̅_μν.
Bearing in mind that g_βγδ/δ g_αγ=
g̅_βγδ/δg̅_αγ at fixed σ and functionally integrating this relation over σ one has
√(g) ⟨ T^α_β⟩ - √(g̅) ⟨T̅^α_β⟩
= 2g̅_βγδ/δg̅_αγΔ[ g̅,σ ],
where Δ[ g̅,σ ] = _ ren-_ ren is given by (<ref>).
Before calculating this difference by the metric variation of Δ[g̅, σ] it is instructive to obtain it directly from the divergent part of the action as it was done in <cit.>. Note that _ ren[ g ]-_ ren[ g̅ ] = -(_∞[ g ]-_∞[ g̅ ]) because _ reg does not contribute to the anomaly (see footnote <ref>). Therefore,
√(g) ⟨ T^α_β⟩ |_ g̅^ g = -2 g_βγδ_∞/δ g_αγ |_ g̅^ g
To calculate the contribution of the ^(4)C^2-term in _∞ we rewrite it in terms of ^(d)C^2 and use Eq. (<ref>). This leads to the contribution of the first term of this equation
δ/δ g_μν∫ d^dx √(g) ^(d)C^2 = -ϵ/2√(g) W^μν-4√(g) ^(d)B^μν,
^(d)B^μν=(1/(d-2) R_αβ
+∇_(α∇_β))C^μανβ,
where the tensor W^μν is defined by Eq. (<ref>) and ^(d)B^μν is the d-dimensional Bach tensor. Assembling this with the second term of Eq. (<ref>) we get on the orbit of the conformal group
1/ϵg_βγδ/δ g_αγ∫ d^dx √(g) ^(4)C^2 |_ g̅^ g
= -√(g)[ 4/ϵ^(d)B^α_β
+1/18^(1)H^α_β ]_ g̅^ g + O(ϵ),
where the tensor ^(1)H^α_β is given by the equation
^(1)H^α_β = 1/√(g)g^αγδ/δ g^βγ∫ d^4x √(g)R^2
= -1/2δ^α_β R^2 + 2RR^α_β +2δ^α_β R - 2∇^α∇_β R,
and we took into account that the both tensor densities √(g) W^α_β and √(g) B^α_β in four dimensions are invariant on the conformal orbit. Outside of four dimensions the Bach tensor density transforms on this orbit as (here as above g_μν=e^2σg̅_μν)
√(g) ^(d)B^α_β |_g̅^g = -ϵ/2√(g̅)(R̅^μν+2∇̅^(μ∇̅^ν))(σC̅^α_μβν)
+O(ϵ^2),
which makes the first term on the right hand side of (<ref>) well defined at d→ 4. Note that the expression √(g̅)(R̅^μν+2∇̅^(μ∇̅^ν))(σC̅^α_μβν) treated as a functional of independent g̅_μν and σ is Weyl invariant under local conformal transformations of the barred metric. This can be easily inferred from the invariance of Eq.(<ref>) under the interchange g_μν↔g̅_μν and σ→ -σ or directly checking the conformal transformation of g̅_μν (with a fixed scalar σ).
The contribution of Gauss–Bonnet term to the stress tensor behavior on the conformal orbit is obtained from using (<ref>)–(<ref>). Collecting this contribution with the contribution (<ref>) of the Weyl tensor squared part we finally have
√(g) ⟨ T^α_β⟩|_ g̅^ g = -α/4π^2√(g̅) (R̅^μν +2∇̅^(μ∇̅^ν))(σC̅^α_μβν)
+1/8π^2√(g) [ β ^(3)H^α_β
+α/18 ^(1)H^α_β+2β R^μνC^α_μβν ]_ g̅^ g.
This is a generalization of the Brown–Cassidy formula to the case of a nonzero Weyl tensor. The first term of this expression is Weyl invariant in view of the above remark and can be represented by its unbarred version.
The check of consistency of this formula with the original expression for the conformal anomaly is trivial in view of ^(3)H^α_α=(E-C^2)/2, ^(1)H^α_α=6 R and tracelessness of the Weyl tensor,
√(g) ⟨ T^α_α⟩|_ g̅^ g = √(g)/16π^2[β E-β C^2+(2α/3)□ R]_g̅^g = -√(g) a_2/16π^2|_ g̅^ g,
where the last equality follows from the conformal invariance of the density √(g) C^2 and from the relation (<ref>) between the coefficients γ and α, α=(3/2)γ.
The recovery of (<ref>) from the direct variation of the Wess–Zumino action (<ref>) goes as follows. We use metric variational formulae
δ/δ g_αβ∫ d^4x √(g) C^2σ = -2√(g)(R_μν+2∇_(μ∇_ν)) (σ C^αμβν),
δ/δ g_αβ∫ d^4x √(g) _4σ = √(g) Δ^αβσ,
δ/δ g_αβ∫ d^4x √(g) φΔ_4σ=-√(g)/2 D^αβ[φ,σ],
which hold for generic scalar test functions σ and φ with the differential operator Δ^αβ acting on σ,
Δ_αβ = 1/3(g_αβ□-∇_α∇_β)□
+[ 2(g_αβP_μν - g_αμP_βν - g_ανP_βμ)+ 8/3g_μνP_αβ.
.+ 2Pg_αμg_βν - 5/3Pg_αβg_μν - 2W_αμβν]∇^μ∇^ν
+ ( g_αβg_μν - g_αμg_βν - g_ανg_βμ) (∇^μ P)∇^ν,
and the bilinear form D^αβ(φ,σ),
D_αβ[φ, σ] = -1/2g_αβφ σ - 2σ_αβφ
+ 2σ_αφ_β - 1/3 g_αβσ_μφ^μ - 2/3φ_μ(αβ)σ^μ
+ [ 2W_αμβν + 1/3(g_μνR_αβ - g_αμg_βνR)]φ^(μσ^ν)
+ 1/3( 4φ_αμσ^μ_β - g_αβφ_μνσ^μν) + ( φ⇔σ ),
where φ_α≡∇_αφ, σ_αβ≡∇_β∇_ασ, φ_αβγ≡∇_γ∇_β∇_αφ, etc. Note that the trace of Δ^αβ coincides with the Paneitz operator, g_αβΔ^αβ=Δ_4, which matches with the conformal variation (<ref>), and the bilinear form D^αβ(φ, σ) is traceless in view of conformal invariance of √(g)Δ_4.
Using these relations we get from (<ref>) and (<ref>)
√(g) ⟨ T^α_β⟩ |_ g̅^ g = -α/4π^2√(g)(R^μν +2∇^(μ∇^ν))(σ C^α_μβν)
+√(g)/8π^2(2βΔ^α_βσ
+β D^α_β[σ,σ]) + √(g)(γ/12
+β/18) ^(1)H^α_β |_ g̅^ g.
The term in the first line here coincides with its barred version in (<ref>)—this easily follows from the relation (<ref>) where the integrand can be identically replaced by the barred one. The γ/12 ^(1)H^α_β term here matches the α/18 ^(1)H^α_β term of (<ref>) in view of the relation α=(3/2)γ. And finally, the identity holds
√(g) [^(3)H^α_β
+(1/18)^(1)H^α_β +2 R^μνC^α_μβν]_ g̅^ g
=√(g)(
2Δ^α_βσ + D^α_β[σ,σ]),
which completely reconciles the two expressions (<ref>) and (<ref>) for the stress tensor behavior on the orbit of the conformal group.
§ CONFORMALLY FLAT SPACETIME
The generalization (<ref>) of Brown-Cassidy formula to the case of a nonvanishing Weyl tensor might be not very useful, because in the general case not much can be said about ⟨ T^α_β⟩ |_g̅. Therefore we will restrict ourselves with the case of the conformally flat spacetime for which the conformal transformation of the metric can lead to the metric g̅_μν of flat spacetime, where ⟨ T̅^α_β⟩ is either zero or can be obtained from flat space physics. Interestingly, in this case the parameter of the conformal transformation σ making this transition satisfies the equation
Δ_4 σ = 1/4_4
and in asymptotically flat case with Dirichle boundary conditions has a unique solution (<ref>), σ=_ RFT. This, apparently not very well known fact, can be proven by using the equation for the conformal transformation of the four-dimensional Schouten tensor (<ref>) (g_μν=e^2σg̅_μν)
P_μν-P̅_μν = -σ_μν -σ_μσ_ν + 1/2σ_ασ^α g_μν,
where σ_μ≡∇_μσ and σ_μν≡∇_ν∇_μσ. Assuming that g̅_μν is flat space metric with P̅_μν=0, differentiating twice and again using this relation to express P_μν in terms of the derivatives of σ one has
∇^μ∇^ν(P_μν+σ_μν
+σ_μσ_ν- 1/2σ_ασ^α g_μν)
=Δ_4σ-1/4_4 = 0,
whence it follows that the conformal invariant metric (<ref>) in the RFT gauge (<ref>) is actually the flat space one when the Weyl tensor is zero
R̅^α_ βμν=0, g̅_μν=e^-2_ RFT[ g ]g_μν|_ C_αβμν=0.
Note that g̅_μν here is not automatically diagonal unit matrix δ_μν, because this is the invariant statement which is valid in any coordinate system.
§.§ Anomaly driven cosmology
Applications of the conformal anomaly in the cosmological context have a long history, see for example <cit.>. In particular, cosmology with the Friedman–Robertson–Walker (FRW) metric represents the situation when the anomalous action Δ[g̅,σ] entirely determines the physics of the field model and via effective equations of motion produces a nontrivial back reaction of quantum matter on the dynamical metric background. The most interesting example is, perhaps, the case when [ g̅ ] in (<ref>) nontrivially contributes to this back reaction effect rather than just serves as an inert flat space background.
This is the spatially closed cosmology driven by a conformal field theory (CFT) from the initial state in the form of a special microcanonical density matrix, which was originally suggested in <cit.> and recently reviewed in <cit.>. With the density matrix defined as the projector on the space of solutions of the Wheeler–DeWitt equations <cit.> the statistical sum in this model has a representation of the Euclidean quantum gravity (EQG) path integral
Z = ∫ D[ g_μν,ϕ ] e^-S[ g_μν,ϕ ],
where integration runs over the metric g_μν and matter fields ϕ which are periodic on the Euclidean spacetime of topology S^1× S^3 with the time τ compactified to a circle S^1.
When the classical action S[ g_μν,ϕ ] is dominated by numerous CFT fields with their action S_CFT[ g_μν, ], the statistical sum can be approximated by the contribution of the saddle point of this integral. This is the extremum of the total action including the tree-level gravitational Einstein–Hilbert action S_EH[ g_μν] and the effective action [ g_μν] of these CFT fields[Disregarding the graviton loops can be justified by the domination of conformal fields outnumbering the metric, and retaining the Einstein–Hilbert term obviously follows from the fact that this term with renormalized gravitational and cosmological constants is anyway induced from the quantum conformal sector.],
_ tot[ g_μν] = S_EH[ g_μν] +[ g_μν],
e^-[ g_μν] = ∫ D e^-S_CFT[ g_μν, ].
Choosing as g_μν the FRW metric with the scale factor a(τ) and the lapse function N (^2_(3) is the metric of the 3-dimensional sphere of a unit radius),
ds^2=N^2dτ^2+a^2d^2_(3)=
a^2(τ)(dη^2+d^2_(3)),
one immediately finds that in terms of the conformal time variable η, related to the Euclidean time τ by the relation dη=dτ/a(τ), this metric is conformally equivalent to the metric g̅_μν≡ g_μν^ EU of the Einstein static universe with spatial sections—the 3-dimensional spheres of some constant radius a_0,
ds̅^2=a_0^2 (dη^2+d^2_(3))≡
g_μν^ EUdx^μ dx^ν,
ds^2=e^2σ ds̅^2, g_μν=e^2σ g_μν^ EU, σ=lna/a_0.
Therefore the CFT effective action expresses in terms of the same action on a static Einstein universe [ g_μν^ EU]≡_ EU and Wess–Zumino action (<ref>) with the above conformal parameter σ
[ g_μν]=
Δ[ g_μν^ EU,σ ]
+_EU.
The calculation of _EU is strongly facilitated by the static nature of the background, but it still yields a nontrivial result in view of compactification of time on S^1. To begin with, note that although g_μν^ EU explicitly depends on the size a_0 of S^3, the value of _EU is a_0-independent for a fixed period of the conformal time η=∮ dη. This follows from the invariance of the effective action under global conformal transformations (<ref>) for conformally flat spacetimes with zero bulk part of the Euler characteristics (which is the case of S^1× S^3). This also can be confirmed by using scaling properties of the conformal fields. Indeed, the energies of conformal quanta on a static spacetime scale as 1/a_0 and their Hamiltonian reads,
Ĥ=∑_ωω/a_0(â^†_ωâ_ω±1/2),
where summation runs over all quantum numbers (and spins) of the energies ω/a_0 of all field oscillator modes on a static 3-dimensional sphere of the radius a_0 and â^†_ω and â_ω are the relevant creation-annihilation operators (± signs correspond to bosons or fermions). The path integral over (anti)periodic conformal (fermion) boson fields with a period T=∮ dτ N on a static metric background is exactly calculable and equals the equilibrium statistical sum at the temperature 1/ T which expresses as a function of the conformal time period η= T/a_0
e^-_ EU=∫ D
e^-S_CFT[ g_μν^ EU, ]
= Tr e^- TĤ
=exp(-η E_ vac-F(η)).
Here F(η) is the free energy of the gas of conformal particles and E_ vac is a UV divergent Casimir energy which should be covariantly renormalized
F(η) = ∑_ω[± ln(1∓ e^-ωη) ],
E_ vac = (∑_ω± ω/2)_ ren.
Thus, the dependence on a_0 is absorbed into the dependence on η which should be fixed under the rescaling of a_0. Note that it is η that should be kept fixed under the global conformal transformation which simultaneously rescales the lapse function N and a_0 in the definition of the conformally invariant η=∮ dτ N/a_0.
Remarkably, the covariant renormalization of the vacuum Casimir energy E_ vac also follows from the behavior of the effective action on the orbit of the conformal group. The Einstein universe extending from -∞ to +∞ in η is mapped to flat space by the transition to the radial coordinate ρ
η↦ρ = a_0 e^η,
-∞<η<+∞,
0≤ρ<∞,
with the conformal relation between the two metrics
ds^2_EU=e^2σ ds_ flat^2, σ = -η=lna_0/ρ,
ds_ flat^2=dρ^2+ρ^2 d^2_(3).
For the vacuum state (the limit η→∞ and F(η)→ 0 in Eq. (<ref>)) _ EU→ E_ vacη. On the other hand, from Eq. (<ref>) with the above expression for σ
Δ[ g_ flat,σ ]=β/8π^2∫ d^4x√(g_ flat)(□_ flatσ)^2
-1/32π^2(γ/6 + β/9) ∫ d^4x√(g_EU)R^2_EU.
Bearing in mind that □_ flatσ=-2/ρ^2, ∫ d^4x√(g_ flat)↦ 2π^2∫ dρ ρ^3, R_EU=6/a_0^2 and ∫ d^4x√(g_EU)↦ 2π^2a_0^4∫ dη, one has
_ EU-_ flat=Δ[ g_ flat,σ ] =β∫dρ/ρ-
(3/8γ + β/4)∫ dη
=3/4 (β-γ/2)∫ dη.
Therefore, under an obvious assumption that _ flat=0 one has
E_ vac=3/4 (β-γ/2).
In other words, after covariant renormalization by covariant counterterms the Casimir energy gets the value compatible with the behavior of the renormalized effective action on the conformal group orbit (or with the Brown–Cassidy formula for the vacuum stress tensor). This compatibility was indeed checked by direct renormalization of the UV divergent sum over field modes in (<ref>) <cit.>.
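For the single conformal scalar field with the coefficients quoted above this covariantly renormalized value is easy to evaluate:
E_ vac = (3/4)(β-γ/2) = (3/4)(1/360+1/360) = 1/240,
whereas after the finite renormalization (<ref>) putting γ to zero, the choice adopted in the next subsection, it becomes E_ vac = 3β/4 = 1/480.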
Let us now turn to the contribution of the conformal transformation from the generic FRW metric to that of the static Einstein universe in (<ref>). To begin with we use the freedom of finite renormalization (<ref>) which reduces the theory to the case of anomaly (<ref>) with γ=0 and, in particular, renders E_ vac=(3/4)β. In the cosmological context this freedom corresponds to the adjustment of the coupling constant of the Starobinsky R^2-action <cit.> which plays an important role in inflation theory and the dark energy model. Then, with γ=0 and σ given by (<ref>) the Wess–Zumino term in (<ref>) takes the form <cit.>
_ Ren[ g ]-_ Ren[ g_ EU] = 3β/2∮ dτ N (a'^2/a - a'^4/6 a),
when written down in terms of the original FRW coordinates with the notation for the invariant time derivative a'=da/Ndτ. Note that the result is again independent of the constant a_0 because it contains only differentiated σ and, moreover, it does not involve higher order derivatives of a(τ). The last property is entirely due to the fact of γ being renormalized to zero and due to the cancellation of higher derivative terms in the minimal form of Wess-Zumino action (<ref>).
Now we assemble together the Einstein-Hilbert action (with the reduced Planck mass M_ P=1/√(8π G) and the cosmological constant ), the action on the Einstein universe space (<ref>) and (<ref>). This leads to the total effective action on the generic Euclidean FRW background periodic in Euclidean time with the period η measured in units of the conformal time
_ tot[ a,N ] = 6π^2 M_P^2∮ dτ N {-aa'^2
-a+Λ/3 a^3
+β/4π^2 M_P^2(a'^2/a -a'^4/6 a +1/2a)} + F(η),
η=∮dτ N/a.
Here the contribution of the conformal anomaly and Casimir energy (<ref>) (with γ=0) are both weighted by the parameter β of the topological term in the conformal anomaly. The free energy of the gas of conformal particles F(η) is a function of the effective (“comoving”) temperature of this gas – the inverse of the circumference η of the cosmological instanton (<ref>). Despite essentially non-stationary metric background this gas stays in equilibrium state because of scaling properties of its particles and produces back reaction on the Friedmann metric background.
Applications of the action (<ref>) have been considered in the number of papers <cit.> and recently reviewed in <cit.>. Physics of the CFT driven cosmology is entirely determined by this effective action and the effective (Euclidean) Friedmann equation. The latter follows from the action by varying the lapse N(τ) and expressing the Hubble factor in terms of the energy density. In cosmic type gauge N=1, ȧ=da/dτ, it reads
1/a^2-ȧ^2/a^2=ε/(3M_±^2(ε)),
ε=M_P^2Λ
+1/(2π^2 a^4)∑_ωω/(e^ηω-1),
M_±^2(ε)=(M_P^2/2)(1±√(1
-βε/(6π^2M_P^4)) ),
where the total energy density ε includes the cosmological constant contribution and the radiation density of conformal field modes distributed over Planckian spectrum with the comoving temperature 1/η. The nonlinear effect of the Weyl anomaly manifests itself in the effective Planck mass squared explicitly depending on ε which takes two possible values M_±^2(ε).[To avoid mixup of the signs in M_±^2 and sign factors associated with the statistics of conformal ω-modes we present here the radiation spectrum only for bosonic case.] These equations should be amended by the expression for the conformal time period that interpolates between the turning points of the solution with ȧ(τ)=0. Note that the right hand side of the Friedmann equation does not contain Casimir energy density – it turns out to be fully screened due to the dynamical effect of the Weyl anomaly. This is the result of the finite renormalization (<ref>) leading to a particular value of the anomaly coefficient of R, γ=0.
For the choice of + sign in M_±^2 the solutions of this quantum Friedmann equation turn out to be the so-called garlands – the cosmological instantons of S^1× S^3 topology, which have the periodic scale factor a(τ) oscillating on S^1 between maximal and minimal values a_± <cit.>. These instantons serve as initial conditions for the cosmological evolution in the physical Lorentzian spacetime. This evolution follows from a(τ) by the analytic continuation a_L(t)=a(τ_++it), (da_L/dt)^2=-ȧ^2, to the complex plane of the Euclidean time at the turning point with the maximal scale factor a_+=a(τ_+). It can incorporate a finite inflationary stage if the model is generalized to the case when a primordial cosmological constant Λ is replaced by the potential of the inflaton field ϕ, Λ→ V(ϕ)/M_P^2, staying in the slow-roll regime during the inflationary stage[Alternatively, the role of inflaton can be played by Ricci curvature in the Starobinsky R^2-model, the coupling of the R^2 term being subject to the renormalization respecting the zero value of α in the total Weyl anomaly <cit.>.] and decaying in the end of inflation by a usual exit scenario <cit.>. The energy scale of inflation – its Hubble parameter H∼√(Λ/3) turns out to be bounded from above by √(2)π M_P/√(β), so that to solve the problem of hierarchy between the Planck and inflation scales one needs β≫ 1 which matches with the previously adopted assumption that numerous conformal fields drastically outnumber all other fields and dominate over their loop corrections.
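A quick feel for these garland configurations can be obtained by solving the Euclidean Friedmann equation numerically. The following sketch is ours and deliberately simplified: it works in Planck units M_P=1, uses purely illustrative values of β, Λ and of the radiation constant, and replaces the self-consistent Planckian sum in ε by a fixed C/a^4 term, so no consistency in η is enforced. It locates the two turning points of the '+' branch and the conformal time period of the instanton:

    import numpy as np
    from scipy.optimize import brentq
    from scipy.integrate import quad

    beta, Lam, C = 3.0, 1.0, 1.0          # illustrative values only, M_P = 1

    def eps(a):                            # energy density: Lambda term plus a model radiation term
        return Lam + C / a**4

    def M2_plus(e):                        # effective Planck mass squared, '+' branch
        disc = 1.0 - beta * e / (6.0 * np.pi**2)
        return 0.5 * (1.0 + np.sqrt(max(disc, 0.0)))

    def adot2(a):                          # (da/dtau)^2 from the modified Friedmann equation
        e = eps(a)
        return 1.0 - a**2 * e / (3.0 * M2_plus(e))

    a_minus = brentq(adot2, 0.05, 1.0)     # smaller turning point
    a_plus = brentq(adot2, 1.0, 10.0)      # larger turning point

    # conformal time period eta = 2 * integral of da / (a * sqrt(adot2)) between the turning points
    eta, _ = quad(lambda a: 1.0 / (a * np.sqrt(adot2(a))),
                  a_minus * (1.0 + 1e-10), a_plus * (1.0 - 1e-10))
    eta *= 2.0
    print(f"a_- = {a_minus:.4f}, a_+ = {a_plus:.4f}, eta = {eta:.4f}")

For these parameter values the sub-Planckian bound βε ≤ 6π^2 M_P^4 is respected everywhere between the turning points, so the '+' branch is well defined along the whole trajectory.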
For the negative sign in M_±^2 the solutions represent vacuum S^4-instantons of the no-boundary type with the vanishing minimal value of the scale factor a_-=0. They correspond to the diverging η∼∫_0^a_+da/aȧ→∞ or zero temperature. These solutions, however, do not contribute to the statistical sum because of their infinitely positive action _ tot→+∞ — the quantum effect of the trace anomaly which flips the sign of the negative tree-level action of the Hartle-Hawking instantons <cit.> and sends it to +∞ <cit.>. Thus the CFT cosmology scenario is free from the infrared catastrophe of the no-boundary quantum state which would imply that the origin of an infinitely big Universe is infinitely more probable than that of a finite one.
§ RENORMALIZATION GROUP AND THE METAMORPHOSIS OF THE RUNNING SCALE
This section has essentially discussion nature and is associated with the covariant perturbation theory of the above type. One of the motivations for this discussion is that, in spite of a widespread concept of running cosmological and gravitational constants, which is especially popular within the asymptotic safety approach, there is a very profound and persuading criticism of this concept <cit.>. It is based on numerous arguments of the tadpole structure of the cosmological and Einstein terms, on concrete results for graviton scattering amplitudes <cit.> which cannot be interpreted in terms of a universal scaling of and G, etc.
At the same time in renormalizable gravity models with multiple couplings the solution of the full set of RG equations includes running cosmological and gravitational constants <cit.>. So the question arises how to interpret their running scale. Here is the attempt to do this in terms of the covariant curvature expansion developed in <cit.>.
We start with the classical action which is the sum of local curvature invariants of growing dimensionality (4+m) in units of the mass
S[ g_μν]=∑_m,N^(m)_N∫ d^4x √(g) ^(4+m)_N(x).
They are monomials of N-th order in curvature tensors which are acted upon by covariant derivatives
^(m)_N(x)=∇...∇_m-2N(x)...(x)^N,
dim ^(m)_N(x)
≡[ ^(m)_N(x) ]=m.
The curvature monomials enter the action with coupling constants ^(m)_N of the decreasing (with growing m) dimensionality
[ ^(m)_N ]=d-m, m=0,1, … .
Summation in (<ref>) can run over finite set of terms providing the renormalizability of the theory, or formally extended to the infinite set in the framework of generalized RG theory with infinite set of couplings {}=^(m)_N.
Within covariant perturbation theory the full metric is decomposed as a sum of the flat spacetime metric g̃_μν and the perturbation h_μν
g_μν=g̃_μν+h_μν,
so that each curvature invariant becomes expanded as an infinite series in powers of h_μν forming a new set of h-monomials on the flat space background
∫ d^4x √(g) ^(m)_N=∑_M=N^∞∫ d^4x √(g̃) I_M^(m)(h),
I_M^(m)(h)∝∇̃...∇̃_mh(x)...h(x)^M.
Then in the notations of the covariant perturbation theory the calculation of the renormalized effective action leads to the same sequence of monomials acted upon by the operator form factors _n^(i)({}, ∇̃_1,...∇̃_1) which make them nonlocal, {} denoting the full set of couplings (<ref>). Within dimensional regularization these renormalized coupling constants get rescaled by the normalization parameter μ and expressed in terms of their dimensionless analogues λ^(m)_N(μ)
^(m)_N=μ^d-mλ^(m)_N(μ),
and the perturbation theory form factors also express as the functions of dimensionless arguments
_M^(m)({},∇̃_1,...∇̃_M)=
μ^d-mγ_M^(m)({λ(μ)},∇̃_1/μ,...
∇̃_M/μ).
Correspondingly the effective action becomes
[ g_μν]=∑_(m)μ^d-m∑_M=0^∞∫ d^dx √(g̃)
×γ_M^(m)({λ(μ)},∇̃_1/μ,...
∇̃_M/μ)
I_M^(m)(h_1,h_2,...h_M) |_ {x}=x,
where I_M^(m)(h_1,h_2,...h_M) is the analogue of the invariant (<ref>) with split spacetime arguments. A typical assumption of the RG theory that the renormalized action is independent of the running scale then leads to the set of equations for λ^(m)_N(μ) with the beta functions following from the residues of spacetime dimension poles in the formfactors _M^(m)({λ(μ)},{∇̃/μ}),
μd/dμ[ g_μν]=0 →μd/dμλ^(m)_N(μ)=β^(m)_N(μ)
({λ(μ)}).
A critical step now consists in the choice of the running scale which could probe the high energy limit of the theory and embrace a simultaneous scaling of all formfactors and invariant monomials of (<ref>). Then the replacement of the parameter μ by this scale will identically bring the effective action to the form explicitly revealing its UV limit. The choice of this scaling object can be very different depending on the concrete physical setup. If the theory has a dimensional scalar field ϕ with a nonvanishing and slowly varying mean value it would be natural to identify RG normalization μ with ϕ. This would lead to the nontrivially “running” in ϕ of the cosmological and Einstein terms, Λ→Λ(ϕ) and G→ G(ϕ), (amended of course by a gradient expansion series in derivatives of ϕ), but of course these terms acquire the interpretation of the Coleman-Weinberg type potential and nonminimal coupling of ϕ to the scalar curvature.
We, however, are interested in the UV scaling of all derivatives ∇̃→∞, which in momentum space representation of scattering amplitudes is conventionally represented by the high energy Mandelstam invariants or some other combinations of external momenta. In the coordinate representation of the covariant perturbation theory of <cit.> the role of this scale should be played by some operator. So we suggest as a candidate for this object the following nonlocal operator D̃ which also formally tends to infinity in the limit of ∇̃→∞ and in fact embraces a simultaneous scaling of all invariant monomials in (<ref>),
D̃≡(-∑_N=1^∞□̃_N)^1/2, □̃_N≡g̃^μν∇̃_μ∇̃_ν.
Though being very formal, this operator is well defined in each N-th monomial order because it becomes truncated to the finite sum when acting on the monomial of N perturbations h_1,...h_N, and for N=0 it is just zero because of its action on an independent of x constant,
D̃_N≡(-∑_M=1^N
□̃_M)^1/2, D̃_0=0.
In the UV domain ∇̃_n→∞, when ∇̃_n/D̃_N=O(1), n≤ N, the formfactors in each N-th order become after the replacement μ→D̃ the functions of a single operator variable D̃_N,
μ^4-mγ_N^(m)(λ(μ) | ∇̃_1/μ,...
∇̃_N/μ) |_ μ→D̃_N →
(D̃_N)^4-mγ_N^(m)(λ(D̃_N) | O(1))≡
(D̃_N)^4-mλ_N^(m)(D̃_N),
and the expansion of the formally independent of μ action takes the form
[ g_μν] |_ μ→D̃→ ∑_m∑_N=0^∞∫ d^4x √(g̃)
× (D̃_N)^4-mλ_N^(m)(D̃_N)
I_N^(m)(h_1,h_2,...h_N) |_ {x}=x.
The next step consists in the recovery of the covariant form of the expansion in terms of the original spacetime curvature. Curiously, despite the fact that the covariant perturbation theory of <cit.> is rather often being referred to in literature, subtle details of this step are usually disregarded which leads to confusing statements on the ambiguity of this procedure, dependence on the gauge by which the metric perturbation h_μν is related to the curvature <cit.>, etc. At the same time, this procedure is unique, provided that one does not treat g̃_μν and ∇̃_μ as Cartesian δ_μν and ∂_μ, but rather proceeds in generic coordinate system and uses the only invariant statements that the curvature of the tilded metric is vanishing R̃^α_ βμν=0. This is the covariant equation for g̃_μν in terms of the curved metric g_μν and its curvature R^α_ βμν, whose solution exists as perturbation expansion in R^α_ βμν and also requires imposing the gauge <cit.>. But the result of substituting this solution back into manifestly noncovariant (double field) series (<ref>) is gauge independent because of the implicit invariance of the left hand side of (<ref>).
In the convenient DeWitt type gauge ∇̃^ν h_μν-(1/2)∇_μ h=O[ h^2], h≡g̃^αβh_αβ, the solution for h_μν and ∇̃_μ in terms of g_μν and ∇_μ reads in the lowest order as <cit.>
h_μν=-(2/□)R_μν
+O[ ^2], ∇̃_μ=∇_μ+O[ ].
Using this in (<ref>) we get the replacement of h-monomials by the covariant curvature monomials along with the replacement of D̃_N by D_N,
I_N^(m)(h_1,h_2,...h_N)→
1/_1..._N^(m+2N)_N(x_1,...x_N)+O[ ^N+1],
D̃_N→ D_N+O[ ],
where D_N is obviously defined by (<ref>) in terms of the full-fledged covariant d'Alembertians □=g^μν∇_μ∇_ν, and we reabsorb the coefficient (-2)^N into the symbolic definition of the N-th order covariant monomial – the analogue of the local ^(m)_N(x), see Eq. (<ref>), with split N spacetime arguments
^(m)_N(x_1,...x_N)=
∇...∇_m-2N(x_1)...(x_N), N≥ 1.
For N=0 this monomial can be defined as an irrelevant constant bringing no contribution in the UV limit.
Thus the UV limit of the effective action takes the form
[ g_μν]→∫ d^4x √(g)∑_m,N≥ 0^∞λ_N^(m)(D_N)(D_N)^4-m/(□_1⋯□_N)
× ^(m+2N)_N(x_1,⋯ x_N) |_ {x}=x,
where we remind that the dimensionless formfactors λ_N^(m)(D_N) follow from the running RG couplings of the theory λ_N^(m)(μ) by the replacement of μ with the operator D_N.
Let us consider the application of this result to the cosmological constant sector involving the metric invariants of dimensionality m=0 and ^(4)_0=Λ/16π G. This classical cosmological term gives rise to the infinite set of zero dimension invariants
∫ d^4x √(g)=∑_n=0^∞∫ d^4x √(g̃) I_n^(0)(g̃,h),
I_0^(0)(g̃,h)=1,
I_1^(0)(g̃,h)=-(1/2)h,
I_2^(0)(g̃,h)=(1/4)h^2-(1/2) h_μν^2, …
(indices are contracted by the flat metric and h=g̃^μνh_μν), whereas at the quantum level they generate the sequence of high energy m=0 structures of (<ref>)
∫ d^4x√(g)∑_N=2^∞λ_N^(0)(D_N)
(D_N)^4/(□_1⋯□_N) ^(2N)_N(x_1,... x_N)|_{x}=x,
where the zeroth order term is zero in view of D_0=0 (see Eq.(<ref>)) and the first order term is also absent due to its tadpole (total derivative) nature – remember that D_1=(-□_1)^1/2 and D_1^4/□_1=□_1 is acting on ^(2)_1(x_1).[An important caveat is necessary here concerning the annihilation of the total derivative terms. The surface terms at infinity should be vanishing, which is equivalent to a good IR behavior of the nonlocal form factor λ_1^(0)(D_1) at □→ 0. We will assume this property based on the maximum logarithmic singularity of λ_1^(0)(D_1) which is a function of log(-□) solving the RG equation. The same also applies to integrations by parts considered in what follows. Otherwise, the procedure of subtracting the boundary terms, like the Gibbons-Hawking surface action at asymptotically flat infinity, will be needed, which we briefly discuss below.]
The expansion starts at N=2 with the term which has the following structure
4∑∫ d^4x √(g) ^(2)(x) λ_2^(0)(√(-2□)) ^(2)(x)
=∫ d^4x√(g)(R_μνF_1(□)R^μν+
RF_2(□)R)+O[ ^3].
Here we took into account that the set of invariants ^(4)_2(x_1,x_2) can be represented as a sum of terms factored out into the products of Ricci tensors and Ricci scalars with some coefficients[Bilinear in Riemann curvature terms under the integration sign also reduce to bilinear combinations of R_μν and R by using the expression for Riemann tensor in terms of the Ricci one <cit.>, see footnote <ref>.] a and b,
^(4)_2(x_1,x_2)=aR_μν(x_1) R^μν(x_2)+b R(x_1)R(x_2),
and also used an obvious corollary of integration by parts
∫ d^4x√(g) F(□_1,□_2)(x_1)(x_2) |_ {x}=x
=∫ d^4x√(g) (x)F(□,□)(x).
A remarkable feature of the expression (<ref>) is that the power-law operator factors in (D_N)^4/(□_1⋯□_N) at N=2 completely cancelled out to give the dimensionless formfactors F_1(□) and F_2(□) which originate as linear combinations of the relevant running λ_2^(0)(√(-2□)) obtained by solving the RG equation. Even more remarkable is the fact that this is a nonlocal term which is quadratic in the curvature even though it has originated from the sector of the cosmological term expanded in the series of zero dimension invariants. This is what can be called the metamorphosis of the cosmological constant into its high-energy partners, suggested by J. Donoghue in <cit.>. Their structure is a direct corollary of the dimensionality arguments within the RG approach. The arising form factors of the curvature squared terms are the descendants of RG running couplings of the zero dimension invariants which participate in the decomposition of the cosmological constant term.
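The cancellation can be made fully explicit at N=2, m=0. Since D_2^2=-(□_1+□_2), the operator prefactor of this term reads
(D_2)^4/(□_1□_2) = (□_1+□_2)^2/(□_1□_2),
a dimensionless function of the ratio □_1/□_2, which together with λ_2^(0)(D_2) is exactly what gets assembled into the dimensionless formfactors F_1(□) and F_2(□) after the integrations by parts and the equating of the spacetime arguments.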
In fact, the same structure (<ref>) gets reproduced for the contribution of any dimension m in the expansion (<ref>). For even dimensionality[For the set of 2-dimensional curvatures only even dimensions m enter the expansion (<ref>), but this can always be generalized to the case of odd-dimensional “curvatures”, like for example the extrinsic curvature in Hořava gravity models.], m→ 2m, this can be easily demonstrated by decomposing any (2m+4)-dimensional quadratic invariant as this was done above
^(2m+4)_2(x_1,x_2)=∑_m_1+m_2=2m^(m_1+2)_1(x_1)^(m_2+2)_1(x_2).
Using this in (<ref>) one has complete cancellation of the dimensional factor (D_2)^4-2m/□^2∼□^-m in the expression
∫ d^4x √(g)∑_m_1+m_2=2m^(m_1+2)_1(x)
×λ_2^(2m)(D_2)(D_2)^4-m_1-m_2/□^2 ^(m_2+2)_1(x)
=∫ d^4x√(g)(R_μνF_1(□)R^μν+
RF_2(□)R)+O[ ^3].
Noting that with ^(m+2)_1=∇⋯∇^m^(2)_1 this follows from integration by parts and the use of various corollaries of the contracted Bianchi identity (∇^ν R_μν=(1/2)∇_μ R, etc.),
∫ d^4x √(g)∑_m_1+m_2=2m∇⋯∇_m_1(x)F(□)∇⋯∇_m_2(x)
=∫ d^4x √(g)(R_μν □^m F_1(□)R^μν+
R □^m F_2(□) R)
+O[ ^3].
Here the operators F_1(□) and F_2(□) have the same dimension as F(□) and originate from F(□) by the algebra of contracting the indices of covariant derivatives. Using this relation in the left hand side of (<ref>) one gets the right hand side with completely cancelled powers of □.
Thus, Eq.(<ref>) with m=2 implies the conversion of the gravitational coupling constant into the dimensionless formfactors of the Einstein term partners. These partners have the same structure as the cosmological term partners quadratic in curvatures. This is again the metamorphosis of RG running of the form 1/16π G(μ)=μ^2λ^(2)_2(μ)→ F_1,2(□).
Note that all this takes place in the UV limit where all curvatures in their monomials are rapidly varying in spacetime with their derivatives ∇→∞. At intermediate energies, when the mass scale M surfaces up, the scaling (<ref>) ceases to make sense and roughly should be replaced with D∼ M, and instead of (<ref>) one gets exactly the cosmological constant partners of Donoghue <cit.> which have the structure of
M^4∫ d^4x √(g)(R_μν(F_1^ part(□)/□^2)R^μν+
R (F_2^ part(□)/□^2)R).
The dimensionless form factors F^ part_1,2(□) here are accumulating loop corrections with nonlocal logarithmic structures of the form
F^ part(□)∼ln[(M^2-□)/M^2].
Note that these partners are still in the high-energy domain -□≥ M^2, but they are subdominant as compared to the leading contribution (<ref>) with dimensionless form factors which incorporate the logarithmically running solutions of RG equations. This is because the partners (<ref>) are suppressed by power law factors M^4/□^2. The exact form of these formfactors at intermediate scales was derived at one-loop order in <cit.> for a rather generic theory of massive fields by using the heat kernel technique of <cit.>. In the IR domain -□≪ M^2 they are of course expandable in local gradient series reflecting the decoupling phenomenon <cit.>.
Similarly, the gravitational constant partner in IR reads as
M^2∫ d^4x √(g)(R_μνF_1()/R^μν+
R F_2()/R),
which reminds the construction of the nonlocal action for long-distance modifications of gravity theory in <cit.>. This differs from the cosmological constant partner by another powers of M and the power of in the denominator.
One should be more careful at this point – while the case of (<ref>) is well defined in asymptotically flat spacetime, the cosmological constant partner (<ref>) is IR divergent for the reasons discussed above. The action of 1^2 is not well defined in four dimensions (or, equivalently, ∫ d^4x√(g)(1)^2 is IR divergent), so that the perturbation expansion in the dimension zero sector should be critically reconsidered. To trace the origin of this difficulty note that the first three terms of the cosmological term expansion (<ref>) are divergent, whereas a similar expansion for the Einstein term becomes well defined only after the subtraction of the Gibbons-Hawking surface term ∫_∞ d^3σ^μ(∂_μ h-∂^ν h_μν) at the infinity of asymptotically flat spacetime. Due to this subtraction we can write for the integral of the invariant ^(2)_1(x)=-R(x), weighted in the Einstein action by ^(2)_1=1/16π G, a legitimate expansion (<ref>) starting with the quadratic order in h_μν,
∫ d^4x√(g)(-R)-∫_∞ d^3σ^μ(∂_μ h-∂^ν h_μν)
=∑_M=2^∞∫ d^4x√(g̃) I_M^(2)(g̃,h),
I_2^(2)(g̃,h)=-14 h_μνh^μν+18 hh-12(∇̃^ν h_μν-12∇̃_μh̃)^2.
Then the above calculational strategy leads to the effective action (<ref>) whose tree level IR limit should match low energy physics with the Planck mass cutoff M^2 and the form factors F_1(0)=1 and F_2(0)=1/2. This tree level answer up to ^3-corrections directly corresponds to the above expression for I_2^(2)(g̃,h) with h_μν given by Eq. (<ref>) in terms of the curved space metric g_μν <cit.>.
To the best of our knowledge, no such subtraction is known for cosmological term expansion (<ref>), so that its rigorous treatment is still to be done. It is interesting if new structures can be generated by the regularization of this IR behavior. Apparently, this should be based on the analogue of the Graham-Fefferman construction for asymptotically AdS spaces <cit.> and deserves further studies.
In any case, the UV behavior of both cosmological and gravitational constant partners, which should be not sensitive to IR problems, is determined by curvature squared terms (<ref>) with running dimensionless “couplings”. Their formfactors F_1() and F_2() follow from the RG running of the relevant constants λ^(0)_2(μ) and λ^(2)_2(μ), but the transition λ^(0,2)_2(μ)→ F_1,2() is not straightforward and is mediated by Eqs.(<ref>) and (<ref>)-(<ref>).
§ CONCLUSIONS
To summarize our notes on conformal anomaly, nonlocal effective action and running scales let us briefly dwell on possible applications of our results and related issues.
As it is clear from the above considerations, the conformal anomaly action is a carrier of the effective rather than fundamental conformal degree of freedom. Either in the nonlocal or the Wess-Zumino form, it is the difference of action functionals of two configurations belonging to the orbit of the conformal group. So unless one of these actions is known the corresponding physical setup is not complete. In this respect, our approach is very different from the works which endow the conformal factor e^2σ the nature of the fundamental field <cit.> or, for example, ready to sacrifice the fundamental nature of the Higgs boson in favor of 36 fundamental zero dimension scalars σ for the sake of a complete eradication of Weyl anomaly and justification of the primordial cosmological perturbations spectra <cit.>.
The CFT driven cosmology of Sect.<ref> seems to present such an example where the physical setup is complete within a certain approximation scheme. This approximation is associated with the dominance of conformal invariant matter fields over the loop effects of gravity and other types of matter and simultaneously puts the model in the subplanckian domain of energies below the cutoff M_P/√(β) when the coefficient of the topological conformal anomaly β≫ 1 <cit.>. To match with the widely accepted bounds on the energy scale of inflation ∼ 10^-6M_P one needs β∼ 10^13, which cannot be attained by a contribution of low spin conformal fields β=(1/360)(N_0+11N_1/2+62N_1) unless the numbers N_s of fields of spin s are tremendously high.
On the contrary, this bound can be reached by appealing to the idea of conformal higher spin (CHS) fields <cit.>. A relatively low tower of higher spins will be needed, because a partial contribution of spin s to β grows as s^6. These partial contributions β_s for CHS totally symmetric tensors and Dirac spin-tensors read in terms of ν_s – their respective numbers of polarizations (negative for fermions) <cit.>,
β_s=ν_s^2(3+14ν_s)/720, ν_s=s(s+1), s=1,2,3,... ,
β_s=ν_s(12+45ν_s+14ν_s^2)/1440,
ν_s=-2(s+12)^2, s=12,32,52,... .
The solution of hierarchy problem thus becomes a playground of 1/N-expansion theory for large number N of conformal
species. Moreover, with the inclusion of CHS fields the status of conformal anomaly essentially changes and becomes similar to that of the chiral anomaly. Chiral anomaly has phenomenological confirmation within chiral symmetry breaking theory, it also has important implications in lepton physics, physics of early Universe, its baryon asymmetry theory, etc. It has a topological nature and is generated in virtue of Adler-Bardeen theorem only at the one-loop level. Local Weyl anomaly also has topological (a-type) contribution <cit.>, but for low spins it is contributed by all orders of loop expansion. CHS spins, however, have their inverse propagators ∼^s+... and, therefore, for high s are UV finite beyond one loop approximation. So their Weyl anomaly is also exhausted by the one-loop contribution, and there is a hope that their effect in the CFT driven cosmology is nonperturbative. As this effect intrinsically, by a dynamical mechanism of effective equations of motion <cit.>, provides the upper bound on the energy range of inflation M_P/√(β)≪ M_P, this also justifies omission of graviton loops and quantum effects of other (non-conformal) types of matter.
There are, however, serious problems on the road to the realization of this model. To begin with, CHS fields in curved spacetime are not explicitly known yet, except conformal gravitino with s=3/2 and Weyl graviton with s=2. Recent progress in generalizing these models to arbitrary s on the Einstein-space background allowed one to compute their 1-loop Weyl anomaly coefficients (<ref>)-(<ref>) (by indirect AdS/CFT method in <cit.> and directly in <cit.>). This result, however, leaves the issue of unitarity violation caused by inevitable higher derivatives in wave operators of these fields. Moreover, these fields should form a hidden sector not observable at present, which implies the necessity of their eradication in the course of cosmological expansion. What might be useful for this purpose is the idea of renormalization group flow from UV to IR decreasing the value of β (the so called a-theorem of <cit.>) or Weyl symmetry breaking which would generate masses of CHS fields and thus shorten their massless tower. Finally and most importantly, the fundamental theory of these interacting CHS fields should necessarily be organized within a special higher spin symmetry <cit.>. A complete version of this theory is still missing, not to say about its constructive extension to curved spacetime. Thus, the progress here strongly depends on advancing theory of CHS fields <cit.>.
The issue of RG running constants and G has, as it is shown, a rather unexpected resolution. The manifestation of this UV running actually takes place in the nonlocal formfactors of the quadratic curvature (dimension four) terms rather than in the sector of low dimension operators. This metamorphosis originates from establishing a rather nontrivial scaling operator (<ref>) embracing all powers of the curvature expansion and exploiting a conventional RG assumption that the renormalized theory does not depend on the choice of normalization (or subtraction) point. Then simple, though somewhat tedious, dimensionality considerations lead to this result. Dimension zero and dimension two cosmological and Einstein terms do not run themselves but still contribute to the running of the dimension four terms which can be considered as UV partners of and G. This metamorphosis of RG running couplings into the formfactors of the curvature squared terms sounds important, because it is the quadratic term in the effective action that mainly determines either the asymptotic freedom of the model or its cutoff beyond which effective field theory breaks down.
In the IR domain these partners, due to the presence of mass scale M, also start from the quadratic order in the curvature, but they have essential nonlocality – of the type M^4∫ d^4x √(g)(1)^2 coming from cosmological constant sector <cit.> and of the form M^2∫ d^4x √(g)1 originating from the gravitational constant one. While the latter is well defined in IR limit due to the subtraction from the IR divergent bulk Einstein action of the Gibbons-Hawking surface term <cit.>, for the IR cosmological partner <cit.> the situation is trickier – in view of IR divergences it requires additional subtraction procedure. Perhaps even more radical changes will be needed to circumvent this problem like the curvature expansion on top of the homogeneous (dS or AdS) background with nonzero curvature.
Of course, there can be other choices of the running scale D different from (<ref>. Nothing prevents from replacing it, say, with (∑_N (-_N)^k)^1/2k or other combinations of contracted derivatives. However, for curvature squared terms of the action all such choices (satisfying the homogeneity property with respect to derivative rescalings) lead to one and the same operator ∼(-)^1/2 because for the second order of the curvature expansion all d'Alembertians reduce to the single one, _1=_2, in view of integration by parts (<ref>). The only ambiguity is the choice of the d'Alembertian itself, but it is fixed by the requirement of general covariance. Alterations in the choice of D certainly affect higher orders in the curvature, but the curvature squared part, which is most important for UV asymptotic freedom or determination of the effective field theory cutoff, stays uniquely defined.
Ambiguity in the choice of D can arise in the class of theories which have
a more or less conventional RG running of the gravitational coupling G – renormalizable Hořava gravity models <cit.>. In these Lorentz symmetry violating models a possible covariant curvature expansion undergoes (3+1)-splitting – the set of basic curvatures includes the extrinsic curvature K_ij, i,j=1,2,3, of spatial slices of constant time τ. The Einstein term of general relativity is replaced by the sum of the kinetic term ∼(16π G)^-1∫ d^4x √(g)K_ij^2 and the potential term built as a polynomial in 3-dimensional curvature and its spatial derivatives. The RG running of G in the kinetic term proceeds as the insertion of the form factor G^-1(D) between two factors of K_ij(x),
1/G∫ d^4x√(g) K_ij^2
→∫ d^4x√(g) K_ij(x)1/G(D)K^ij(x).
Thus no tadpole problem for the RG running of G takes place here – just like in Yang-Mills type theories this occurs without forming a total derivative structure.
However the relevant scaling operator D of a unit anisotropic scaling dimension, which replaces the spacetime covariant square root of (-), turns out to be ambiguous. Point is that in Lorentz violating models the notion of physical scaling dimension is replaced by the anisotropic one which in (3+1)-dimensional Hořava gravity is -3 for the time coordinate and -1 for spatial coordinates. Correspondingly the dimension 6 wave operator of the theory is of the second order in time derivatives and of the sixth order in spatial derivatives. Therefore, D∼ (-∂_τ^2-Δ^3/M^4)^1/6 where Δ is the spatial covariant Laplacian and M is a physical mass scale parameter. This parameter may be different in various (scalar and transverse-traceless) sectors of the metric field <cit.>, and this is a source of ambiguity in the running scale of Hořava models. Modulo this problem RG running in renormalizable non-projectable Hořava gravity is well defined and in (3+1)-dimensional case has a legitimate interpretation of asymptotic freedom <cit.>.
§ ACKNOWLEDGEMENTS
Essential part of this paper was inspired during the workshop “Quantum Effective Field Theory and Black Hole Tests of Einstein Gravity” (IFPU, Miramare, Trieste, Italy, September 12-16, 2022), and A.O.B. is very grateful to organizers and participants of this workshop. The authors deeply appreciate the efforts by J.Donoghue, M.Duff, E.Mottola and H.Osborn of critically reading our manuscript. A.O.B. is also grateful for fruitful discussions and correspondence with John Donoghue, Michael Duff, Alexander Kamenshchik, Emil Mottola, Roberto Percacci, Hugh Osborn, Ilya Shapiro, Kostas Skenderis, Arkady Tseytlin, Alex Vikman, Richard Woodard and especially to G. A. Vilkovisky for long term collaboration on covariant perturbation theory for quantum effective action. This work was supported by the Russian Science Foundation grant No 23-12-00051.
unsrturl
|
http://arxiv.org/abs/2306.04157v1
|
20230607051421
|
Phase formation in hole- and electron-doped rare-earth nickelate single crystals
|
[
"P. Puphal",
"V. Sundaramurthy",
"V. Zimmermann",
"K. Küster",
"U. Starke",
"M. Isobe",
"B. Keimer",
"M. Hepting"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.mtrl-sci",
"cond-mat.supr-con"
] |
AIP/123-QED
Phase formation in hole- and electron-doped rare-earth nickelate single crystals]Phase formation in hole- and electron-doped rare-earth nickelate single crystals
[email protected]
Max Planck Institute for Solid State Research, Heisenbergstraße 1, D-70569 Stuttgart, Germany
The recent discovery of superconductivity in hole-doped infinite-layer nickelates has triggered a great interest in the synthesis of novel nickelate phases, which have primarily been examined in thin film samples. Here, we report the high-pressure optical floating zone (OFZ) growth of various perovskite and perovskite-derived rare-earth nickelate single-crystals, and investigate the effects of hole-, electron-, and self-doping. For hole-doping with Ca and Sr, we observe phase separations during the growth process when a substitution level of 8% is exceeded. A similar trend emerges for electron-doping with Ce and Zr. Employing lower doping levels allows us to grow sizeable crystals in the perovskite phase, which exhibit significantly different electronic and magnetic properties than the undoped parent compounds, such as a decreased resistivity and a suppressed magnetic response. Our insights into the doping-dependent phase formation and the resulting properties of the synthesized crystals reveal limitations and opportunities for the exploration and manipulation of electronic states in rare-earth nickelates.
[
M. Hepting
July 31, 2023
=================
§ INTRODUCTION
The Ruddlesden-Popper nickelates R_n+1Ni_nO_3n+1 (R = rare-earth ion) with the perovskite (n = ∞) or perovskite-derived crystal structures (n ≠∞) are prototypical quantum materials <cit.>, showing a plethora of electronic, orbital, and magnetic phases already without charge carrier doping <cit.>. The perovskite-derived phases with n = 1 - 3 have been actively studied since the 1980s <cit.>. To synthesize the perovskite phase RNiO_3, high pressure is required to stabilize the Ni^3+ oxidation state and the distorted crystal structure where the octahedral tilt angles are determined by the radius of the R ion. The least distorted perovskite nickelate LaNiO_3 was first synthesized in 1957 <cit.>, whereas nickelates with R other than La were realized in 1991 <cit.>. The RNiO_3 compounds have been of long-standing interest due to their sharp metal-to-insulator transition <cit.> and a magnetic ground state with an unconventional spin spiral <cit.>. Additionally, the metal-to-insulator transition is accompanied by an orthorhombic to monoclinic structural phase transition <cit.>. The transition temperatures of these phases decrease as the size of the R ion increases. One exception is the compound with the largest anion, LaNiO_3, which exhibits a rhombohedral structure and remains metallic and paramagnetic down to the lowest temperatures, provided that the oxygen content is stoichiometric <cit.>.
Upon hole- or electron-doping of the Ruddlesden-Popper nickelate, new electronic phases can emerge <cit.>, along with potential functional properties <cit.>. In RNiO_3 powders, hole- and electron doping up to 10% has been achieved through the substitution of the R ion by divalent Sr/Ca and tetravalent Th/Ce ions, respectively <cit.>. Typically, the powder synthesis is carried out under high external oxygen gas pressures <cit.>. Recent studies have reported substitutions in powders as high as 40% <cit.>. However, further investigations are needed to determine if such high substitution concentrations are homogeneously incorporated into the microstructure of the RNiO_3 powder grains.
Since 2019, the field has been reinvigorated <cit.> by the discovery of superconducting behavior in hole-doped rare-earth nickelates with the infinite-layer crystal structure <cit.>. This structure can be achieved through the topotactic oxygen deintercalation of the perovskite phase <cit.>. These infinite-layer nickelates with Ni^1+ are nominally isoelectronic and isostructural to cuprate high-temperature superconductors with Cu^2+ ions <cit.>. Additionally, superconductivity has been observed in the undoped nickelate Nd_6Ni_5O_12 <cit.>, which, however, can be considered a self-doped cuprate analogue <cit.>. Yet, superconductivity in the material family of nickelates has only been observed in thin film samples <cit.>, even though the latest studies have indicated that it could be an intrinsic property of the bulk phase of doped infinite-layer nickelates <cit.>. Furthermore, theoretical studies propose that superconductivity may also arise in electron-doped nickelates, such as La_2.4Zr_0.6Ni_2O_6 where La^3+ is partially substituted by tetravalent Zr^4+ ions <cit.>.
Currently, a direct synthesis of the infinite-layer phase of nickelates is unfeasible, due to the highly metastable Ni^1+ state in square-planar NiO_2 units. Similarly, the direct synthesis of other Ruddlesden-Popper derived nickelates with NiO_2 planes, such as Nd_6Ni_5O_12, has yet to be achieved. However, well-established routes exist for the topochemical removal of the apical oxygen from the parent nickelate phases, such as using H_2/Ar gas, or CaH_2 powder as a reducing agent <cit.>. In particular, the CaH_2-assisted reduction has been successfully demonstrated not only on perovskite thin films <cit.> and polycrystalline powders <cit.>, but also on single-crystals <cit.> with volumes up to 1 mm^3 <cit.>.
A technical breakthrough enabling the growth of large R_n+1Ni_nO_3n+1 single-crystals was the advent of the high-oxygen pressure high-temperature optical floating zone technique, yielding bulk single-crystals of LaNiO_3 <cit.>, PrNiO_3 <cit.>, La_3Ni_2O_7 <cit.>, and (La,Pr)_4Ni_3O_10 <cit.>. However, a problem with the oxygen transport towards the center of the boule was observed in this synthesis method <cit.>. As a result, oxygen deficient phases may form, but can be alleviated by post annealing under high gas pressure in autoclaves.
Overall, the technical advances in the perovskite single-crystal growth as well as the demonstrated topotactic reduction of LaNiO_3 single-crystals to infinite-layer LaNiO_2 <cit.>, call for the synthesis of doped perovskite single-crystals that can possibly be reduced to the infinite-layer phase. In this work we present the optical floating zone growth of single-crystals of doped perovskite and Ruddlesden-Popper phase nickelates under 300 bar of oxygen partial pressure, and discuss the opportunities and limitations of this method. We begin with an overview on the synthesis of the undoped compounds LaNiO_3 and PrNiO_3, as well as an attempt of the growth of the n = 5 Ruddlesden-Popper nickelate Pr_6Ni_5O_16. As a second type of samples, we investigate Ce- and Zr-doped (La,Pr)NiO_3, which are nominally electron-doped. We also report on Sr-doped LaNiO_3, which is nominally hole-doped.
§ METHODS
Precursor powders were prepared by mixing the corresponding stoichiometries of La_2O_3 (99.99% Alfa Aesar), Pr_6O_11 (Alfa Aesar, 99.99%) and NiO (99.998% Alfa Aesar), as well as CeO_2 (99.99% Alfa Aesar), CaCO_3 (99.999% Alfa Aesar), SrCO_3 (99.998% Roth), ZrO_2 (99.978% Alfa Aesar), and Eu_2O_3 (99.995% Roth). Subsequently, the powders were ball-milled for 20 minutes and the mixtures were transferred in alumina crucibles to box furnaces, followed by heating to 1100^∘C.
Cylindrically shaped feed and seed rods were prepared by ball-milling the sintered materials, which were filled into rubber forms with 6 mm diameter. The rubber was evacuated and pressed in a stainless steel form filled with water using a Riken type S1-120 70 kN press. All rods were heat treated at 900^∘C to avoid cracking or breaking due to the oxidization process, when the density of the rod becomes too high.
The single-crystal growth was carried out in a high pressure, high-temperature, optical floating zone furnace (model HKZ, SciDre GmbH, Dresden, Germany), that allows for gas pressures in the growth chamber up to 300 bar. The growth chamber has a length of 72 mm and 20 mm wall thickness. A xenon arc lamp operating at 5 kW was used as a heating source with the rare vertical mirror alignment of the HKZ. The rods were then aligned in the HKZ on steel holders followed by the installation of the high pressure chamber. Subsequently the chamber was filled up to 14/ 30/ 85/ 200/ 300 bar oxygen pressure and held at a flow rate of 0.1 l/min. The crucial part in the nickelate growth is the initial melt connection which is complicated, as we typically start with a mixture of the Ruddlesden-Popper phase (La,Pr)_2NiO_4 and NiO, which both have lower melting points than the desired perovskite phase. However, a preannealing at high pressures leads to cracking and even breaking of the rods. Thus, we either increased the power until a homogeneous zone melt was achieved, or premelted the rods at the given pressure to preform the final phase. In the former case, achieving a homogenous zone melt is necessary before connecting the two rods, but the starting process of the growth was often very challenging. Both procedures have been employed for the materials synthesized in this work, and have proven to be equally successful. Notably, a premelting of the rods does not increase the oxygen content in the obtained boule, as the given oxygen pressure and oxygen transport in the melt stabilizes this, and the subsequent growth is easier.
All growths were conducted under oxidizing atmosphere, with the partial oxygen pressure tailored to yield the best outcome for each composition (see Tab. <ref>, <ref>). Common to all compounds investigated here is the increase in melting temperature when the phase containing Ni^3+ is formed. This effect is amplified by the endothermal oxidation process which we counteracted by a continuous increase of the lamp power, particularly during the early stage of the growth. This means that newly introduced material will be largely overheated and becomes very fluid. Thus, without premelting, the diameter of the grown boule is dictated by the flow of the low-viscosity melt that requires constant feeding. For all growths, we found that a growth rate of 2 mm/h with an additional feed rate of 2 mm/h results in stable conditions.
Powder x-ray diffraction (PXRD) was performed at room temperature using a Rigaku Miniflex diffractometer with Bragg Brentano geometry, Cu K_α radiation and a Ni filter. Rietveld refinements were conducted with the FullProf software suite <cit.>.
The x-ray Laue diffraction images were collected with a Photonic Science CCD detector using a standard W broad x-ray source operated at 35 kV and 40 mA. For indexing of the Laue patterns, the software ORIENTEXPRESS was used.
Electron microscopy images with both secondary electrons (SE) and backscattering electrons (BSE) were taken with a Zeiss Merlin electron microscope operated at 12 kV, 600 mA at a sample distance of 5 mm. Energy-dispersive x-ray spectra (EDS) were recorded with a NORAN System 7 (NSS212E) detector in a Tescan Vega (TS-5130MM) SEM.
Magnetic susceptibility measurements were performed using a vibrating sample magnetometer (MPMS VSM SQUID, Quantum Design) and electrical transport measurements with a Physical Property Measurement System (PPMS, Quantum Design).
X-ray photoelectron spectroscopy (XPS) data were collected using a commercial Kratos AXIS Ultra spectrometer and a monochromatized Al K_α source (photon energy, 1486.6 eV). The base pressure during XPS was in the low 10^-10 mbar range. The spectra were collected using an analyzer pass energy of 20 eV. XPS spectra were analyzed using the CASAXPS software <cit.>. All samples were cleaved in a glove box, mounted on carbon tape or In foil, and transported under inert atmosphere to the XPS chamber.
§ RESULTS
§.§ Undoped nickelates
Figure <ref>a shows the as-grown LaNiO_3 boule together with the characterization by PXRD and a Laue diffraction image (Fig. <ref>e). The OFZ synthesis with a HKZ-type furnace of LaNiO_3 as well as PrNiO_3, La_3Ni_2O_7, and (La,Pr)_4Ni_3O_10 has been reported previously <cit.>, although only a few details were given about the growth, phase diagram and the formation of secondary phases.
As can be seen in Fig. <ref>a, the PXRD reveals the presence of a small amount of NiO in addition to the primary LaNiO_3 phase in a pulverized piece broken off from the boule. The emergence of secondary phases such as NiO will be discussed in detail for the other nickelate compounds in this study. In accord with previous studies on LaNiO_3, we refine the PXRD data in the rhombohedral space group R3̅c, which also describes the structure of the doped La-based perovskites in our study (see Tab. <ref>, <ref>). As apparent from Laue diffraction (Fig. <ref>e), the growth direction of LaNiO_3 corresponds to the rhombohedral (110) direction. Under the application of a relatively strong force, the LaNiO_3 boule can be cleaved. The exposed surfaces are rough with small facets (Fig. <ref>c,d) that are mostly aligned in parallel to the (110) plane. The boule can also be cleaved along the orthogonal direction, which is displayed in Fig. <ref>d where the (11̅0) direction points out of the picture while the c-axis lies in the horizontal plane (see also the corresponding sketch of the crystal structure in Fig. <ref>b).
Figures <ref>a,b diplay the boules and the PXRD patterns of our PrNiO_3 growth and the attempted growth of Pr_6Ni_5O_16. To achieve a boule of PrNiO_3 with a length of about 8 cm (see Fig. <ref>a), a continuous pressure of 300 bar was held for several days. The results of our PXRD analysis of PrNiO_3 and our other growths are summarized in Tab. <ref>, displaying the lattice constants and phase compositions in wt%.
In agreement with previous studies, we refine the structure of PrNiO_3 in the orthorhombic space group Pbnm. In Fig. <ref>c we present the first phase diagram of NiO - PrO_2, which at the given pressure of 300 bar oxygen only contains four phases: NiO, PrNiO_3, Pr_4Ni_3O_10 and PrO_2.
Similarly to LaNiO_3 (Fig. <ref>b), we detect a minor amount of NiO in the stoichiometrically grown PrNiO_3 (Fig. <ref>b). Note that an admixture of NiO to the perovskite phase can become relevant after a topotactic reduction where ferromagnetic Ni forms which can dominate the signal in magnetic susceptibility measurements <cit.>. In cases of very small NiO admixture, its presence can be below the detection limit of standard laboratory PXRD. A method that allows to detect even subtle amounts of NiO inclusions is BSE imaging. In particular, unlike SE images that reflect the sample topography, BSE images reveal the spatial distribution of the elements. Figure <ref>d shows the simultaneously acquired SE and BSE images of a broken surface from the PrNiO_3 boule. In the latter image, NiO rich regions are clearly observed as the dark contrast.
In principle, the emergence of NiO rich regions should be alleviated in a growth from a precursor with Pr excess. However, we find that when varying the stoichiometry to a subtle Pr rich content, Pr_4Ni_3O_10 forms as an intergrown phase, especially in regions in proximity to the surface of the boule (see light contrast in the right panel of Fig. <ref>e). This occurs because impurity phases are usually pushed to the surface in an OFZ growth, as nucleation occurs in the core of the boule. Due to our external heating source the heat is only indirectly transported to the center resulting in a small gradient. Nonetheless this observation is in contrast to the pressure stability, as the center also sees the least pressure and Pr_4Ni_3O_10 is more stable at lower pressures.
For PrNiO_3, the growth direction corresponds to the orthorhombic (100) direction. Similarly to LaNiO_3, under the application of a relatively strong force, the PrNiO_3 boule can be cleaved. The exposed surfaces are rough with small facets that are mostly aligned in parallel to the (100) plane. The boule can also be cleaved along the orthogonal direction, where the (010) direction points out of the plane and the (001) direction is the horizontal one. Notably, for both La- and Pr-compounds these cleaving patterns emerge even for the doped cases that will be discussed below.
Next, we turn to the synthesis of the n=5 Ruddlesden-Popper phase Pr_6Ni_5O_16 under 300 bar oxygen partial pressure. If successful, the compound could provide a perspective for topotactic reductions to the Pr_6Ni_5O_12 phase, in analogy to superconducting Nd_6Ni_5O_12 films <cit.>, and seems ideally suited for the high-pressure OFZ growth, as the pressure stability range should fall into the region accessible with the HKZ. However, instead of Pr_6Ni_5O_16, our PXRD characterization reveals a phase mixture of the perovskite and the n=3 phase Pr_4Ni_3O_10 (Fig. <ref>b), which are the phases that encompass the n=5 compound in the growth phase diagram. Similarly, we find in an attempted synthesis (not shown here) that also the n=5 variant La_3Eu_3Ni_5O_16 cannot be stabilized under 300 bar oxygen (see Tab. <ref>), in spite of the different ionic radii of La and Eu. This is distinct from other layered materials such as cuprates, where the mixing of ionic radii of the smaller rare earth ion with large Ba facilitates the stabilization of complex layered structures. Here we find that instead of the n=5, the n=1 and 3 Ruddlesden-Popper phases form. Hence, we conclude that at a given pressure of 300 bar, the synthesis of La- and Pr-based Ruddlesden-Popper phases higher than n=3 is likely unfeasible, as the existence of these phases is probably restricted to a very narrow range in pressure and composition space.
§.§ Electron-doping of PrNiO_3
Electron-doping of PrNiO_3 can be achieved through substitution of the trivalent Pr ions by nominally tetravalent Ce ions. Here we carry out the OFZ growth at 280 and 300 bar oxygen partial pressure, respectively, aiming for 5% Ce doping of the perovskite phase. The upper panel in Fig. <ref>a shows the as-grown boules for 280 (top) and 300 bar (bottom). In comparison to the growth of the undoped compounds, a more homogeneous and viscous liquid forms during this growth, facilitating the growth of a long and homogeneous boule. In our characterization, the boules grown under 280 and 300 bar show similar phase formations, and hence we only focus on the latter in the following. Already for the small amount of 5% doping, a phase mixture between Ce-substituted PrNiO_3, (Pr,Ce)O_2, and NiO forms (see PXRD in Fig. <ref>a), presumably because the solubility limit of Ce is very low. The phase formation in different regions of the boule is best seen in BSE imaging, with Fig. <ref>b giving a wide-scale overview of the cross section of the boule along the growth direction.
In the OFZ growth, we are constantly feeding the weighed-in stoichiometry, thus when a solubility limit in growth is reached the seed crystal incorporates too little of the corresponding element. Hence this accumulates in the growth until an eutectic point (the minimum in the phase diagram, e.g 0.2 and 0.75 shown in Fig. <ref>c) in the phase diagram is reached. Here, the eutectic part freezes out and forms a layer of the eutectic stoichiometry (which contains a major part of the impurity phase). This eutectic can be seen in Fig. <ref>b, which shows a large crossection of the boule via BSE with elemental resolution. Here black lines are visible revealing these eutectic rings. Figure <ref>d shows a magnified version of these eutectic mixes, revealing three phases intergrown with each other. Note that for all subsequent BSE images in the other subchapters we only show the magnified versions of the eutectic parts in our boule and XRD focuses on these sections as well, but in all cases similar eutectic rings are formed and completely homogenous parts exist as well, similar to what is visible in Fig. <ref>b on the top right.
Nonetheless, the matrix is a perovskite phase as the Laue diffraction images from broken surfaces of the boule (not shown here) can be indexed in space group Pbnm, which suggests that large grains of Ce-substituted PrNiO_3 have crystallized in our boule, although BSE imaging reveals that the other two phases are intergrown in these grains (Figs. <ref>b,d). Importantly, we do not detect any traces of higher order Ruddlesden-Popper phases, such as Pr_4Ni_3O_10, which is in contrast to the Pr-excess growth of the undoped compound shown in Fig. <ref>e. Instead, we observe already the formation of (Pr,Ce)O_2 which is an endmember of the growth phase diagram. Hence, we propose the modified growth phase diagram shown in Fig. <ref>c for a Ce-mixed pseudo binary composition.
§.§ Electron-doping of LaNiO_3
Next, we focus on the electron-doping of LaNiO_3. As a first attempt we attempted to synthesize crystals of composition La_0.8Ce_0.2NiO_3, which according to prior work on infinite-layer thin films is optimal for superconductivity, but observed that the growth was not very stable as can be seen in the image of the grown boule in Fig. <ref>a. Large crystals could not be extracted from the growth and we find a low wt% of the perovskite phase (see Tab. <ref>). Thus, we go for lower substitutions and we obtain large single-crystals with millimeter dimensions (as shown in Fig. <ref>a) for the nominal compositions La_0.95Ce_0.05NiO_3 and La_0.91Ce_0.09NiO_3. For the La-based compounds, we observe that already small Ce-substitution levels lead to the emergence of a competing phase, which in this case is the pyrochlore phase La_2Ce_2O_7. The presence of this secondary phase is detected in the PXRD patterns (Figs. <ref>b,c) and the BSE image (Fig. <ref>e), revealing that La_2Ce_2O_7 is embedded in the single crystalline perovskite matrix as eutectic rings together with NiO. Hence, Ce-substitution of the perovskite phase is challenging, although small crystals of (La,Pr)_1-xCe_xNiO_3 surrounded by regions of secondary phases can be realized in the boule.
A different route to achieve electron-doping can be the substitution by tetravalent Zr ions. Such doping was recently proposed theoretically for the topotactically reduced n =2 compound La_2.4Zr_0.6Ni_2O_6 <cit.> where superconducting behavior is expected according to the nominal valence of Ni. As a first step, we attempt to synthesize this material at a moderate pressure of 14 bar oxygen. While such pressure was sufficient to synthesize the non-substituted n =2 compound La_3Ni_2O_7 <cit.>, we find for the Zr-doped variant that a phase mixture between the perovskite and the double-perovskite phase La_2NiZrO_6 forms (see Tab. <ref>). As the n = ∞ phase is typically stable starting with 30 bar <cit.>, we reduce the pressure and attempt a synthesis at 2.5-4.5 bar (see Tab. <ref>). In this case, however, we obtain a mixture of the n=0 and n = ∞ phases, finally proving the destabilization of higher order Ruddlesden-Popper phases with electron doping.
As a next step, we focus on the synthesis of the perovskite phase and test the maximally possible Zr substitution content. To this end, we prepare a ”gradient rod” that is composed of sections with different substitution levels of Zr. The prepared sections correspond to 5%, 8.75%, 12.5%, 16.75%, and 20% Zr (Fig. <ref>a). During the growth, we observe that all substitutions help to stabilize the melt, while secondary phases emerge for 8.75% and higher substitutions. Notably, the growth conditions are stable even in the case of evolving phase mixtures. However, no large single-crystals form in these attempts due to the shrinking sizes of the perovskite domains, as evidenced in the BSE images, seen in the decrease of a single contrast matrix (see Figs. <ref>f, g). A rudimentary proposal for the pseudo-binary phase diagram is shown in Fig. <ref>h, which notably only provides points in the center, where our PXRD characterization (exemplary PXRD patterns in Figs. <ref>b,c) reveal the formation of the pure perovskite and perovskite double-perovskite mixtures at higher doping levels, respectively.
Additionally, we explore the phase formation for the growth of La_0.8Zr_0.2NiO_3 at various oxygen partial pressures ranging from 15, 25, 35 to 150 bar (not shown here). We find that for a stable growth of the perovskite phase at least 35 bar pressure is required. Above this pressure, however, we do not detect significant changes in the phase formation, nor the incorporated Zr-substitution content. These results indicate that the solubility limit of Zr in nickelates lies around 8%, which is also confirmed by a detailed EDS analysis (see Appendix B), where even for crystals from a growth with a nominally higher Zr substitution the detected content in the perovskite matrix does not exceed 8%. The formed double-perovskite phase can be considered as an impurity phase where Zr substitutes half of the Ni site, instead of the desired substitution of the La sites. Hence, one might suspect that Zr generally substitutes the Ni site, even for low Zr admixtures. However, our Rietveld refinement of the perovskite phase reveals that in this phase Zr solely occupies the La site. Moreover, we observe a solubility limit of approximately 8% also for substitutions with elements other than Zr (see below).
Motivated by these insights, we next carry out the growth of crystals with much lower Zr substitution content and optimized growth parameters. Pictures of the corresponding La_0.98Zr_0.02NiO_3 and La_0.95Zr_0.05NiO_3 boules are shown in Fig. <ref>. From the boules, we extract large and high-quality perovskite crystals, as confirmed by PXRD (Figs. <ref>a,b) and BSE imaging (Fig. <ref>d), as well as EDS analysis (see Appendix B).
§.§ Hole-doping of LaNiO_3
Hole-doping of perovskite nickelates can be achieved through the substitution with divalent ions such as Ca and Sr. In a previous high-pressure flux growth, the synthesis of small Ca-substituted LaNiO_3 single-crystals was achieved <cit.>, with 8% Ca content determined by XRD refinement, whereas EDS indicated a higher level of 12% in cleaved pieces. In an attempt to realize larger Ca-substituted crystals with the floating zone technique under oxygen pressures ranging from 200-300 bar, we perform growths with nominal compositions La_0.8Ca_0.2NiO_3 and Pr_0.8Ca_0.2NiO_3. However, we find that only the n = 1 Ruddlesden-Popper phases La_1.6Ca_0.4NiO_4 and Pr_1.6Ca_0.4NiO_4 as well as NiO form for such high substitution levels (not shown here). Since our floating zone growth with Zr revealed a solubility limit around 8% (Fig. <ref>e), we next attempt the growth of La_0.92Sr_0.08NiO_3. However, we find that connecting the corresponding feed and seed rods at 300 bar pressure is unfeasible, due to a heavy oxidization and freezing of the melt. To circumvent these issues we start the growth at a lower pressure and by sequentially increasing it up to 300 bar a stable growth is achieved. The results of the growth are presented in Tab. <ref>.
To confirm the anticipated the solubility limit of Sr of 8%, we subsequently attempt the growth of La_0.84Sr_0.16NiO_3. Unlike in the 8% case, a direct connection at 300 bar pressure is possible, highlighting that an increase in pressure is necessary to stabilize the higher nominal Ni^3.16+ oxidation state compared to Ni^3.08+ in La_0.92Sr_0.08NiO_3. Figure <ref>a shows the PXRD of pulverized pieces broken off from different parts of the boule. As expected for the growth of hole-doped compounds where extreme oxygen pressures are required, a concentration gradient is observed towards the center of the boule (see inset in Fig. <ref>a). In particular, we find that the fraction of the desired perovskite phase decreases towards the center of the boule, which is in accord with an increasing oxygen deficiency reported in previous floating zone growths of LaNiO_3 <cit.>. The wt% and lattice constants from three different regions of the boule are summarized in Tab. <ref>.
As a secondary phase in the PXRD, we detect the n = 1 Ruddlesden-Popper phase, which is also evidenced in the BSE imaging as a light color (Figs. <ref>b,c). As can be seen especially in Figs. <ref>b,c, in the center of the boule the grains of the perovskite phase are immersed in a matrix of the n = 1 Ruddlesden-Popper phase and NiO (Fig. <ref>b), whereas the situation is reversed in the outer regions, where the Ruddlesden-Popper phase and NiO form small inclusions in the perovskite phase (Fig. <ref>c). Such an intergrowth behavior appears to be a general characteristic for all investigated substitutions above 8% and points towards the existence of a general solubility limit in perovskite nickelates independent of the dopant species. In contrast, substitution levels far above 8% have been reported for thin film perovskite nickelates <cit.>, which might be facilitated by the epitaxial strain provided by the substrate.
Nevertheless, our EDS analysis (see Appendix B) performed locally on grains with the perovskite phase indicates that the Zr and Sr contents can reach values between 12 and 13% (see Tab. <ref>), which is comparable to the highest value detected by EDS in high-pressure flux grown Ca-substituted LaNiO_3 single-crystals <cit.>. These values are substantially higher 8%, but it is possible that the EDS analysis includes a systematic overestimation of these dopants. In contrast, for the rare-earth element Ce, where the EDS analysis is based on the M instead of the K-line, we find a maximum of 7(1)% substitution (Tab. <ref>), which is compatible with a solubility limit of 8%. In conclusion, complementary studies with analyses methods other than EDS are desirable to determine the maximum substitution content of Zr and Sr realizable in optical floating zone grown perovskite nickelates.
§.§ Physical properties
Recent investigations into the effects of hole- and electron-doping on perovskite nickelates have primarily utilized thin film samples <cit.>. In electron-doped Nd_1-xCe_xNiO_3, films, a suppression of the metal-to-insulator transition of NdNiO_3 was reported, favoring a shift towards a metal-to-metal transition for Ce doping levels as subtle as 2.5% <cit.>. In the following, we explore similar effects in our (Pr,Ce)NiO_3 crystals.
Figure <ref> presents a comparative analysis of the electronic and magnetic properties of our undoped and Ce-doped PrNiO_3 crystals. Note that the crystals have been cut into oriented pieces and annealed in a high gas pressure autoclave at 600^∘C under an oxygen partial pressure of 600 bar for 5 days, followed by a rapid cooling down to room temperature. This methodical approach ensures the exclusion of the influences of oxygen deficiency on the measured properties. Figure <ref>a displays the expected sharp metal-to-insulator transition over several orders of magnitude in the resistivity in the undoped PrNiO_3 single-crystal around 128 K, consistent with previous literature <cit.>. For our single crystals with a nominal composition of Pr_0.95Ce_0.05NiO_3, EDS indicates a lower realized Ce substitution level of around 2(1)%. We observe a metal-to-metal transition around 116 K in Fig. <ref>b. This change from a metal-to-insulator to a metal-to-metal transition is qualitatively similar to that in Nd_1-xCe_xNiO_3 films <cit.>.
In the magnetic susceptibility, anomalies with hysteretic behavior occur at similar temperatures as the metal-to-insulator and metal-to-metal transition in PrNiO_3 and (Pr,Ce)NiO_3, respectively (Figs. <ref>c,d). These anomalies indicate the presence of an antiferromagnetic transition <cit.>. Above the transition, the susceptibility can be well described by a Curie-Weiß law χ = C/T - θ_W, which yields the Curie constant C=2.20(6) emu K/mol Oe^-1=27.6(7)·10^-6m^3K/mol and θ_W=-91(1) K for (Pr,Ce)NiO_3. This is well comparable to undoped PrNiO_3 with a Curie constant C=2.38(1) emu K/mol Oe^-1=29.9(1)·10^-6m^3K/mol and θ_W=-99(1) K. We note that the magnetic signal of our (Pr,Ce)NiO_3 sample contains a contribution from the intergrown (Pr,Ce)O_2 phase, which exhibits an antiferromagnetic transition at 12 K whose signature can be seen both in the resistivity and susceptibility (Figs. <ref>b,d).
The susceptibility curves in Figs. <ref>c,d are expected to be dominated by the paramagnetic signal of the Pr^3+ cations. Nevertheless, the effective moment is extracted using μ_eff=√(3kC/Nμ_B^2)=2.828√(C). Assuming the effective paramagnetic moment of non-disproportionated Ni^3+ LS (μ_eff = 1.73 μ_B=√(3kC/Nμ_B^2)), C_Ni^3+ is estimated to be 0.375 emu K/mol. Subsequently, if we consider C=C_Ni+C_Pr, μ_eff of Pr is calculated to be 3.82 μ_B which is only slightly larger than the expected moment of 3.5 μ_B and the reported one for PrNiO_3 <cit.>.
Next, we characterize the physical properties of a selection of La-based compounds. To this end, we analyze large crystals of LaNiO_3, La_0.95Zr_0.05NiO_3, and La_0.91Ce_0.09NiO_3 by Laue diffraction and cut them into rectangular pieces with the (11̅0) direction pointing out of the plane, while the c-axis and (110) direction lie in the horizontal plane (Fig. <ref>a). Subsequently, the surfaces of the shaped and oriented crystals are polished using a 1 μm grain size. The crystals are then annealed in a high gas pressure autoclave at 600^∘C under an oxygen partial pressure of 600 bar for 5 days, followed by quick cooling to room temperature. We find that oxygen annealing is required to alleviate the oxygen deficiency of the as-grown crystals, which induces antiferromagnetic order<cit.>.
Figure <ref>b) shows the resistivity of the annealed crystals measured along the c axis. The high quality of our single-crystals is reflected in the large residual resistivity ratio (RRR) of LaNiO_3, with a value of 19, exceeding previously reported values <cit.>. The RRR value of the Zr-substituted crystal is found to be 3, and that of the Ce-substituted crystal is 43. The slope of the resistivity curve of all three crystals is similar, with Fermi-liquid behavior at temperatures below 100 K and a T^1.5 scaling behavior at higher temperatures. This is consistent with earlier reports of OFZ grown perovskite nickelates <cit.>. While the Ce substitution lowers the overall resistivity, as expected for electron doping, Zr substitution appears to show the opposite effect.
Due to the metallic nature, the magnetic properties of LaNiO_3 are not as simple as following the expected Curie laws, unlike PrNiO_3. Pure LaNiO_3 has complex magnetic correlations that can not be described by simple Pauli paramagnetism <cit.>. A Stoner enhanced maximum (highlighted by an arrow) is observed in LaNiO_3 centered around 220 K (Fig. <ref>c), which cannot by described by simple Pauli paramagnetism. Zr doping leads to a complete suppression of this observed maximum in the magnetic susceptibility. As a result, the full temperature range of the susceptibility of the Zr doped compound can be captured by a Curie-Weiß fit, with C=1.504(3)· 10^-3 emu K/mol Oe^-1=1.89(4) · 10^-8 m^3K/mol and θ_W=-2.2(1) K. As discussed above, we take C_Zr to be C-0.375. This results in μ_eff=3μ_B which is comparable to an expected moment μ_Zr^2+=2.8μ_B of a J=2 state, suggesting a magnetic Zr contribution. Notably, on the other hand we observe a similar Stoner enhanced maximum and a somewhat lower Curie tail for Ce substitution. In La_0.91Ce_0.09NiO_3, however, a considerable mass fraction is La_2Ce_2O_7. La_2Ce_2O_7 is solely diamagnetic and results in a small contribution in the susceptibility. Thus for the Ce doped crystal, the real mass of the perovskite phase was considered, determined from PXRD via Rietveld refinement.
§.§ Valence states
To further characterize our samples and determine the oxidation states of the substituted ions, we investigate freshly cleaved surfaces of selected single crystals using XPS. Specifically, this technique offers insights into the effectiveness of the intended charge carrier doping. Due to the significant overlap and complex multi-component structures of the La 3d and Ni 2p core-level spectra, we will concentrate on a qualitative analysis in the following.
An overview of all XPS spectra across a wide binding energy range is displayed in Fig. <ref>a. In Fig. <ref>b, the peaks at 854.8 and 851.2 eV correspond to La 3d_3/2, while the doublet at 837.9 and 834.6 eV can be identified as La 3d_5/2, indicating the La^3+ oxidation state. The La 3d_3/2 and Ni 2p_1/2 peaks overlap <cit.>, and the Ni 2p_3/2 peak is accompanied by a satellite line, which is positioned approximately 6–7 eV higher in binding energies. The Ni 2p_1/2 peak follows at 872.3 eV.
Figure <ref>c focuses on the XPS of La_0.91Ce_0.09NiO_3. Ce 3d is known to exhibit a very complex multiplet splitting, resulting in several peaks. Notably, the multiplet structure observed in our La_0.91Ce_0.09NiO_3 spectrum matches the characteristic spectrum of Ce^4+ in CeO_2 <cit.>, thereby confirming the presence of carrier doping in this system since Ce^4+ is replacing La^3+ sites. In contrast, Ce^3+ would give rise to two doublets at 880.9 eV / 885.2 eV and 899.1 eV / 903.4 eV <cit.>, which we do not observe.
In the case of the presence of Zr^4+ in electron-doped La_1-xZr_xNiO_3, the corresponding peak of the Zr 3d_5/2 binding energy is expected around 182 eV <cit.>. However, we observe two Zr 3d_5/2 doublets, corresponding to binding energies of 181.1 eV and 180 eV, clearly indicating a lower Zr valency, and likely even a mixed valence state (see reference data from ZrO_2 in Fig. <ref>d). This suggests that the anticipated electron doping might not have occurred in this system, which is consistent with the increase in resistivity observed in the Zr-substituted crystal, as opposed to the decrease in resistivity in the Ce-substituted crystal (Fig. <ref>b). Nevertheless, we cannot rule out that the shape of the spectrum in Fig. <ref>d is strongly influenced by surface effects, as in the case of Sr-doping, which is described below.
Lastly, we explore the possibility of achieving hole-doping by incorporating Sr^2+ ions in La_0.92Sr_0.08NiO_3. Previous works
have suggested that hole doping is realized in Sr-substituted nickelate thin films <cit.>, but no direct evidence of the Sr^2+ valence state has been reported so far, to the best of our knowledge. In the XPS of our La_0.92Sr_0.08NiO_3 crystal, we find that the Sr 3d multiplet exhibits several overlapping peaks (Fig. <ref>e), in particular the 3d_5/2 peaks at 131.8 eV and 133.8 eV, and the 3d_3/2 peaks at 133.6 eV and 135.6 eV. These peaks differ from the simpler 3d doublet at 134.3 eV and 132.5 eV of Sr^2+ in SrTiO_3 <cit.>. Yet, our observed peak structure is closely similar to that of the Sr ions in La_0.4Sr_0.6CoO_3<cit.>. In the cobaltate, the two doublets were ascribed to distinct surface and lattice Sr sites, due to the formation of SrO and Sr(OH)_2 on the surface. Such a scenario could apply similarly to the Sr ions in our nickelate sample.
§ SUMMARY
In summary, we have successfully grown single-crystals of LaNiO_3, PrNiO_3, La_0.98Zr_0.02NiO_3, La_0.95Zr_0.05NiO_3, and La_0.92Sr_0.08NiO_3. On the other hand, the (La,Pr,Eu)_6Ni_5O_16 and La_2.4Zr_0.6Ni_2O_7 phases did not form under the employed growth conditions. The synthesis of Pr_0.95Ce_0.05NiO_3, La_0.95Ce_0.05NiO_3, La_0.91Ce_0.09NiO_3, and La_0.84Sr_0.16NiO_3 was accomplished, albeit with the drawback of substantial amounts of impurity inclusions in the matrix. In general, we find that substitutions higher than 8%, or any electron doping of the Ruddlesden-Poppers phases n=2,3 lead to phase separation for OFZ growth under 300 bar oxygen pressure.
Our physical properties and XPS characterizations reveal that electron-doping is realized for Ce-substituted PrNiO_3 and LaNiO_3, whereas Zr-substitution likely results in mixed-valent states, potentially affecting the metallic ground state and the Stoner enhanced maximum in the magnetic susceptibility.
Our results underscore the pressing need for new, cutting-edge strategies that transcend the conventional single dopant route to realize bulk crystals of nickelate superconductors. Thus, co-doping with more than one ionic species or self doping via the oxygen stoichiometry, as previously established in cuprates<cit.>, might offer a path to realize superconductivity in topotactically reduced nickelate crystals.
We thank F. Predel for acquiring BSE images at an early stage of this project, and C. Busch for technical support. We acknowledge helpful discussions with S. Hayashida and are grateful for access to the Merlin SEM at the Scientific Facility Nanostructuring Lab (NSL) at MPI FKF.
§ PRESSURE DEPENDENCE
Before the OFZ growth of La_1-xZr_xNiO_3 substitution series, we performed a study on the growth pressure during a single growth of La_0.8Zr_0.2NiO_3. Our initial attempt employed 15 bar oxygen pressure, as used in the nominally grown La_2.4Zr_0.6Ni_2O_7, and we gradually increased the pressure up to 150 bar, where the growth remained stable. We note that the application of 300 bar pressure led to growth instabilities.
In Fig. <ref>, we present a summary of the pressure change effects, revealing that phase formation remains unaffected from 35 bar onwards. Consequently, we chose an intermediate pressure of 85 bar for our final growth. Although the BSE images display a significant phase mixture (Fig. <ref>, we can still isolate small pieces of major phase accumulations from the n=1 Ruddlesden-Popper phases and perovskite phases. Interestingly, the perovskite parts appear to contain an increased substitutional content of Zr (Fig. <ref>), which coexist alongside of tiny domains of NiO and the n=1-Ruddlesden-Popper phase.
§ ELEMENTAL DISPERSIVE X-RAY SPECTROSCOPY
In addition to the extensive XRD and BSE analysis, we also performed EDS characterization, with representative results summarized in Fig. <ref>. We focused the EDS analysis on homogeneous pieces with lower doping levels, confirming the successful substitution through a systematic examination of at least 20 points across various regions of the grown boule on the phase-pure part of perovskite matrix. An example SEM image is provided in Fig. <ref>.
By averaging over all measured points and considering the standard deviation as error bars, we obtained the following substitutional contents: (a) Pr_0.93(3)Ce_0.02(1)NiO_3, (b) La_0.96(2)Ce_0.046(3)NiO_3, (c) 5%: La_0.99(9)Zr_0.071(4)NiO_3, 8.75%: La_0.96(5)Zr_0.11(1)NiO_3, 12.5%: La_0.9(2)Zr_0.12(2)NiO_3, 16.75%: La_0.91(6)Zr_0.12(1)NiO_3, (d) La_0.9(2)Zr_0.07(2)NiO_3, (e) La_0.8(1)Sr_0.12(5)NiO_3.
The phase separation, evident in both BSE and XRD data for doping levels of 8% and higher, is due to the solubility limit of the substituent. This phenomenon is also reflected in the EDS data, which reveal a limited substitution content in the bulk matrix. It is important to note that, in comparison to the rare-earth elements, the Zr and Sr content is systematically overestimated in our EDS analysis by a few percent. The radial distribution of phases in the boule, resulting from the varying oxygen partial pressure within the melt, is best illustrated in our EDS map displayed in Fig. <ref>f. In this figure, all elements are color coded and superimposed onto the same SE image, revealing a red perovskite region at the exterior and a blue n=1 Ruddlesden-Popper phase region at the interior of the grown boule.
|
http://arxiv.org/abs/2306.05253v1
|
20230608144848
|
Quantum computing algorithms for inverse problems on graphs and an NP-complete inverse problem
|
[
"Joonas Ilmavirta",
"Matti Lassas",
"Jinpeng Lu",
"Lauri Oksanen",
"Lauri Ylinen"
] |
math.CO
|
[
"math.CO",
"cs.CC",
"quant-ph",
"52C25 (Primary) 68Q12, 68Q17 (Secondary)"
] |
|
http://arxiv.org/abs/2306.06039v2
|
20230609170420
|
Possible high $T_c$ superconductivity in La$_3$Ni$_2$O$_7$ under high pressure through manifestation of a nearly-half-filled bilayer Hubbard model
|
[
"Hirofumi Sakakibara",
"Naoya Kitamine",
"Masayuki Ochi",
"Kazuhiko Kuroki"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con",
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
[email protected]
Advanced Mechanical and Electronic System Research Center(AMES), Faculty of Engineering, Tottori University, 4-10 Koyama-cho, Tottori, Tottori 680-8552, Japan
Computational Condensed Matter Physics Laboratory, RIKEN, Wako, Saitama 351-0198, Japan
Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan
Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan
Forefront Research Center, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan
Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan
Inspired by a recent experiment showing that La_3Ni_2O_7 exhibits high T_c superconductivity under high pressure, we theoretically revisit the possibility of superconductivity in this material. We find that superconductivity can take place which is essentially similar to that of the bilayer Hubbard model consisting of the Ni 3d_3z^2-r^2 orbitals. Although the coupling with the 3d_x^2-y^2 orbitals degrades superconductivity, T_c can still be high enough to understand the experiment thanks to the very high T_c reached in the bilayer Hubbard model.
74.20.Mn,74.70.−b
Possible high T_c superconductivity in La_3Ni_2O_7 under high pressure through manifestation of a nearly-half-filled bilayer Hubbard model
Kazuhiko Kuroki
July 31, 2023
=============================================================================================================================================
Introduction.—The search for new unconventional high-T_c superconductors has been a great challenge ever since the discovery of the two established families, the cuprates<cit.> and the iron-based superconductors<cit.>. Several previous studies have shown that the cuprates are already in an ideal situation in that they are described by a single-orbital Hubbard model near half-filling on a square lattice, and hence their T_c may be difficult to surpass<cit.>.
One possible approach for pursuing even higher T_c is to realize in actual materials the bilayer Hubbard model, for which several studies have shown that the superconducting T_c can be higher than that of the d-wave superconducting state in the single orbital Hubbard model<cit.>. In fact, the bilayer Hubbard model has been widely studied from the past<cit.>, and s±-wave superconductivity is found to be strongly enhanced near half-filling when the vertical electron hopping (t_⊥) between the layers is several times larger than the in-plane hopping, and the Fermi level (E_F) lies in the vicinity of the edge of one of the bands<cit.>. Nowadays, a band whose edge lies just below or above E_F is often referred to as an incipient band, and has attracted interest in the study of iron-based superconductors<cit.>, bilayer and ladder-type lattices<cit.>, and flat band superconductivity<cit.>.
In fact, one of the present authors proposed that the double-layer Ruddlesden-Popper compound La_3Ni_2O_7 can be a good candidate for realizing a bilayer Hubbard model that satisfies the above-mentioned conditions<cit.>. In this material, for which the Ni 3d electron configuration is d^7.5, the 3d_3z^2-r^2 orbitals are elongated in the z (out-of-plane) direction so that t_⊥ between the layers is much larger than the in-plane hoppings between neighboring d_3z^2-r^2 orbitals, and the d_3z^2-r^2 orbitals are also nearly half-filled. Hence the d_3z^2-r^2 portion of the electronic structure appears to be favorable for superconductivity from the above-mentioned viewpoint of the bilayer model, although deviations from the ideal model arise due to the presence of the nearly quarter-filled Ni 3d_x^2-y^2 bands, which overlap and hybridize with the d_3z^2-r^2 bands.
Given this background, the recent experimental finding that La_3Ni_2O_7 exhibits high T_c superconductivity at high pressures<cit.>, which in itself has huge impact, is certainly intriguing. There, it was shown that the material undergoes a superconducting transition with a maximum T_c of 80 K under pressures above 14 GPa. Several theoretical studies on this material, performed independently of ours, have already appeared right after the discovery of superconductivity<cit.>.
Inspired by this experiment, here we theoretically revisit the possibility of superconductivity in La_3Ni_2O_7 by constructing a four-orbital model that takes into account the crystal structure at high pressures. We find that s±-pairing superconductivity, which is essentially similar to that of the bilayer Hubbard model, can take place with high T_c that is consistent with the experimental observation. Although the coupling between d_3z^2-r^2 and d_x^2-y^2 orbitals degrades superconductivity, T_c can still be high because of the very high T_c attained in the bilayer Hubbard model. We also discuss ways to further enhance superconductivity of this material.
Method.—First, we perform first-principles calculation to obtain the band structure of La_3Ni_2O_7 using the QUANTUM ESPRESSO code <cit.>.
Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA) <cit.>
and the scalar-relativistic version of the optimized norm-conserving Vanderbilt pseudopotentials <cit.> taken from PseudoDojo <cit.> are used.
Experimental lattice constants and theoretical atomic positions of La_3Ni_2O_7 under the pressure of P=29.5 GPa are taken from Ref. <cit.> for the input parameter.
Since the orthorhombicity at P=29.5 GPa is quite small ((a-b)/a∼ 1.3 %, where a,b are lattice constants for space group Fmmm),
we adopt a body-centered tetragonal structure (I4/mmm, Fig. <ref>(a)) as in La_2CuO_4, with the lattice constants determined as the average of the original ones, i.e., a^*=b^*=(a+b)/2√(2). We take a 100 Ry plane-wave cutoff energy, a 12 × 12 × 12 k-mesh, and a 0.02 Ry width for Gaussian smearing.
We then extract (maximally localized) Wannier functions <cit.> using the RESPACK code <cit.>,
by which we also obtain the hopping parameters among the Wannier functions.
We construct a four-orbital model consisting of the d_x^2-y^2- and d_3z^2-r^2-like Wannier orbitals centered at the two Ni sites per unit cell.
Important parameter values are given in Table <ref>.
Figure <ref>(c) shows superposed band-structures given by first-principles and Wannier interpolation, where precise fitting around the Fermi level is achieved.
We explore the possibility of superconductivity
for the obtained low-energy four-orbital model within the fluctuation-exchange approximation (FLEX) <cit.>.
As the interaction term of the Hamiltonian, we only take the on-site interactions, namely, intraorbital(interorbital) Coulomb interactions U(U'),
Hund's coupling J, and pair hopping J'.
We assume the orbital rotational symmetry, namely, we take the same value of U for the d_x^2-y^2 and the d_3z^2-r^2 orbitals,
and U'=U-2J, J=J'.
Since typical values for the cuprates are U/t=7-10
(where |t|≃ 0.45 eV is the first-principles value<cit.> of the nearest-neighbor hopping among the d_x^2-y^2 orbitals), we take U=3 eV.
We also take J=0.1U, i.e., J=J'=0.3 eV and U'=U-2J=2.4 eV.
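For reference, the interaction parameters above follow from just two inputs, U and the ratio J/U, via the rotationally symmetric relations U'=U-2J and J'=J. The following minimal Python sketch is illustrative only; the values and relations, not the code, come from the text.

# Minimal sketch of the on-site interaction parameters used in the text.
# Only U = 3 eV, J = 0.1U, U' = U - 2J, and J' = J are taken from the paper;
# the rest is generic bookkeeping.
U = 3.0                  # intraorbital Coulomb interaction (eV)
J = 0.1 * U              # Hund's coupling (eV)
J_pair = J               # pair hopping, J' = J
U_prime = U - 2.0 * J    # interorbital Coulomb interaction, U' = U - 2J
print(U, J, J_pair, U_prime)   # 3.0 0.3 0.3 2.4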
We calculate the self-energy induced by spin fluctuations, formulated as in the literature <cit.>, in a self-consistent calculation.
The real part of the self-energy at the lowest Matsubara frequency is subtracted in the same manner as in Ref. <cit.>
to maintain the band structure around the Fermi level obtained by the first-principles calculation.
The obtained Green's function and the pairing interaction, mediated mainly by spin fluctuations, are plugged into the linearized Eliashberg equation.
Since the eigenvalue λ of the Eliashberg equation reaches unity at T=T_c,
we adopt it as a measure of superconductivity at a fixed temperature, T=0.01 eV.
For convenience, we will call the eigenfunction (with the largest eigenvalue) of the linearized Eliashberg equation at the lowest Matsubara frequency iω(=iπ k_B T) the “superconducting gap function”. We take a 16×16×4 k-point mesh and 2048 Matsubara frequencies for the FLEX calculation.
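As a rough illustration of how the eigenvalue λ can be extracted once the linearized Eliashberg equation is discretized, the sketch below applies power iteration to a generic pairing kernel. This is not the FLEX implementation used here; in the actual calculation the kernel is built from the spin-fluctuation-mediated pairing interaction and the dressed Green's functions on the full k-point and Matsubara-frequency grid, and the symmetric random matrix used below is only a stand-in.

import numpy as np

def leading_eigenvalue(K, n_iter=500, tol=1e-10, seed=0):
    """Dominant eigenvalue and eigenvector of a kernel K by power iteration.
    In the Eliashberg context, K acts on the gap function Delta(k, i*omega)
    flattened into a vector; lambda reaching unity signals T = T_c."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=K.shape[0])
    v /= np.linalg.norm(v)
    lam_old = 0.0
    for _ in range(n_iter):
        w = K @ v
        v = w / np.linalg.norm(w)
        lam = v @ (K @ v)          # Rayleigh quotient
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, v

# toy stand-in kernel (symmetric random matrix) in place of the FLEX kernel
rng = np.random.default_rng(1)
A = rng.normal(size=(64, 64))
lam, gap = leading_eigenvalue(0.5 * (A + A.T) / 8.0)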
Results and Discussions.—In Fig. <ref>(a), we show the eigenvalue of the Eliashberg equation λ at T=0.01 eV as a function of the band filling n, denoted as “original model”. n=1.5 corresponds to the stoichiometric composition of the actual material, and n is varied assuming a rigid band. In the figure, we also show a yellow shade presenting the range of the typical values of λ for the high T_c cuprates obtained in the same way<cit.>. It can be seen that the present model, for n=1.5 or larger, exhibits large λ values comparable to those of the cuprates, which implies that the calculation results are consistent with the experimental observation of T_c∼ 80 K.
In Fig. <ref>(d), we show the superconducting gap function Δ(k,iω) of the present model at n=1.5 in the band representation. It can be seen that the gap function is large at portions of the band where the d_3z^2-r^2 orbital component is large, and the bonding and antibonding portions of the d_3z^2-r^2 band (see Fig. <ref>(b)) has opposite signs of the gap.
To understand the origin of the large λ values, we study three other models in the same manner, namely, models in which the following couplings between d_3z^2-r^2 and d_x^2-y^2 orbitals are eliminated: (i) the interorbital interactions U', J, J', (ii) the hybridization, and (iii) both the interorbital interactions and the hybridization. Definition of the models, including those discussed later, is summarized in Table <ref>. The band structure without the hybridization is also presented in Fig. <ref>(c). In model (iii), the d_3z^2-r^2 and d_x^2-y^2 orbitals are completely decoupled, so that the superconducting state is equivalent to that of the bilayer Hubbard model consisting solely of the d_3z^2-r^2 orbitals. It can be seen that both the hybridization and the interorbital interactions degrade superconductivity of the bilayer Hubbard model, but since λ of the bilayer model is significantly large, λ of the original model (full model with both the interorbital interactions and the hybridization included) is still large enough to explain the experimental observation.
The nature of the superconducting gap of the original model (Fig. <ref>(d)) can be more clearly understood by comparing it to that of model (ii) (the model in which the two orbitals are decoupled in one-body level) shown in Fig. <ref>(e). Here, the gap has opposite signs between the bonding and antibonding d_3z^2-r^2 bands. It is an s±-wave superconducting gap in the wide sense of the term in that it changes sign between the two bands, but the antibonding band does not form a Fermi surface. We stress that even when one of the bands do not intersect the Fermi level, the spin fluctuations with finite energy arise as a pairing glue<cit.>. The overall resemblance of the superconducting gaps in Figs. <ref>(d) and (e) further confirms our picture that the superconductivity in the present model is d_3z^2-r^2 orbital driven.
In this context, it is also intriguing to look at the present system from a strong-coupling viewpoint. Within second-order perturbation theory, the interlayer exchange coupling between the d_3z^2-r^2 orbitals is J_⊥=4t^2_⊥/U≃ 0.6 eV for U=3 eV, which is quite large compared to, for example, the nearest-neighbor superexchange coupling in the cuprates. J_⊥ is also much larger than the intralayer hopping between neighboring d_3z^2-r^2 orbitals. Such a large J_⊥ should lead to the opening of a spin gap and induce interlayer pairing superconductivity<cit.>, whose gap function changes its sign between the bonding and antibonding bands in momentum space. This strong-coupling picture is indeed consistent with the FLEX results for both the pure bilayer Hubbard model<cit.> and the present model. We note that there is no antiferromagnetic ordering in spin-gapped systems, and this should also apply to the present model of La_3Ni_2O_7. In fact, the Stoner factor of magnetism (the maximum eigenvalue of Uχ_0(q,0), where χ_0(q,0) is the irreducible susceptibility at the lowest Matsubara frequency) at n=1.5 is obtained within FLEX as 0.955 for the original model, which is smaller (less tendency toward magnetism) than the 0.967 obtained for model (iii), namely, a model that can be considered equivalent to the bilayer Hubbard model, in which magnetic ordering should not be present.
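The second-order estimate quoted above is easy to verify numerically; inverting J_⊥=4t^2_⊥/U for the quoted J_⊥≃ 0.6 eV at U=3 eV also recovers the size of the interlayer hopping it implies (the actual first-principles value of t_⊥ is listed in Table <ref> and is not reproduced here).

import math

U = 3.0         # intraorbital Coulomb interaction (eV)
J_perp = 0.6    # interlayer superexchange quoted in the text (eV)

# interlayer hopping implied by the second-order expression J_perp = 4 t_perp^2 / U
t_perp = math.sqrt(J_perp * U / 4.0)
print(round(t_perp, 2))                 # ~0.67 eV
print(round(4.0 * t_perp**2 / U, 2))    # consistency check: 0.6 eV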
In Refs. <cit.>, some of the present authors studied cases where superconductivity emerges or is enhanced due to the interorbital interactions between the d_x^2-y^2 and other d orbitals.
The effect of the interorbital interactions in the present model is the opposite, namely, they degrade superconductivity.
A large difference is that there is a bonding-antibonding splitting in the d_3z^2-r^2 band in the present bilayer system, which might be the reason why the effect of the interorbital interactions is the opposite. Further study on the origin of the difference between the single and bilayer systems is underway.
Finally, we discuss possible ways to further enhance superconductivity. The band filling dependence presented in Fig. <ref> suggests that T_c may be enhanced by doping electrons. In case it is difficult to dope electrons in the actual material, here we propose alternative ways for achieving a similar effect. We consider a model in which (iv) the level offset between the d_x^2-y^2 and the d_3z^2-r^2 orbitals Δ E=E_x^2-y^2-E_3z^2-r^2 is increased by δ(Δ E)=0.2 eV or (v) |t_⊥| is increased by δ|t_⊥|=0.2 eV (see also Table <ref>). As depicted in Fig. <ref>(a), in both models, the band filling dependence of λ appears to be shifted toward the left (i.e., toward the smaller n regime), so that larger values of λ are attained at n=1.5, i.e., the stoichiometric band filling. From a material designing viewpoint, increasing Δ E and/or |t_⊥| might be achieved by considering mixed anion materials.
The effect of increasing Δ E and/or |t_⊥| can be understood by counting the number of electrons occupying the d_3z^2-r^2 orbitals (namely, summing up the d_3z^2-r^2 orbital weight assuming non-interacting band structure) for each case. In Fig. <ref>(b), we plot λ against n[d_3z^2-r^2], which is the average number of electrons per d_3z^2-r^2 orbital. It can be seen that λ is mainly determined by n[d_3z^2-r^2] (within these three models), which once again supports the picture that the present superconductivity is d_3z^2-r^2 orbital driven. Here, increasing Δ E and/or |t_⊥| results in self-doping of electrons from the d_x^2-y^2 to d_3z^2-r^2 orbitals (see Fig. <ref>(b)). Superconductivity is enhanced as n[d_3z^2-r^2] approaches unity, that is, as d_3z^2-r^2 orbital approaches half-filling, so that the electron correlation effects are enhanced, and at the same time, the Fermi level approaches both the bonding band top and the anti-bonding band bottom, thereby shifting the spin fluctuations toward the lower energy regime and making it more effective as a pairing glue.
Summary.—To summarize, we have studied the possibility of superconductivity in La_3Ni_2O_7 taking into account the crystal structure under high pressure. The system can be considered as a bilayer Hubbard model of the d_3z^2-r^2 orbitals coupled with the d_x^2-y^2 orbitals through interorbital interactions and hybridization. Although the interorbital couplings degrade superconductivity, the T_c can still be high enough to explain the experimental observation, thanks to the very high T_c reached in the bilayer Hubbard model. We have also discussed possible ways to enhance the superconductivity. Electron doping is likely to enhance superconductivity, but in case this is not feasible, increasing Δ E and/or |t_⊥| are alternative ways of achieving a similar effect. This is because these modifications result in a self-doping of electrons from the d_x^2-y^2 to the d_3z^2-r^2 orbitals.
Studies on material design along this line are underway.
We are supported by JSPS KAKENHI Grant No. JP22K03512 (H. S.) and JP22K04907 (K. K.).
The computing resources were provided by the supercomputer system HOKUSAI at RIKEN and the supercomputer system (system B) in the Institute for Solid State Physics, the University of Tokyo.
|
http://arxiv.org/abs/2306.09190v1
|
20230615151619
|
A Search for Nonlinear Balanced Boolean Functions by Leveraging Phenotypic Properties
|
[
"Bruno Gašperov",
"Marko Đurasević",
"Domagoj Jakobović"
] |
cs.NE
|
[
"cs.NE"
] |
In this paper, we consider the problem of finding perfectly balanced Boolean functions with high non-linearity values. Such functions have extensive applications in domains such as cryptography and error-correcting coding theory. We provide an approach for finding such functions by a local search method that exploits the structure of the underlying problem. Previous attempts in this vein typically focused on using the properties of the fitness landscape to guide the search. We opt for a different path in which we leverage the phenotype landscape (the mapping from genotypes to phenotypes) instead. In the context of the underlying problem, the phenotypes are represented by Walsh-Hadamard spectra of the candidate solutions (Boolean functions). We propose a novel selection criterion, under which the phenotypes are compared directly, and test whether its use increases the convergence speed (measured by the number of required spectra calculations) when compared to a competitive fitness function used in the literature. The results reveal promising convergence speed improvements for Boolean functions of sizes N=6 to N=9.
§ INTRODUCTION
Boolean functions find their applications in a wide range of areas, such as cryptography and error-correcting codes <cit.>, telecommunications <cit.>, systems biology <cit.>, and circuit design <cit.>. The properties of Boolean functions that are of critical importance, especially in cryptography, include high non-linearity and balancedness. Non-linearity indicates its distance from the closest affine function, while balancedness points to the equality of the number of zeros and ones in its truth table. In order to find Boolean functions with desired characteristics, metaheuristics such as genetic algorithms (GAs), local search, and algebraic constructions, are commonly used <cit.>. While balancedness is trivial to check, calculation of non-linearity involves calculating the Walsh-Hadamard spectrum, which is computationally costly even when the fast Walsh-Hadamard transform algorithm is used[The fast Walsh-Hadamard transform algorithm reduces the complexity from 𝒪(N^2) (naive implementation) to 𝒪(NlogN).] <cit.>. This is especially problematic for Boolean functions with larger numbers of inputs N. On top of that, the size of the search space itself equals 2^2^N, i.e., it grows super-exponentially with N. Hence, reducing the number of Walsh-Hadamard spectrum evaluations is pivotal to increasing the convergence speed of practically any approach for finding Boolean functions with high non-linearity based on metaheuristics.
To this end, we propose a novel selection criterion for use in metaheuristics which is then employed to directly compare the phenotypes of candidate solutions (Boolean function) and ascertain which solution is more likely to have neighbors with higher non-linearity values. The underlying idea is to exploit the structure of the phenotype landscape - the mapping between the genotypes (Boolean functions encoded as bitstrings) and phenotypes (Walsh-Hadamard spectra). More specifically, the design of the criterion is driven by the close links between phenotypic and fitness landscapes for the underlying problem, given that fitness is typically defined as equal to the non-linearity, which is in turn related to the maximum absolute value of the Walsh-Hadamard spectrum (phenotype). Hence, our work contributes to exploring the relationship between the two landscapes. To demonstrate its viability, the selection criterion is incorporated into a simple first-improvement local search algorithm. It should be noted that our work builds upon the nascent strand of related research in which the goal is to somehow unveil the (in the context of this problem very intricate and convoluted) structure of the phenotype landscape <cit.>. This area of research is still very scarce, as most works, in contrast, focus on exploiting or extensively analyzing the structure of the classical fitness landscape under simple fitness functions (e.g. non-linearity) <cit.>. Another aspect in which our work differs is its exclusive focus on (perfectly) balanced Boolean functions, to which somewhat less attention has been paid in the literature. Interestingly, despite the fact that the consideration of only balanced functions significantly shrinks the search space, this in practice does not lead to noticeable improvements when using metaheuristic-based approaches, such as genetic algorithms. A presumed reason is the increased roughness of the ensuing fitness landscape <cit.>.
The paper is organized as follows. In Section <ref>, preliminaries on Boolean functions and the concept of a phenotype landscape are given. Related work is discussed in Section <ref>. Section <ref> presents the first-improvement local search strategy, the proposed phenotype-based selection criterion, and other key components of the method. The experimental results are given in Section <ref>, including first an introductory analysis of the relationship between different types of balancedness-preserving mutation operators and the non-linearity values of the resulting neighbors and then the main results obtained via two local search variants. Finally, Section <ref> wraps the paper up with a conclusion and a list of potential avenues for further research.
§ PRELIMINARIES
§.§ Boolean functions
A Boolean function is a function in which the input and function values assume only two values, namely 0 and 1.
We define an N-variable Boolean function as a mapping f: 𝔽_2^N →𝔽_2.
Each such function can be uniquely represented by a truth table, which represents pairs of inputs x ∈𝔽_2^N and function values f(x) corresponding to those inputs.
The vector of all output values f(x) is called the value vector Ω_f.
The size of this vector is 2^N, whereas the size of the search space is equal to 2^2^N.
A common requirement for Boolean functions is that they should have the highest possible non-linearity.
The non-linearity (nl_f) of a Boolean function is defined as the minimum Hamming distance between the function and all affine functions, where the Hamming distance between two functions f and g denotes the number of differing output values for the same input, i.e., the number of inputs x ∈𝔽_2^N such that f(x)≠ g(x).
For a given Boolean function, we can calculate its non-linearity as:
nl_f = 2^N-1 - (1/2) max_a ∈𝔽_2^N |W_f(a)|,
where W_f(a) represents the Walsh-Hadamard coefficient. These coefficients can be calculated using the Walsh-Hadamard transform defined as:
W_f(a) = ∑_x ∈𝔽_2^N (-1)^(f(x) ⊕ a· x),
where ⊕ denotes the addition modulo two (bitwise XOR) and a· x denotes the logical AND of a with each coordinate of x. This expression measures the correlation between the function f and the linear function a · x. As previously stated, a common goal is to obtain Boolean functions of the highest possible non-linearity value, which happens when the maximum absolute value of the corresponding Walsh-Hadamard is as small as possible.
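The two definitions above translate directly into code. The following is a minimal reference sketch (not the authors' implementation) of the fast Walsh-Hadamard transform and the resulting non-linearity for a truth table given as a 0/1 list of length 2^N.

def walsh_hadamard_spectrum(truth_table):
    """Walsh-Hadamard coefficients W_f(a) of a Boolean function given as a
    0/1 list of length 2^N (in-place butterfly, O(L log L) with L = 2^N)."""
    w = [(-1) ** bit for bit in truth_table]   # (-1)^{f(x)}
    L = len(w)
    h = 1
    while h < L:
        for i in range(0, L, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(truth_table):
    """nl_f = 2^(N-1) - (1/2) max_a |W_f(a)|."""
    spectrum = walsh_hadamard_spectrum(truth_table)
    return len(truth_table) // 2 - max(abs(c) for c in spectrum) // 2

# example: f(x_1, x_2) = x_1 AND x_2 has truth table [0, 0, 0, 1] and nl_f = 1
print(nonlinearity([0, 0, 0, 1]))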
One interesting variant of Boolean functions is provided by balanced Boolean functions. These functions have a Hamming weight of 2^N-1, meaning that the value vector consists of an equal number of zeros and ones. Balanced Boolean functions are of particular interest since they are more appropriate for being used in cryptosystems <cit.>, due to them not having a bias in their output values.
§.§ Phenotype landscape
Given the set of genotypes Θ and the set of phenotypes 𝒫 (typically 𝒫⊆ℝ^n), a phenotype function p: Θ↦𝒫 simply maps genotypes (parameters) to phenotypes:
p(θ_i) = p_i,
where θ_i is any genotype and p_i its uniquely defined phenotype. In our case, θ_i corresponds to a bitstring representation of a perfectly balanced Boolean function and p_i to its Walsh-Hadamard spectrum. The landscape associated with the mapping p is referred to as the phenotype landscape. As a side note, we mention that in cases when phenotypes represent behaviors of controlled agents and genotypes stand for parameters of the associated controller, which is a common scenario in evolutionary reinforcement learning, this type of landscape is also referred to as a behavior (or feature) landscape.
§ RELATED WORK
There is a rich history of work in finding highly non-linear Boolean functions via metaheuristics <cit.>, starting with Millan et al. <cit.>, who approached the problem with a basic GA, a directed hill climbing method, and also a GA with hill climbing. Other attempts use alternative algorithms like particle swarm optimization <cit.>, simulated annealing <cit.>, or the clonal selection algorithm (CLONALG) <cit.>. Some researchers, like Manzoni et al. <cit.>, incorporated different local search methods into metaheuristics to improve and study the resulting convergence speed and diversity. Throughout the years, various solution representations have been proposed when dealing with the construction of Boolean functions.
The most natural and commonly used representation is the bitstring representation, in which the truth table is encoded as a string of bits of length 2^N. However, recent years saw a rise in the popularity of symbolic-based representations, in which the Boolean function is defined as an expression that can be executed.
Various algorithms relying on such a representation have been considered in the literature, such as genetic programming <cit.> and its Cartesian variant <cit.>.
Although this representation was found to be the most successful one, it still lags behind the bitstring representation in popularity <cit.>. Apart from the aforementioned representation, other representations were also proposed, such as the integer-based <cit.> and floating point representations <cit.>, but neither received a lot of attention in the literature.
An alternative approach to representing Boolean functions is to use the Walsh-Hadamard spectrum to encode solutions <cit.>. Although it is a promising approach, its performance is inferior in comparison to the traditional truth table-based representations.
Until now, non-linearity has been the most commonly considered criterion when evolving Boolean functions, especially in the context of single objective optimization <cit.>.
However, there have been several attempts to construct Boolean functions while considering multiple criteria simultaneously, such as optimizing non-linearity together with algebraic degree <cit.> or autocorrelation <cit.>.
Some studies even consider optimizing more than two criteria simultaneously <cit.>, with good results being achieved.
Most studies dealing with the simultaneous optimization of multiple criteria use either a linear weighted combination of the criteria or a two-level approach in which the first criterion is optimized until a desired value is reached, after which the second one is optimized.
However, certain studies consider the application of multi-objective algorithms for this purpose as well <cit.>.
Only a handful of studies have focused on analyzing the evolutionary process in order to gain deeper insight into the search for high-quality Boolean functions.
Among works focusing on fitness landscape analysis, Jakobovic et al. <cit.> rely on local optima networks (LONs) in order to investigate the influence of different decisions (fitness functions, neighborhood operators, etc.) on the optimization of cryptographic properties.
Somewhat similarly, Picek et al. <cit.> complement fitness analysis with a symmetry analysis, noting that this additional information might lead to more effective search methods.
Furthermore, Picek et al. <cit.> also conduct a fitness landscape analysis in order to analyze the difficulty in obtaining maximal possible non-linearity when constructing balanced Boolean functions.
The performed fitness landscape analysis did not reveal any differences in the landscapes between problems of different input sizes that could justify the increase in the problem difficulty when the number of input variables is increased.
§ METHOD
§.§ Genotypes and phenotypes
The genotypes are simply given as truth tables in the bitstring representation. The corresponding phenotypes are naturally provided by its Walsh-Hadamard (magnitude) spectrum, and this formulation is therefore used in our approach. Alternatively, phenotypes might be defined, for example, as a set of statistics of the spectrum.
§.§ Variation operators
We consider several types of balancedness-preserving mutation operators: a) swaps (both single and multiple), b) cyclic shifts, c) inversions, and d) permutations. The analysis will be used to inform the choice of the operator used in the ensuing local search algorithm. We consider the effect of these main types of mutations on multiple fitness functions and differently defined phenotypes. Let us denote a binary string by s=s_1 s_2… s_L, where s_i is its i-th element and L its length. In our case, s=Ω_f for some Boolean function f, and L=2^N, using the same notation as in Subsection <ref>. Given two binary strings s and t, the Hamming distance ℋ(s, t) is simply defined as the number of elements (positions) in which they differ.
In a single swap, two indices i and j, i ≠ j, such that s_i≠ s_j, are randomly selected and the corresponding values are swapped to obtain a new binary string s'. In a multiple swap, the same procedure is repeated multiple (k) times, all while ensuring that subsequent swaps do not undo previous ones. Cyclic shifts simply move the elements of the string to the right by l positions, while those that "fall off" the end are re-added to the beginning of the string. In an inversion, two indices i and j, i < j, are randomly chosen and the values in the substring s_i… s_j are inverted (replaced by the substring s_j… s_i), and hence s' = s_1 … s_i-1 s_j… s_i s_j+1… s_L, where s' again denotes the new (mutated) string. If s' = s, the process is repeated until s' ≠ s. Finally, a permutation is given by a generalization of an inversion. Again, two indices i and j, i < j, are randomly chosen and the values in the substring s_i … s_j are permuted such that s' ≠ s is ensured. Note that, in this case, the neighborhood of any s equals the space of all perfectly balanced Boolean functions. The effective sizes of the neighborhoods associated with each mutation type and concrete examples for L=6 are given in Table <ref>; a sketch of these operators in code is given below.
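The sketch below (hypothetical helper names, operating on a 0/1 list) illustrates the single swap, a multiple swap realized as k swaps on disjoint positions so that no swap undoes another, the cyclic shift, and the inversion; all of them preserve balancedness.

import random

def single_swap(s):
    """Swap a randomly chosen 0 with a randomly chosen 1."""
    zeros = [i for i, b in enumerate(s) if b == 0]
    ones = [i for i, b in enumerate(s) if b == 1]
    i, j = random.choice(zeros), random.choice(ones)
    t = list(s)
    t[i], t[j] = t[j], t[i]
    return t

def multiple_swap(s, k):
    """k swaps on disjoint positions, so later swaps cannot undo earlier ones."""
    zeros = [i for i, b in enumerate(s) if b == 0]
    ones = [i for i, b in enumerate(s) if b == 1]
    t = list(s)
    for i, j in zip(random.sample(zeros, k), random.sample(ones, k)):
        t[i], t[j] = t[j], t[i]
    return t

def cyclic_shift(s, l):
    """Rotate the string to the right by l positions."""
    l %= len(s)
    return s[-l:] + s[:-l] if l else list(s)

def inversion(s):
    """Reverse a random substring s_i ... s_j (i < j); retry until s' != s."""
    while True:
        i, j = sorted(random.sample(range(len(s)), 2))
        t = list(s[:i]) + list(s[i:j + 1][::-1]) + list(s[j + 1:])
        if t != list(s):
            return t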
§.§ Selection criterion
An intuitive first choice for the fitness function would be to use the non-linearity value of the given Boolean function f, i.e.:
fitness_1 = nl_f.
In <cit.> a novel and more informative fitness function, which considers an additional property of the Walsh-Hadamard spectrum, is used:
fitness_2 = nl_f + (2^N - #maxvalues)/2^N
This fitness function has an additional term that penalizes the number of appearances of the maximal absolute value in
the Walsh-Hadamard spectrum, denoted by #maxvalues. The goal is to promote solutions with few occurrences of the maximal absolute value in the spectrum, as these are in a certain phenotypical sense closer to solutions exhibiting higher non-linearity values. This similarity might be demonstrated by inspecting the right tails of the histograms of their absolute spectrum values. The fitness_2 function is shown to work particularly well in conjunction with local search (LS) <cit.>. It represents the first step towards considering whole phenotypes when evaluating solutions. Both fitness_1 and fitness_2 lead to combinatorial (discrete) fitness landscapes, albeit with differing resolutions (degree of graduality).
Building upon such phenotypical considerations, and utilizing even more properties of the spectrum (which represents the phenotype), we propose a novel selection criterion fully inspired by the phenotype, which can be interpreted as a fully-fledged generalization of fitness_2. More specifically, according to our criterion, not only the number of appearances of the maximal absolute value in the spectrum is penalized, but also the number of appearances of the M-th largest absolute value, for all M. Penalization is less severe for larger M values. Hence, say, a reduction of the number of the maximum absolute value appearances by only one is preferred over a reduction of the number of the second largest absolute value appearances by an arbitrary number. Consequently, when choosing between two solutions, the histograms of their phenotypes (magnitude spectra) are compared. In what follows we formalize the proposed criterion.
Selection criterion. Given two solutions x and y, the one with the higher non-linearity[Which implies lower maximum absolute value in the spectrum.] is preferred. If the non-linearities are the same, the number of appearances of the largest absolute value in the Walsh-Hadamard spectra are compared, preferring the one with the smaller number of appearances. If it is still a tie, compare the number of appearances of the next (M-th) largest possible value in the spectra, M ∈{2,3, …, M_max}, in ascending order, and choose the one with the smaller number of appearances. Repeat until the tie is broken. The same procedure is illustrated in the pseudocode provided below in Algorithm <ref> and illustrated graphically in Figure <ref>. Observe that, according to the used notation, #maxvalues is the same as
#larbasval_1.
We raise the point that the criterion boils down to building and comparing the right tails of the histograms of magnitude spectra, starting with the largest value, all until the first difference in the number of appearances is observed.
Finally, a delicate decision needs to be made as to what happens if the tie is not resolved at all, i.e. if the two resulting histograms are identical. One option is to make the comparison strict, i.e., to only accept strictly better solutions, while the other one is to embrace neutrality. In what follows, for the sake of simplicity, and to avoid the overhead costs associated with accepting a new solution, we choose the former and leave the neutrality considerations for further research. We finally emphasize that instead of using the proposed selection criterion, one might also generalize the fitness function fitness_2 by using M_max penalization terms instead of merely one. This would result in a fitness function that is discretized on a much finer scale than fitness_2 and would in practice require calculations up to impractically large numbers of decimal places when comparing solutions. Hence the use of the proposed selection criterion (as opposed to using it in a form of a fitness function) provides a superior choice in our view. Note that it can also be straightforwardly used in combination with other meta-heuristics, such as GAs and evolutionary strategies (ESs), or, owing to its transitivity, in a slightly modified form to compare more than just two solutions.
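In code, the criterion amounts to comparing the magnitude-spectrum histograms level by level, from the largest possible absolute value downwards; the candidate with fewer appearances at the first differing level wins. A minimal sketch is given below (a hypothetical prefer helper, assuming the Walsh-Hadamard spectra of both candidates are available); note that the initial non-linearity comparison is subsumed, since a lower maximum absolute value means a zero count at the other candidate's maximum level.

from collections import Counter

def prefer(spec_x, spec_y):
    """True if candidate x is strictly preferred over y under the proposed
    criterion; identical histogram tails are not accepted (strict comparison)."""
    hist_x = Counter(abs(c) for c in spec_x)
    hist_y = Counter(abs(c) for c in spec_y)
    for level in sorted(set(hist_x) | set(hist_y), reverse=True):
        if hist_x[level] != hist_y[level]:
            return hist_x[level] < hist_y[level]
    return False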
§.§ Local search
In the context of local search and hill climbers, commonly used in tackling the underlying problem, two main move strategies (pivoting rules) are most frequently used - first-improvement and best-improvement <cit.>. In the first-improvement variant, as soon as a solution with larger fitness is encountered, it is instantly accepted. On the contrary, in the best-improvement strategy, all neighbors are always evaluated and only then is the one with the largest fitness value selected. In the case of a tie, the next solution can be selected randomly. Although which particular strategy performs better depends on the specifics of the task, it is generally found that a vast majority of landscapes are more effectively explored with the former, while the latter sometimes works better for very smooth landscapes <cit.>. Other proposed strategies include, for example, worst and partially worst improvement <cit.>, where the worst improving neighbor among the evaluated neighbors is selected. This again requires evaluation of the whole neighborhood or at least a significant part of it. First-improvement makes for a natural choice in the context of the considered problem given that a swap can increase the non-linearity only by 2. Hence, searching the remainder of the neighborhood after finding an improving solution would not only be much more costly and inefficient but also potentially useless as no better solution could be found, in the sense of higher non-linearity. Although this may not be necessarily the case if other fitness functions are used, our initial experimentation indicates that best-improvement leads to much larger numbers of required evaluations. On top of that, performing the full neighborhood search for each solution is extremely expensive for larger values of N. Consequently, the first-improvement variant is selected.
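Combining the pieces, a first-improvement local search over the single-swap neighborhood might be organized as in the following sketch (a hypothetical driver using the walsh_hadamard_spectrum, single_swap, and prefer helpers sketched above; termination on convergence or on reaching the target non-linearity is simplified to an evaluation budget).

def first_improvement_search(s, max_evaluations=500_000):
    """First-improvement local search for balanced Boolean functions.
    s: truth table as a 0/1 list of length 2^N with equally many 0s and 1s."""
    current = list(s)
    current_spec = walsh_hadamard_spectrum(current)
    evaluations = 1
    while evaluations < max_evaluations:
        neighbor = single_swap(current)
        neighbor_spec = walsh_hadamard_spectrum(neighbor)
        evaluations += 1
        if prefer(neighbor_spec, current_spec):   # accept the first improving move
            current, current_spec = neighbor, neighbor_spec
    nl = len(current) // 2 - max(abs(c) for c in current_spec) // 2
    return current, nl, evaluations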
§ EXPERIMENTAL RESULTS
§.§ Preliminary analysis
In what follows we perform a preliminary analysis for the sake of selecting the mutation operator. We set N=8 (L=2^N=256). For each type of mutation, we sample 5000 genotypes (represented as bitstrings) randomly from the genotypical space, and for each sampled solution, two neighbors are produced by applying the mutation. Then we perform a brief analysis of the fitness correlation between initial solutions and their neighbors, where fitness is simply given by the non-linearity of the solution. The results are summarized in Table <ref>. We employ Spearman's rank correlation coefficient (for measuring the monotonicity of the relationship) and Pearson correlation coefficient (for measuring its linearity). All p-values are significant at α=0.05 significance level. As expected, large positive correlation values are seen for all mutations of swap type, with more swaps leading to reduced correlation values. More interestingly, cyclic shifts lead to positive (albeit much smaller) correlation values, despite the fact that this type of mutation leads to large Hamming distances between the original and new genotypes (on average 2^N-1, which is the same as the expected Hamming distance between two randomly and independently generated genotypes). This hints at the fact that a smaller portion of the non-linearity of a Boolean function is not sensitive to such shifts/rotations. Lastly, inversions lead to somewhat larger correlation values than permutations, which might be due to the fact they preserve a part of the structure (like cyclic shifts).
Taking the results into account, there are several reasons why swaps make for the preferred mutation operator. First, due to high genotypic and phenotypic similarity between neighbors, useful information is not discarded, unlike is the case with other mutation schemes that tend to be more similar to random search. Second, re-computation of the entire Walsh-Hadamard transform is not necessary after each swap, given that there exists a simple update rule that enables evaluation in linear time with respect to the size of the truth table <cit.>. This significantly reduces complexity and is in line with our primary goal of rendering the search more efficient. Finally, it is characterized by simplicity and yields neighborhoods that are large enough to enable local search to reach solutions with high non-linearity values, especially for smaller N.
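The linear-time update mentioned above follows from the fact that a swap changes the truth table at exactly two positions: in the ±1 representation s_x=(-1)^f(x), each coefficient changes by ΔW(a) = -2 s_i (-1)^(a·i) - 2 s_j (-1)^(a·j). A hedged sketch (not the implementation from the cited work) is given below; it assumes the indices are packed so that a·x is the parity of the bitwise AND.

def parity(x):
    """Parity of the number of set bits, i.e. a.x mod 2 for packed indices."""
    return bin(x).count("1") & 1

def update_spectrum_after_swap(spectrum, truth_table, i, j):
    """Update all W_f(a) after swapping truth-table positions i and j
    (with f(i) != f(j)), in O(2^N) instead of redoing the full transform."""
    s_i = (-1) ** truth_table[i]   # old signs (-1)^{f(i)}, (-1)^{f(j)}
    s_j = (-1) ** truth_table[j]
    return [w - 2 * s_i * (-1) ** parity(a & i) - 2 * s_j * (-1) ** parity(a & j)
            for a, w in enumerate(spectrum)]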
§.§ Local search results
Two local search algorithms (one using fitness_2 from <cit.>, denoted by , and one relying on the herein introduced selection criterion, called referring to histograms) are compared for 6 ≤ N ≤ 9 (2^6 ≤ L ≤ 2^9). It should be emphasized that fitness_2 provides a strong benchmark, as it was shown to be particularly effective for this problem in previous research <cit.>, especially in conjunction with local search <cit.>. The results obtained by using fitness_1 are not shown given that they are vastly inferior to those resulting from the use of fitness_2, which is fully in line with the conclusions drawn in <cit.>. Each run of the local search is performed by starting from a randomly generated perfectly balanced genotype. It proceeds until one of the following happens: a) the target non-linearity value (which depends on the selected size N) is obtained, b) convergence to a non-target local optimum takes place, or c) the evaluation budget is exceeded. Convergence (scenario b)) is checked by comparing the current solution with all of its neighbors and is said to happen if no neighbor is better than it. If a) takes place, the run is said to be successful, and otherwise, it is said to have failed. The budget constraint is set to 500000 evaluations. The target non-linearity values for different N sizes are provided in Table <ref>. For N ≤ 8, the largest possible or known[For n=8, it is strongly suspected (but not proved) that the maximum non-linearity is 116, see Dobbertin's conjecture <cit.>. The lowest upper bound is 118.] values are simply used. However, for N=9, the target is set to 236 because local search most often converged to this value during our initial experimentation. The number of runs is set to 200 for all N values except in the case N=9 when it is set to 25 due to significant computational expense.
First, consider the percentages of successful runs for the two algorithms and for different N values, given in Table <ref>. For all considered cases, yields higher percentages of successful runs. To test for statistical significance, Fisher's exact tests[Given that the sample sizes are relatively small, while failure probabilities 1-p are relatively low, the normality assumption is not fully reasonable, and hence Fisher's exact tests are preferred over one-sided Z tests for proportions.] (for proportions) are performed. Statistical significance at the level of α = 0.05 is found for N=6 (p=0.0006) and N=8 (p=0.0000), and hence in all these cases, the null hypothesis of the true odds ratio of the populations underlying the observations equals one is rejected. For N=7 the obtained p-value is p=0.12171 and hence we cannot reject the null hypothesis. Remark that for N=9 the target value is smaller than the largest known value (240), making successful runs much more likely regardless of the variant used.
The main results are shown in Figure <ref> and Tables <ref> and <ref>. Figure <ref> depicts the number of fitness evaluations needed to reach the target non-linearity value in successful runs, shown as a box plot together with the respective percentiles. We first note that the results for N=9 are better than might be expected because of the previously mentioned reason (laxer target value). In general, the plot clearly indicates that the use of selection criterion leads to smaller numbers of required fitness evaluations. To further demonstrate this, we perform the one-sided Mann-Whitney U test on the distributions of the number of evaluating criteria for and . The null hypothesis is that there is no significant difference between the distribution of values in the two groups. The alternative hypothesis is that the first distribution is stochastically less than the second distribution. We perform such tests for all considered N values (6 ≤ N ≤ 9). All resulting p-values are less than 10^-4 and hence significant at the α=0.05 significance level. Consequently, the null hypothesis is rejected, pointing to the superiority of over . Finally, we emphasize that for the variant, an additional cost of building the histogram is incurred. However, this involves scattering elements across buckets (histogram bins) which can be done in linear time 𝒪(L).
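Both significance tests used above are available off the shelf; the sketch below shows how they might be invoked with SciPy. The counts and samples are placeholders, not the experimental data reported in the tables.

from scipy.stats import fisher_exact, mannwhitneyu

# Fisher's exact test on success/failure counts of the two variants (placeholder counts)
table = [[180, 20],    # variant 1: successes, failures
         [150, 50]]    # variant 2: successes, failures
odds_ratio, p_fisher = fisher_exact(table, alternative="greater")

# one-sided Mann-Whitney U test: are the first variant's evaluation counts
# stochastically smaller than the second's? (placeholder samples)
evals_1 = [12000, 15500, 9800, 20100, 17300]
evals_2 = [18000, 25400, 22300, 30900, 27600]
stat, p_mwu = mannwhitneyu(evals_1, evals_2, alternative="less")
print(p_fisher, p_mwu)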
§ CONCLUSION
We proposed a new method for searching for highly non-linear perfectly balanced Boolean functions that utilizes the properties of the phenotypes, represented by Walsh-Hadamard spectra. It is underpinned by a novel selection criterion according to which the phenotypes are directly compared to each other. The experimental results indicate the superiority of the proposed approach with respect to the number of costly Walsh-Hadamard spectrum evaluations that need to be performed to reach the target non-linearity value. In the following work, we plan to investigate the effect of accepting neutral moves in the search for highly non-linear Boolean functions. Some research seems to suggest that in most cases accepting neutral solutions leads to better performance <cit.>, although neutrality's relation to the evolutionary computation approaches has been a contentious topic <cit.>. Secondly, we plan to study the feasibility of the methods for larger N values, as well as the use of other encoding schemes besides binary strings (such as trees which might be used in conjunction with genetic programming). Thirdly, a memetic algorithm-based approach might be investigated <cit.> by using the local search in combination with a population-based global technique. Finally, another potentially promising path might lie in using novelty search <cit.> to try to prevent early convergence by promoting behavioral diversity, or similarly, quality-diversity approaches <cit.> to obtain a wide repertoire of both phenotypically diverse and high-performing solutions. To this end, alternative phenotypic representations could be employed, such as those speculatively proposed in <cit.>, like representations that collapse symmetries associated with (balanced) Boolean functions.
|
http://arxiv.org/abs/2306.03434v1
|
20230606062242
|
Learning-Based Heuristic for Combinatorial Optimization of the Minimum Dominating Set Problem using Graph Convolutional Networks
|
[
"Abihith Kothapalli",
"Mudassir Shabbir",
"Xenofon Koutsoukos"
] |
cs.LG
|
[
"cs.LG",
"cs.DM"
] |
1]Abihith [email protected]
1,2]Mudassir [email protected]
1]Xenofon [email protected]
[1]organization=Department of Computer Science, Vanderbilt University, city=Nashville, TN, country=USA
[2]organization=Department of Computer Science, Information Technology University, city=Lahore, Punjab, country=Pakistan
[cor1]Corresponding author
A dominating set of a graph 𝒢=(𝒱, ℰ) is a subset of vertices S⊆𝒱 such that every vertex v∈𝒱∖ S outside the dominating set is adjacent to a vertex u∈ S within the set. The minimum dominating set problem seeks to find a dominating set of minimum cardinality and is a well-established NP-hard combinatorial optimization problem. We propose a novel learning-based heuristic approach to compute solutions for the minimum dominating set problem using graph convolutional networks. We conduct an extensive experimental evaluation of the proposed method on a combination of randomly generated graphs and real-world graph datasets. Our results indicate that the proposed learning-based approach can outperform a classical greedy approximation algorithm. Furthermore, we demonstrate the generalization capability of the graph convolutional network across datasets and its ability to scale to graphs of higher order than those on which it was trained. Finally, we utilize the proposed learning-based heuristic in an iterative greedy algorithm, achieving state-of-the-art performance in the computation of dominating sets.
Minimum Dominating Set Problem; Combinatorial Optimization; Graph Convolutional Networks; Heuristic Algorithms; Greedy Algorithms; Integer Linear Programming
§ INTRODUCTION
Network-based optimization problems constitute a broad class of problems in the field of combinatorial optimization. These optimization problems offer a means to model highly intricate discrete decision problems across diverse domains where pairwise interactions play a crucial role, such as social network analysis <cit.>, wireless communications <cit.>, operations research <cit.>, scheduling <cit.>, and transportation <cit.>. A considerable portion of these problems belongs to the broader class of NP-hard problems, where it is challenging to find exact solutions, as doing so often necessitates a near-complete enumeration of the entire search space. Consequently, computation of exact solutions is practically infeasible, and approximation or heuristic algorithms are generally favored for practical applications. Although these algorithms exhibit significantly faster runtime and possess sub-exponential theoretical complexities, they often yield suboptimal solutions. Therefore, a key area of research revolves around the development of approximation or heuristic algorithms that can provide solutions that are as close to optimal as possible.
The minimum dominating set (MDS) problem is an important network-based optimization problem that involves finding the smallest dominating set of a given graph. A dominating set of a graph is a subset of the vertices in the graph such that every vertex is either in the dominating set or adjacent to a vertex in the dominating set. The MDS problem aims to find the dominating set of minimum cardinality. Dominating sets have a wide range of applications in various fields, including social networks <cit.>, cybersecurity <cit.>, biological networks <cit.>, bioinformatics <cit.>, multi-document summarization <cit.>, and wireless sensor networks <cit.> among others. The MDS problem is known to be NP-hard <cit.>. Furthermore, it is also Log-APX-complete, so assuming P≠NP, no polynomial-time algorithm can achieve an approximation factor better than O(log |𝒱|) for the MDS problem, where |𝒱| is the number of vertices in the problem instance <cit.>.
While approximation algorithms can provide theoretical bounds on optimality, these guarantees can be weak or unsatisfactory in general, and these algorithms may have poor empirical performance, if they exist at all <cit.>. Alternatively, heuristics lack the theoretical guarantees provided by approximation algorithms but can offer fast algorithms with good empirical performance. However, designing heuristics requires extensive manual trial-and-error and domain expertise <cit.>. Learning-based approaches have emerged as another viable approach for solving NP-hard problems, leveraging their ability to handle complex problems and learn abstract relationships from large amounts of high-dimensional data. Learning-based approaches can also exhibit faster computation and improved scalability compared to traditional algorithms. Recent works have applied learning-based algorithms to various NP-hard problems, such as maximal independent set, traveling salesman, knapsack, quadratic assignment, minimum vertex cover, and satisfiability <cit.>. However, these approaches encounter their own challenges, as most problem instances, especially with graph problems, cannot be adequately represented with fixed-length vectors. Additionally, enforcing problem constraints directly on machine learning models can be challenging. There can also be multiple optimal solutions for a given problem instance, requiring an effective learning-based approach to distinguish between distinct nodes in the solution space. Furthermore, NP-hard problems are inherently computationally intractable, and since obtaining labeled training data for these problems necessitates the computation of exact solutions for a series of problem instances, generating a sufficiently large labeled dataset is itself a time-consuming and resource-intensive task.
This work presents a novel graph machine learning framework to compute minimum dominating sets on arbitrary graphs. Given the challenges associated with developing traditional heuristic algorithms, as described earlier, we propose the use of graph convolutional networks (GCNs) to develop a learning-based heuristic. Specifically, our approach employs a GCN to generate a diverse set of likelihood maps over the set of vertices in a given problem instance, and we then treat these probability maps as heuristic functions for use in constructing a dominating set of the graph. We evaluate the empirical performance of the GCN when supplemented with a simple pruning algorithm or implemented in an iterative greedy (IG) algorithm, and compare these results with the state-of-the-art in the computation of dominating sets.
Contributions. The main contributions of this paper can be summarized as follows: 1) We provide a novel dataset of graph instances for the MDS problem with multiple labeled solutions computed per graph instance. 2) We label MDS solutions for graph instances in real-world datasets containing graphs of varied sizes and spanning different settings. 3) We train a GCN model to generate a series of heuristics on input graphs of any arbitrary structure, and we demonstrate that the resulting learning-based heuristics can outperform a classical greedy approximation algorithm. 4) We demonstrate that the GCN model can generalize across datasets and scale to graphs larger than those on which it was trained. 5) We obtain state-of-the-art performance in computation of dominating sets by using the GCN-based heuristics in an IG algorithm. All data and source code required to reproduce our results can be found at <https://github.com/abi-kothapalli/MinimumDominatingSets>.
The remainder of this work is structured as follows. Section <ref> describes related works in the dominating set literature and graph machine learning. Section <ref> then introduces the MDS problem more formally and presents the standard notation and background leveraged throughout the paper, including several key algorithms for the computation of dominating sets. Section <ref> describes our approach to the MDS problem. In Section <ref>, we present our empirical results and compare them with the previous state-of-the-art for the MDS problem. Finally, Section <ref> concludes this work and summarizes our contributions.
§ RELATED WORKS
We briefly review several related works in the literature. Several theoretical works exist that have attempted to bound the size of dominating sets. For a graph 𝒢 with n vertices, let the minimum and maximum degree of any vertex in 𝒢 be δ and Δ, respectively, and let d be the diameter of the graph (that is, the maximum number of edges on the shortest path between any two vertices in 𝒢). We denote by γ(𝒢) the domination number of 𝒢, which is simply the size of the smallest dominating set of 𝒢. It has been shown that γ(𝒢) satisfies both the bounds n/(Δ + 1) ≤γ(𝒢) ≤ n/2 and (d+1)/3 ≤γ(𝒢) ≤ n - Δ <cit.>. Moreover, <cit.> show that if δ > 1, then γ(𝒢) ≤ n (1+ln(δ + 1))/(δ + 1). For further discussion on the tightest known bounds on γ(𝒢) for various values of δ, we direct the reader to <cit.>.
Exact algorithms for the MDS problem have also been studied extensively. To the best of our knowledge, the current best exact algorithm for the MDS problem is presented in <cit.>. They employ a branch and reduce based algorithm to compute exact solutions to the MDS problem. Using a measure and conquer approach, the authors determine the runtime complexity of their algorithm to be O(1.4969^n) while only requiring polynomial space. Faster algorithms for specific subclasses of graphs have also been developed. In <cit.>, the authors discuss exact algorithms for chordal graphs, circle graphs, and dense graphs, which provide improvements in runtime compared to the O(1.4969^n) complexity required for general graphs. However, these algorithms are still exponential in complexity. Meanwhile, linear time algorithms for series-parallel graphs, k-degenerated graphs, and trees are presented in <cit.>, <cit.>, and <cit.>, respectively.
When we are not restricted to specific subclasses of graphs, however, the computation of minimum dominating sets remains intractable. As a result, there is significant interest in heuristic and approximation algorithms for the MDS problem. The most well-known approximation algorithm for the MDS problem uses a greedy heuristic that iteratively adds the vertex with the greatest number of non-dominated neighbors to a set until that set forms a valid dominating set of the graph. In <cit.>, it is shown that the size of the set returned by this algorithm is upper-bounded by n+1-√(2m+1), where n:=|𝒱| and m:=|ℰ| represent the number of vertices and edges, respectively, in the problem instance. This algorithm also achieves an O(logΔ) approximation factor, and <cit.> demonstrate that, in fact, a logarithmic approximation factor is the best one can do, assuming P≠NP. Variants of this greedy heuristic and their empirical performances are discussed in <cit.>.
Conversely, several heuristic algorithms exist in the literature which do not offer the same theoretical guarantees as approximation algorithms but demonstrate strong empirical performance. We will briefly discuss the state-of-the-art algorithms for the computation of dominating sets. The most recent is an iterative greedy (IG) algorithm, proposed in <cit.>, which constructs an initial dominating set and iteratively destructs and reconstructs portions of the solution to improve the solution size. The randomized local search (RLS) algorithm, presented in <cit.>, builds solutions from different permutations of the vertices in a problem instance using a greedy approach and incorporates a so-called jump operator to enhance the solutions. Finally, <cit.> presents an ant colony optimization (ACO) algorithm enhanced with local search. This method generates populations of solutions randomly, which then evolve probabilistically while using local search to prune out redundant vertices. Variants of the ACO algorithm are presented in <cit.> and <cit.>. In <cit.>, the empirical performance of all these methods is compared, and it is shown that the IG algorithm outperforms the others. Therefore, we primarily benchmark the performance of our proposed algorithms against this IG algorithm. Further details on the IG algorithm are provided in Section <ref>.
Finally, we discuss related advances in graph machine learning that we leverage in this work. Specifically, we use the GCN architecture, a type of graph neural network, originally introduced in <cit.>. In our approach, we draw inspiration from <cit.>, which demonstrates the application of the GCN architecture to generate solutions for combinatorial optimization problems on graphs. In their setup, the GCN architecture is trained to learn a diverse set of probability maps over problem instances, which are then used in a tree search to obtain solutions to various combinatorial optimization problems, namely the satisfiability, maximal independent set, minimum vertex cover, and maximal clique problems. In our work, we posit that the probability maps learned by the GCN can directly serve as a diverse set of learning-based heuristic functions for the combinatorial optimization problem at hand. For further discussion on the use of graph machine learning for combinatorial optimization, we direct the reader to <cit.>.
§ BACKGROUND
Let 𝒢=(𝒱, ℰ) be a simple graph where 𝒱 represents the set of vertices in 𝒢 and ℰ represents the set of edges. For a given vertex v ∈𝒱, we define the open neighborhood of v, denoted as N(v), as the set {u ∈𝒱 : (u, v) ∈ℰ}. Similarly, the closed neighborhood of v, denoted as N[v], is defined as N(v) ∪{v}. We can extend these definitions to sets of vertices S⊆𝒱 such that N(S) := ⋃_v∈ S N(v) is the open neighborhood of S, and N[S] := ⋃_v∈ S N[v] = N(S) ∪ S is the closed neighborhood of S.
A set S⊂𝒱 is considered a dominating set of 𝒢 if and only if the closed neighborhood of S spans the vertex set of 𝒢, i.e., N[S] = 𝒱. Equivalently, S is a dominating set of G if for every v∈𝒱, v∈ S or ∃ u ∈ S such that (u, v)∈ℰ. That is, every vertex in 𝒱 is either in S or adjacent to a vertex in S. In the minimum dominating set (MDS) problem, we seek a dominating set, S^*, of minimum cardinality, i.e., |S^*|≤ |S|, for all valid dominating sets S of 𝒢. It is important to note that for certain graphs, S^* may not be unique, as there may exist multiple solutions to the MDS problem, each with the same cardinality. The cardinality of the minimum dominating set is referred to as the domination number of 𝒢, denoted as γ(𝒢).
§.§ Integer Linear Programming Formulation
One effective and practical approach for challenging combinatorial optimization problems is through an integer linear programming (ILP) formulation. Although the equivalent ILP problem remains NP-hard, there exist highly optimized standard linear programming solvers that can efficiently solve small to moderate-sized instances of these problems. The MDS problem can also be reduced to an ILP problem, and we use this formulation in Section <ref> to compute exact solutions to the MDS problem.
The ILP formulation for the MDS problem is as follows. For a graph 𝒢 = (𝒱, ℰ), where 𝒱 = {v_i}_i=1^n is the set of n vertices in the graph, we define a binary variable x_i∈{0, 1} for each vertex v_i. This binary variable indicates whether the corresponding vertex is included in the MDS solution. Specifically, x_i is set to 1 if v_i is selected to be included in the MDS, and x_i = 0 otherwise. We then define the objective function of the MDS problem as:
min ∑_i=1^n x_i.
To ensure that a solution is a valid dominating set, it must satisfy the following constraints:
∑_j : v_j∈ N[v_i] x_j ≥ 1 ∀ i = 1, …, n.
Alternatively, if A = [ a_ij ]∈{0, 1}^n× n is the symmetric adjacency matrix for 𝒢 such that a_ij = 1 if (v_i, v_j) ∈ℰ and a_ij = 0 otherwise, we can represent Equation <ref> equivalently as:
x_i + ∑_j=1^n a_ij x_j ≥ 1 ∀ i = 1, …, n.
These constraints ensure that the selected vertices indeed form a valid dominating set for 𝒢 by enforcing that for every vertex, either the vertex itself is selected or at least one of its adjacent vertices is selected. The objective function in Equation (<ref>) minimizes the number of selected vertices, ensuring the optimality of the resulting solution. It is worth noting that in the resulting solution, γ(𝒢) = ∑_i=1^n x_i, which is exactly the objective function being optimized.
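To make the formulation concrete, the following sketch builds and solves this ILP for a networkx graph. The use of PuLP (with its bundled CBC backend) is purely illustrative; the paper does not prescribe a particular solver, and the function name is ours.

import pulp
import networkx as nx

def solve_mds_ilp(G: nx.Graph):
    # One binary variable x_i per vertex: x_i = 1 iff v_i is placed in the dominating set.
    prob = pulp.LpProblem("minimum_dominating_set", pulp.LpMinimize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in G.nodes}
    # Objective: minimize the number of selected vertices.
    prob += pulp.lpSum(x.values())
    # Domination constraints: every closed neighborhood N[v_i] must contain a selected vertex.
    for v in G.nodes:
        prob += x[v] + pulp.lpSum(x[u] for u in G.neighbors(v)) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {v for v in G.nodes if x[v].value() > 0.5}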
§.§ Heuristic Approaches
The exact solutions to the minimum dominating set problem, including the integer programming formulation mentioned above, have exponential time complexity and do not scale well. Therefore, various heuristic approaches to find efficient solutions that may not be optimal have been developed. We outline the general structure of a greedy heuristic approach for the MDS problem in Algorithm <ref>. In this algorithm, h: 𝒱→ℝ defines a real-valued heuristic function, and vertices that maximize h(·) are greedily selected until a valid dominating set is formed.
We use two different traditional heuristics as baselines for performance comparisons. The first heuristic corresponds to a well-known greedy approximation algorithm for the MDS problem. This heuristic counts the number of non-dominated neighbors of a given vertex, i.e., the subset of neighbors not dominated by the current dominating set under construction, denoted as S. The greedy heuristic function, which we denote as h_g(v), is defined as follows:
h_g(v) = |N[v] ∖ N[S] |.
When this heuristic is used, we prioritize adding vertices to the solution that will dominate the greatest number of previously non-dominated vertices in the graph. This heuristic is an analog of a greedy heuristic algorithm originally introduced in <cit.> for the set cover problem, but it has since been adapted to a variety of related hard combinatorial optimization problems, including the vertex cover problem and the MDS problem itself <cit.>. As mentioned previously, the use of this particular heuristic is equivalent to a classical greedy approximation algorithm for the MDS problem, and it thus provides certain theoretical guarantees on its performance <cit.>.
The second heuristic we use is a random heuristic, which assigns a random value to each vertex in the graph:
h_r(v) ∼𝒰(0,1).
As a result, when h_r is used in Algorithm <ref>, vertices will be added to the solution set S in a random order until the set S dominates 𝒢. This approach represents a naive strategy for constructing a dominating set, as it does not consider any information from the topology of the input graph. We include this heuristic primarily for illustrative purposes, as it serves as a baseline for comparison with other, more informed heuristics.
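As an illustration of the generic construction in Algorithm <ref> with these two heuristics, the following sketch builds a dominating set with a pluggable heuristic function; the function and variable names are ours, and networkx is used only for convenience.

import random
import networkx as nx

def construct_dominating_set(G: nx.Graph, heuristic):
    # Greedily add the vertex maximizing heuristic(v, dominated, G) until S dominates G.
    S, dominated = set(), set()
    while len(dominated) < G.number_of_nodes():
        v = max((u for u in G.nodes if u not in S),
                key=lambda u: heuristic(u, dominated, G))
        S.add(v)
        dominated.update({v, *G.neighbors(v)})
    return S

def h_greedy(v, dominated, G):
    # h_g(v): number of not-yet-dominated vertices in the closed neighborhood N[v].
    return len(({v} | set(G.neighbors(v))) - dominated)

def h_random(v, dominated, G):
    # h_r(v) ~ U(0, 1): vertices are added in a random order.
    return random.random()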
For any optimal solution to an instance of the MDS problem, there must exist a valid optimal priority ordering of the vertices in the input graph that leads to the construction of the optimal solution, using a selection procedure similar to Algorithm <ref>. However, there is no tractable algorithm that can determine this ordering a priori, and as such, heuristic functions simply seek an ordering that minimizes the size of the resulting dominating set. Given the recent success of data-driven methodologies, it is natural to explore a machine learning-based approach that, given a set of MDS problem instances and their corresponding solutions, learns an optimal heuristic function. Nevertheless, such an approach does face its own challenges, and implementing off-the-shelf machine learning models may fail to yield satisfactory results. Firstly, an MDS problem instance takes the form of a graph, which is an irregular and permutation-invariant data structure. Such a data structure is not readily compatible with most machine learning models that expect input in the form of vectors in fixed-dimensional Euclidean space. Additionally, an MDS problem instance may not admit a unique solution; in fact, many MDS problem instances have an exponential number of optimal solutions, which can make a model prone to daunting error rates. In the following section, we attempt to address these challenges and train a GCN model to generate learning-based heuristic functions that we can use in the above-described algorithm. The GCN model is trained to compute probability maps over the vertex set 𝒱 of the input graph, predicting the probability of each vertex being part of an optimal solution. We can then directly employ these probabilities as a heuristic function.
§.§ Iterative Greedy Algorithm
Finally, we provide a brief overview of the IG algorithm as presented in <cit.>. IG is a versatile, hybrid metaheuristic framework that can be applied toward a variety of problem domains and shares similarities with other popular metaheuristic methods such as simulated annealing and tabu search. One of its first applications was in solving a type of scheduling problem, as outlined in <cit.>. It has since been successfully utilized for other NP-hard problems, including the binary quadratic programming problem <cit.> and the traveling salesman problem <cit.>. For a more comprehensive understanding of the IG framework, we refer readers to <cit.>. To our knowledge, <cit.> were the first to adapt this framework to the MDS problem and demonstrate that their IG algorithm could achieve state-of-the-art performance in computation of dominating sets.
The pseudocode for the IG algorithm to compute dominating sets is given in Algorithm <ref>. The algorithm begins by generating an initial valid dominating set using Algorithm <ref> with the greedy heuristic function h_g(v) from Equation <ref>. It then applies a local search procedure to further refine the solution. Then, the algorithm iteratively destructs and reconstructs the dominating set, incorporating the local search procedure after each reconstruction. The destruction phase takes an input parameter β, which specifies the proportion of the dominating set to be randomly destroyed. The reconstruction phase then greedily adds back vertices to the partially destructed set until it is once again a valid dominating set, employing the same greedy heuristic h_g(v). For a more detailed explanation of each stage, see <cit.>. We also note that the algorithm takes an input parameter Δ that limits the number of iterations without improvement, but in practice, we also impose a time limit on the algorithm, similarly to <cit.>.
Since this approach uses the classical greedy heuristic given by h_g(v) in its InitialSolution and Reconstruction procedures, we hypothesize that we could instead use the heuristic learned by the GCN in these procedures. As we later discuss in further detail, the GCN is trained to learn multiple diverse probability maps, and since these probability maps can be treated as heuristic functions, we can iterate through these different probability maps during the IG procedure, enabling the algorithm to leverage the diversity of the heuristic functions learned by the GCN. This has the potential to yield higher-quality solutions compared to those obtained using the classical greedy heuristic alone.
§ METHODOLOGY
The task of devising an effective heuristic that can accurately approximate optimal solutions for the MDS problem is an intricate endeavor, demanding substantial iterative refinement and domain expertise. In this section, we present a novel methodology striving to acquire MDS-specific insights from an extensive collection of problem instances and their corresponding solutions. By adopting a data-driven, learning-based approach, we aim to surpass the performance of existing heuristics, offering a promising avenue for addressing the challenges posed by the MDS problem. Formally, we train a specialized neural network, denoted as f(·), which takes a graph 𝒢 = (𝒱, ℰ) as input and produces a set of probability maps over 𝒱. Each probability map ŷ∈ [0,1]^n indicates the likelihood of each individual vertex belonging to an optimal MDS solution. Such a probability map can then be used to define a heuristic h(v_i) = ŷ_i, where ŷ_i is the output probability corresponding to vertex v_i, enabling us to construct dominating sets as described in Algorithm <ref>. In the following, we discuss the design and development of this neural network, starting with the essential task of dataset generation.
§.§ Dataset Generation
The generation of high-quality datasets plays a critical role in solving combinatorial optimization problems using supervised learning techniques. These problems often involve discrete decision-making processes, and the quality of the data directly impacts the ability of the learning algorithm to capture underlying patterns and relationships effectively. Therefore, carefully crafting and selecting datasets that accurately represent the problem space and capture the relevant information can lead to improved model performance and better optimization results. Additionally, the dataset should encompass a diverse range of problem instances to ensure the model generalizes well to unseen data. In the context of the MDS problem, an additional challenge arises from the existence of multiple optimal solutions for a single instance, and training a machine learning model on only one of those solutions would be insufficient to achieve acceptable accuracy.
While the existing literature provides instances of graphs for the MDS problem (e.g., see <cit.>), these instances lack the domination number and solutions for the corresponding instances, let alone multiple labeled sets of vertices that comprise different optimal solutions for each input graph. In the following, we propose to generate our own dataset of instances for the MDS problem and compute optimal dominating sets for each instance to serve as labeled training data. Our dataset comprises a total of 1349 random binomial graphs generated using the Erdős–Rényi model with varying orders and edge densities. The size of the graphs ranges from 150 to 255 vertices, with an average size of 192 vertices. The average domination number of the graphs in this dataset is 25 vertices. It is worth noting that we choose to generate relatively sparser graphs, as the MDS for dense graphs tends to consist of fewer vertices and is generally easier to compute. The optimal solutions for each instance are computed using the ILP approach discussed below.
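A minimal sketch of this instance generation is given below. The vertex-count range follows the text; the edge-probability range is an assumption on our part (the paper only states that sparser graphs were favoured), as is the function name.

import random
import networkx as nx

def sample_training_graph(rng=random):
    # Sparse Erdős–Rényi (binomial) graph with 150-255 vertices.
    n = rng.randint(150, 255)
    p = rng.uniform(0.02, 0.05)  # assumed edge-probability range, not specified in the paper
    return nx.gnp_random_graph(n, p)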
To compute the optimal dominating sets for each instance in the dataset, we reduce the MDS problem on each graph to an equivalent ILP problem, as described in Section <ref>. The ILP problem can then be solved using a linear programming optimizer, such as those provided by <cit.>. Once the optimization problem is solved, we define the solution set for the MDS problem instance (using the notation from Section <ref>) as S = {v_k_1, v_k_2, …, v_k_γ} where γ(𝒢) = ∑_i=1^n x_i and x_k_i = 1 for i ∈{1, …, γ}. This procedure yields a single optimal dominating set per instance in the dataset. However, as noted previously, the MDS for a given instance is not always unique. Therefore, to obtain multiple diverse optimal solutions for each instance, we introduce an additional constraint to the optimization problem and solve the modified problem. The additional constraint is defined as follows:
∑_i=1^γ x_k_i≤γ - 1.
By adding this additional constraint, we force the ILP solver to generate another MDS for the input graph that is distinct from the previous solution. We can then repeat this process by adding another constraint equation given by the newly obtained solution, yielding a third MDS solution, and so forth. It is important to note that the size of the solution set is checked after each iteration to ensure that the additional constraint defined by Equation <ref> does not change the size of the solution set. That is, for the updated set of binary labels {x_i}_i=1^n, we check that ∑_i=1^n x_i = γ(𝒢). This procedure ultimately allows us to generate multiple optimal solutions for each instance, capturing the diverse nature of optimal dominating sets for the MDS problem.
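The enumeration loop can be sketched as follows, reusing the PuLP formulation from the earlier sketch; the exclusion constraint added after each solve is exactly Equation <ref>, and the function name and the cap on the number of solutions are ours.

def enumerate_optimal_mds(G: nx.Graph, max_solutions: int = 5):
    prob = pulp.LpProblem("mds_enumeration", pulp.LpMinimize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in G.nodes}
    prob += pulp.lpSum(x.values())
    for v in G.nodes:
        prob += x[v] + pulp.lpSum(x[u] for u in G.neighbors(v)) >= 1

    solutions, gamma = [], None
    for _ in range(max_solutions):
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        if pulp.LpStatus[prob.status] != "Optimal":
            break
        S = [v for v in G.nodes if x[v].value() > 0.5]
        if gamma is None:
            gamma = len(S)        # the domination number gamma(G)
        elif len(S) > gamma:
            break                 # the new solution is no longer optimal; stop enumerating
        solutions.append(S)
        # Exclusion constraint: at most gamma - 1 vertices of S may be reused,
        # forcing the next solve to return a different optimal dominating set.
        prob += pulp.lpSum(x[v] for v in S) <= gamma - 1
    return solutions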
§.§ Training GCN Model
The input for the MDS problem is a graph, and therefore, traditional machine learning models that take regular vectorized input cannot be used. Hence, we use a specialized graph neural network architecture for this purpose. We train our network f(·), adapting the GCN architecture presented in <cit.>. To train the model, we use a subset of graphs from the synthetically-generated dataset described in Section <ref>. We outline the details of the architecture below.
Let 𝒟={(𝒢_i, ℐ_i)} be the training set, where ℐ_i ∈{0, 1}^n_i is a binary representation of one of the optimal MDS solutions generated for the graph instance 𝒢_i with n_i vertices. That is, for each vertex v_j, a one in the solution vector ℐ_i indicates that v_j is part of the MDS, and a zero indicates that it is not. The network f(𝒢_i; θ) is parameterized by θ and is trained to produce m probability maps:
⟨ f^1(𝒢_i; θ), f^2(𝒢_i; θ), …, f^m(𝒢_i; θ)⟩.
Each probability map f^k(𝒢_i; θ) ∈ [0, 1]^n_i encodes the likelihood of each vertex in 𝒢_i belonging to an optimal MDS solution.
The rationale behind generating multiple probability maps is to capture the diversity of solutions that can exist for a given instance of the MDS problem. Since there can often exist several different and non-overlapping solutions to a given problem instance, a network that outputs only a single probability map may not capture the full range of possible solutions. Consider for example Figure <ref>, which illustrates a graph with three unique MDS solutions that are entirely non-overlapping. A naively designed network architecture might produce a probability map that assigns equal likelihood to each vertex. This would not provide a useful heuristic, as it is effectively the same as the random heuristic. To overcome this limitation, our goal is to generate multiple high-quality probability maps that can be used to generate diverse solutions for a single input graph. By training the network on a diverse dataset of instances with multiple labeled solutions, we aim to enable the network to learn and generate a range of probability maps that capture the various valid dominating sets for different graph instances.
These probability maps are generated via an adaptation of the GCN architecture originally proposed in <cit.>. The GCN consists of L+1 layers {𝐇^l}_l=0^L, where 𝐇^l∈ℝ^n× C^l is the l^th feature layer, C^l is the number of feature channels in the l^th layer, and n is the number of vertices in the input graph. Since the network receives a graph 𝒢 without any vertex-specific feature vectors as input, we let 𝐇^0 = 1_n, C^0, meaning that 𝐇^0 contains rows of all-one vectors of size C^0. This ensures that the network treats all vertices equivalently, and predictions are made solely based on the structure of the graph. Each subsequent layer 𝐇^l+1 is then computed from the previous layer as follows:
𝐇^l+1 = σ(𝐇^lθ_0^l + Γ^-1/2𝐀Γ^-1/2𝐇^lθ_1^l)
where 𝐀∈{0, 1}^n× n is the symmetric adjacency matrix of the graph; θ_0^l, θ_1^l ∈ℝ^C^l× C^l+1 are the layer-specific trainable weight matrices; Γ is the diagonal vertex degree matrix of 𝐀 with diagonal entries Γ_i,i=deg(v_i), and Γ^-1/2𝐀Γ^-1/2 is the symmetric normalization of 𝐀; and σ(·) is a nonlinear activation function. For the final output layer 𝐇^L we use the sigmoid activation function, and for all other layers we use ReLU. In the output layer, we set C^L=m, and we treat each column of the output layer as a probability map: 𝐇^L_k = f^k(𝒢; θ). Here, we use m to refer to the total number of output probability maps and k as an index to an arbitrary output probability map.
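A compact PyTorch sketch of this architecture is given below. It mirrors the layer update above but is only indicative: the class and argument names are ours, the dense normalized adjacency is assumed to be precomputed, and implementation details of the original model (e.g., sparse operations) may differ.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    # One update H^{l+1} = sigma(H^l theta_0 + A_norm H^l theta_1).
    def __init__(self, c_in, c_out, activation=torch.relu):
        super().__init__()
        self.theta0 = nn.Linear(c_in, c_out, bias=False)
        self.theta1 = nn.Linear(c_in, c_out, bias=False)
        self.activation = activation

    def forward(self, H, A_norm):
        # A_norm is the precomputed symmetric normalization Gamma^{-1/2} A Gamma^{-1/2}.
        return self.activation(self.theta0(H) + self.theta1(A_norm @ H))

class MDSProbabilityGCN(nn.Module):
    # Stacks the graph-convolutional layers and outputs m probability maps via a sigmoid head.
    def __init__(self, num_layers=20, channels=32, num_maps=32):
        super().__init__()
        dims = [channels] * num_layers + [num_maps]
        self.layers = nn.ModuleList([
            GCNLayer(dims[i], dims[i + 1],
                     activation=torch.sigmoid if i == num_layers - 1 else torch.relu)
            for i in range(num_layers)
        ])
        self.c0 = channels

    def forward(self, A_norm):
        H = torch.ones(A_norm.shape[0], self.c0)  # H^0: all-one features, so only structure matters
        for layer in self.layers:
            H = layer(H, A_norm)
        return H  # shape (n, m); column k is the probability map f^k(G; theta)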
We train the network to optimize the hindsight loss on the training set, defined as:
ℒ(𝒟, θ) = ∑_i min_k ℓ(ℐ_i, f^k(𝒢_i; θ))
where
ℓ(ℐ_i, f^k(𝒢_i; θ)) = -∑_j=1^n_i[ ℐ_ijlog f_j^k(𝒢_i; θ) + (1-ℐ_ij)log(1-f_j^k(𝒢_i; θ)) ]
is the binary cross-entropy loss for a single given probability map. Here, ℐ_ij denotes the j^th element of ℐ_i and similarly, f_j^k(𝒢_i; θ) is the j^th element of f^k(𝒢_i; θ). The hindsight loss used here ensures that the loss for each training sample is determined only by the best of the m different probability maps produced by the network; this encourages the network to develop highly diverse probability maps that can ultimately allow us to construct a variety of unique solutions.
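For one training graph, the hindsight loss can be sketched as follows (PyTorch; the function name is ours): the binary cross-entropy is computed per probability map, and only the best-matching map contributes. Summing the returned values over the training instances gives the total loss above.

import torch
import torch.nn.functional as F

def hindsight_loss(prob_maps, target):
    # prob_maps: (n, m) tensor of m probability maps for one graph.
    # target:    (n,) binary tensor encoding one labeled optimal MDS.
    n, m = prob_maps.shape
    per_map = F.binary_cross_entropy(
        prob_maps, target.float().unsqueeze(1).expand(n, m), reduction="none"
    ).sum(dim=0)          # per-map BCE, summed over the n vertices
    return per_map.min()  # only the best of the m maps is penalized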
§.§ Using GCN Model to Construct Dominating Sets
To construct a collection of m dominating sets for a test graph instance 𝒢 using the trained network f(·; θ), we generate a set of m heuristic functions {h_f^k(·)}^m_k=1 based on the probability maps output by the network. Specifically, for the k^th probability map ŷ = f^k(𝒢_i; θ), we define a heuristic h^k_f(v_i) = ŷ_i, as described previously. We can then directly employ each of these heuristics to generate a dominating set using Algorithm <ref>. Since we have m probability maps for each input graph 𝒢, we can define m different heuristic functions {h_f^k(·)}^m_k=1, which ultimately result in m different candidate dominating sets. We illustrate this process in Figure <ref>.
We also apply a pruning algorithm to refine each of the constructed dominating sets. This pruning algorithm is given in Algorithm <ref>. We find that since Algorithm <ref> greedily chooses vertices using a given heuristic, the resulting dominating sets often end up containing redundant vertices, regardless of the exact heuristic being employed. However, we can easily identify vertices that can be safely removed from the dominating set using a greedy approach, thereby reducing the solution size. Once each of the m candidate dominating sets is pruned, we choose the set with minimum cardinality as our solution. We will hereafter refer to this setup simply as the “GCN” algorithm and the classical greedy and random heuristic approaches given in Section <ref> as the “Greedy” and “Random” algorithms, respectively. Note that we will also apply this pruning algorithm in the Greedy and Random algorithms, in order to accurately compare performance.
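The pruning and selection steps can be sketched as below, reusing the construct_dominating_set helper from the earlier sketch; the visiting order used by Algorithm <ref> is not specified here, so the order below is an assumption, as are the function names. The probability maps are assumed to be a numpy array of shape (n, m) with vertices indexed 0, …, n-1.

def prune_dominating_set(G, S):
    # Drop any vertex whose removal still leaves a valid dominating set.
    S = set(S)
    for v in list(S):
        candidate = S - {v}
        dominated = set(candidate)
        for u in candidate:
            dominated.update(G.neighbors(u))
        if dominated == set(G.nodes):  # v was redundant
            S = candidate
    return S

def gcn_dominating_set(G, prob_maps):
    # One candidate set per probability map; keep the smallest after pruning.
    candidates = []
    for k in range(prob_maps.shape[1]):
        h = lambda v, dominated, G, col=prob_maps[:, k]: col[v]
        candidates.append(prune_dominating_set(G, construct_dominating_set(G, h)))
    return min(candidates, key=len)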
As mentioned in Section <ref>, we also test a variant of the IG algorithm that uses the heuristics learned by the GCN model for its InitialSolution and Reconstruction phases. Specifically, we use the heuristic given by the first probability map h^1_f(v) for the InitialSolution procedure, and we then cycle through the m different heuristics for the Reconstruction phase of each iteration. That is, we use h^1_f(v) on the first iteration, h^2_f(v) on the second iteration, and so on. Once h^m_f(v) is reached, we cycle back to h^1_f(v) on the next iteration and repeat. In what follows, we will refer to this setup simply as the “IG-GCN” algorithm and the traditional IG algorithm (with the classical greedy heuristic h_g(v) as described in Section <ref>) as the “IG” algorithm. In the subsequent section, we delve into the specifics of our experimental setup and conduct a comprehensive comparison of numerical results.
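A simplified sketch of the IG loop with cycling heuristics is shown below. It reuses the construct_dominating_set and prune_dominating_set helpers sketched above, omits the local-search step of the full algorithm, and its destruction rule only follows the verbal description of β; the details of the paper's Destruction and Reconstruction procedures may differ.

import itertools
import random

def iterated_greedy(G, heuristics, beta=0.2, max_stall=200):
    cycle = itertools.cycle(heuristics)
    best = prune_dominating_set(G, construct_dominating_set(G, next(cycle)))
    stall = 0
    while stall < max_stall:
        current = set(best)
        # Destruction: randomly remove a beta-fraction of the incumbent solution.
        for v in random.sample(list(current), int(beta * len(current))):
            current.discard(v)
        # Reconstruction: greedily add vertices (next heuristic in the cycle) until G is dominated again.
        h = next(cycle)
        dominated = set()
        for v in current:
            dominated.update({v, *G.neighbors(v)})
        while len(dominated) < G.number_of_nodes():
            v = max((u for u in G.nodes if u not in current),
                    key=lambda u: h(u, dominated, G))
            current.add(v)
            dominated.update({v, *G.neighbors(v)})
        current = prune_dominating_set(G, current)
        if len(current) < len(best):
            best, stall = current, 0
        else:
            stall += 1
    return best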
§ EXPERIMENTAL EVALUATION
In this section, we outline the details of our experimental setup and present the numerical results of our evaluation. For our experiments, we train the GCN architecture as described in Section <ref> with the following network parameters. We use L=20 graph convolutional layers and set the number of feature channels in each layer as C^l = 32 for all l = 1, 2, …, L. Therefore, the model outputs m=C^L=32 output probability maps for any given input graph. We train the GCN over 250 training epochs, with a learning rate of 0.001. Furthermore, we use β = 0.2 and Δ = 200 for the IG and IG-GCN procedures, as these values were experimentally tuned in <cit.>.
As mentioned in Section <ref>, we use 1122 graphs (83.2%) from our generated dataset of synthetic graphs to train the GCN network, leaving the remaining 227 graphs (16.8%) for testing the performance once the training is complete. The results, in terms of the size of the dominating sets returned by each algorithm on these test graphs, are displayed in Figure <ref>.
Our results clearly demonstrate that the GCN algorithm effectively constructs dominating sets that consistently outperform the Greedy algorithm, resulting in smaller set sizes. Figure <ref> provides a comprehensive comparison between the standard IG algorithm and the IG-GCN algorithm, reaffirming the significant performance improvement achieved by replacing the classical greedy heuristic with the GCN-based heuristic. Moreover, we present the sizes of the optimal solutions for reference, while plotting the sizes of the dominating sets returned by the GCN algorithm in both figures to facilitate a thorough comparison.
We repeat this experimental procedure on larger synthetically-generated graphs of 500 to 1000 vertices. Once again, we randomly generate these graphs using the Erdős–Rényi model as before. Note that on these larger instances, it is computationally intractable to compute the optimal MDS, and hence the optimal solution sizes for these graphs are unknown. However, we present the results of using the GCN and other baseline heuristics in Figure <ref>. As shown in Figure <ref>, even for graphs larger than the instances on which the GCN-based architecture is trained, the GCN algorithm is able to outperform the Greedy algorithm in general. In Figure <ref>, we observe that the IG-GCN algorithm is generally able to outperform the standard IG algorithm, but this performance margin is smaller than the corresponding margin on smaller order graphs.
We also test the performance of these algorithms with graphs generated using the Barabási-Albert model in order to determine the extent to which the model of randomness used affects the performance of the GCN-based algorithms. While the Erdős–Rényi model is used to generate random binomial graphs, the Barabási–Albert model uses a preferential attachment mechanism to generate random scale-free networks. The results on these graphs are given in Figure <ref>. We find that in this case, the Greedy and IG algorithms are able to outperform the GCN and IG-GCN algorithms. This is to be expected since the Greedy and IG algorithms select vertices based on the number of non-dominated neighbors of a given vertex, making them more likely to perform well in the setting of scale-free networks.
Finally, we conduct a series of experiments on a multitude of real-world graph datasets to assess the practicality of the GCN and IG-GCN methods on graphs that are derived from real-world phenomena. We provide a summary of information on these datasets in Table <ref>, and the results are presented in Table <ref>. We aim to include a diverse range of datasets with varying graph sizes and settings, including biological networks, social networks, and computer vision. Our findings consistently demonstrate that the IG-GCN method achieves state-of-the-art performance, surpassing all other existing MDS algorithms across all datasets.
§ CONCLUSION
In this paper, we have presented a novel approach to address the NP-hard problem of computing minimum dominating sets. Leveraging the capabilities of graph convolutional networks (GCNs), we have introduced a data-driven methodology that surpasses conventional greedy or random heuristics. Our experimental results demonstrate that our GCN approach exhibits remarkable performance, yielding near-optimal solutions for both synthetic and real-world datasets. Notably, our model showcases exceptional generalization capabilities, extending its effectiveness to graphs of higher order compared to its training set. Furthermore, our research shows that the GCN model can effectively apply its learned knowledge to real-world graphs, despite being trained exclusively on synthetically-generated random graphs. This underscores the robustness and adaptability of our proposed methodology. Additionally, by integrating the GCN-based heuristics into the iterative greedy (IG) framework, we have achieved state-of-the-art performance in the computation of dominating sets. This breakthrough not only highlights the effectiveness of our approach but also paves the way for advancements in solving complex combinatorial optimization problems.
|
http://arxiv.org/abs/2306.01511v1
|
20230602125835
|
The Dynamic Persistence of Economic Shocks
|
[
"Jozef Barunik",
"Lukas Vacha"
] |
q-fin.GN
|
[
"q-fin.GN",
"econ.GN",
"q-fin.EC"
] |
This paper presents a model for smoothly varying heterogeneous persistence of economic data. We argue that such dynamics arise naturally from the dynamic nature of economic shocks with various degree of persistence. The identification of such dynamics from data is done using localised regressions. Empirically, we identify rich persistence structures that change smoothly over time in two important data sets: inflation, which plays a key role in policy formulation, and stock volatility, which is crucial for risk and market analysis.
Keywords: persistence heterogeneity, wold decomposition, local stationarity, time-varying parameters
JEL: C14, C18, C22, C50
§ INTRODUCTION
It is well documented that macroeconomic and financial variables have exhibited a very high degree of time variation over the past decades <cit.> as both stable and uncertain periods associated with different states of an economy were driven by different shocks. At the same time, an increasing number of authors argue that these variables are driven by shocks that influence their future value with heterogeneous levels of persistence <cit.>.[We use the term persistence to capture a property of a time series that is closely related to its autocorrelation structure. In particular, the degree of persistence gives us a precise description of how a shock will affect the series. A low degree of persistence indicates the transitory nature of shocks, which force the time series to return to its mean path. In contrast, when shocks push the time series away from the mean path, they are said to be highly persistent. A shock tends to persist for a long time.] A possibly non-linear combination of transitory and persistent responses to shocks will produce time series with heterogeneous persistence structures that remain hidden to the observer using traditional methods. Given this discussion, it is natural to ask to what extent economic data are driven by shocks that are both heterogeneously persistent and dynamic, and how we can infer such rich dynamics from the data.
Inferring time-varying persistence from data on important economic series such as inflation, consumption, economic growth, unemployment rates or various measures of uncertainty has crucial implications for policy making, modelling or forecasting. However, despite the progress made in exploring unit roots <cit.>, structural breaks <cit.>, or more complicated long memory or fractionally integrated structures that can exhibit large amounts of time persistence without being non-stationary <cit.>, there is still no clear consensus on how to explore such dynamic nature of data. The inability to identify the dependence from the data alone leads to a tendency to rely on assumptions that are difficult, if not impossible, to validate. To better understand and forecast economic time series, we need an approach that can precisely localise the horizons and time periods in which the crucial information occurs.
The aim of this paper is to provide a representation for a non-stationary time series that allows a researcher to identify and explore its rich time-varying heterogeneous persistence structures. We aim to identify localised persistence that will be useful for modelling and forecasting purposes. Our work is closely related to the recent strand of the literature that proposes to represent a covariance stationary time series as a linear combination of orthogonal components carrying information about heterogeneous cycles of alternative lengths <cit.>. While these methods are particularly well suited to studying the heterogeneously persistent structure of a time series, stable as well as uncertain times associated with different states of the economy imply a time-varying nature of responses to shocks that remains hidden when assuming stationary data. Thus, the localisation of persistence structures will open up new avenues for modelling and forecasting. A model that allows the persistence structure to change smoothly over time is essential, since it is unrealistic to assume that the stochastic future of a time series is stable in the long run. At the same time we observe non-stationary behaviour of data even in shorter time periods in a number of cases. Therefore, modelling and forecasting under the assumption of stationarity can be misleading.
Different degrees of persistence in economic variables are natural and can be reconciled with agents' preferences, which differ according to their horizon of interest. Economic theory suggests that the marginal utility of agents' preferences depends on the cyclical components of consumption <cit.>, and the literature documents frequency-specific investor preferences
<cit.> and relates them to investment horizons in their risk attitudes <cit.>. Such behaviour can be observed, for example, under myopic loss aversion, where an agent's decision depends on the valuation horizon. Unexpected shocks or news have the capacity to alter such preferences and may therefore generate transitory and persistent fluctuations of different magnitudes.[For example, a shock that affects longer horizons may reflect permanent changes in expectations about future price movements. Such a shock may lead to a permanent change in a firm's future dividend payments <cit.>. Conversely, a shock that affects shorter horizons may suggest temporary changes in future price movements. For example, suppose the shock is only a change in an upcoming dividend payment. This would likely lead to a very short-term change, reflecting the transitory nature of the news.] Importantly, not many economic relationships remain constant over decades, years or even months, and the evolution of the economy with unprecedented declines in economic activity, such as the COVID pandemic or the recent severe impact of the Russian war with Ukraine, generates very different persistence structures. Output fluctuations may persist for a long time, but not forever and will eventually disappear <cit.>. The discussion calls for a new framework in which heterogeneity in decision making across horizons, horizon-specific risk aversion and the like are not based on the assumption of stationary data, but are truly time-varying.
To identify time-varying transitory and persistent components of a time series, we propose a time-varying extended Wold decomposition (TV-EWD) that works with localised heterogeneous persistence structures. Assuming stationarity of a small neighbourhood around a given fixed point in time, we allow time variation in the coefficients with the notion of locally stationary processes <cit.>. Our time-varying extended Wold decomposition, which relaxes the stationarity assumption of <cit.>, then formalises the idea that a time series is represented by time-varying persistence structures. Our decomposition is informative about the duration of the fluctuation that is most relevant to the variability of the time series at a given point in time, and sheds light on potential economic mechanisms driving the series under consideration. To the best of our knowledge, we are the first to study the time-varying degree of persistence in time series.[<cit.> notes that a large number of time series show the existence of local or temporary persistence.]
While such a decomposition is potentially useful for modelling, as it allows us to better characterise the dependence structures of the data, our results can also be used by the forecasting literature. As noted by <cit.>, we have seen very slow progress in the forecasting accuracy of economic time series over the past decades. The first reason is that the information is hidden under the noise and is unevenly distributed over different horizons. Second, economic time series are dynamic and very often non-stationary when we model them over a long period. The analysis proposed in this paper can accurately extract the relevant information and build a more accurate forecasting model.
The identification of the time-varying persistence structure has a number of advantages over traditional methods based on Wold decomposition. Traditional Wold decomposition, which underlies the vast majority of contemporaneous models, gives us aggregate information about the speed, horizon and intensity of shock persistence. It is a coarse and imprecise description that is insufficient to identify the precise structure of persistence in a given period. To capture the heterogeneity of persistence, it is necessary to consider the duration (propagation) of shocks at different levels of persistence and at different points of time.
In the two different empirical examples, we show that the persistence structure found in the data is not only highly heterogeneous, but also time-varying. We have chosen to study two very different data sets that are important to economists: inflation and stock volatility. At the same time, both datasets exhibit typical persistence features and are crucial to understand. While it is the properties of aggregate inflation that are ultimately of interest to policymakers, the characteristics and determinants of the behavioural mechanisms underlying price-setting are an important factor in the way inflation behaves over time. The persistence of inflation has direct implications for the conduct of monetary policy. Similarly, stock market volatility is of great interest as one of the key measures of risk and uncertainty. We show that even in periods of very high persistence, we can uncover less persistent sub-periods where the transient nature of shocks prevails. Our model, which can accurately identify such dynamics within the time-varying persistence structure, is then useful for identifying the dynamics driving the data and leading to improved forecasts.
The remainder of the paper is structured as follows. Section <ref> proposes a time-varying extended world decomposition based on a locally stationary process, discusses methodology, forecasting models based on such a decomposition, and estimation. Section <ref> examines the time-varying persistence of US inflation and the volatility of major US stocks. Section <ref> then concludes.
§ TIME VARIATION OF TIME SERIES COMPONENTS WITH DIFFERENT LEVELS OF PERSISTENCE
The most fundamental justification for time series analysis is Wold's decomposition theorem. It states that any covariance stationary time series can be represented as the sum of a deterministic component and a (possibly infinite-order) moving average of its own past shocks <cit.>. This is an enormously important fact in the economic literature, useful to macroeconomists when studying impulse response functions, and central to tracing the mechanisms of economic shocks to improve policy analysis.
At the same time, this is only one of the possible representations of a time series, which is particularly suitable for cases where we can assume stationarity of the model. In other cases, where we cannot assume that the stochastic properties of the data are stable over time, and where the unconditional approach may be useful, the stationarity assumption may be misleading. It is important to recognise that other representations may capture deeper properties of the series, which may also change smoothly over time. As argued in the introduction, we want to explore properties of time variation as well as properties related to different levels of persistence in the time series.
The latter is made possible by the persistence-based Wold decomposition proposed by <cit.>, who show how to decompose stationary time series into the sum of orthogonal components associated with their own levels of persistence. These individual components have a Wold representation defined with respect to the scale-specific shocks with heterogeneous persistence. Here we aim to provide a persistence-based representation for a locally stationary process <cit.> and discuss how to decompose a locally stationary process into independent components with different degrees of persistence. With the proposed model we will be able to study the time variation of components with different degrees of persistence.
§.§ Locally stationary processes
While stationarity has played an important role in time series analysis over the decades due to the availability of natural linear Gaussian modelling frameworks, many economic relationships are not stationary in the longer run. The state of the economy, as well as the behaviour of agents, is often highly dynamic, and the assumption of time-invariant mechanisms generating the data is often unrealistic.[An exception is nonstationary models where persistence is generated by integrated or cointegrated processes.] A more general nonstationary process is one that is locally close to a stationary process at each point in time, but whose properties (covariances, parameters, etc.) gradually change in a nonspecific way over time. The idea that the process may be stationary only over a limited period of time and yet still be valid for estimation is not new: so-called locally stationary processes were introduced in <cit.>.
More formally, assume an economic variable of interest follows a nonstationary process x_t depending on some time-varying parameter model. In this framework, we replace x_t by a triangular array of observations (x_t,T;t=1,…,T) where T is the sample size, and we assume that we observe x_t,T at time points t=1,…,T. Such a nonstationary process x_t,T can be approximated locally <cit.> around each rescaled and fixed time point u ≈ t/T such that u∈[0,1], by a stationary process x_t(u). In other words, under some suitable regularity conditions, |x_t,T-x_t(u)| = 𝒪_p ( |t/T-u|+1/T). While stationary approximations vary smoothly over time as u ↦ x_t(u), locally stationary processes can be interpreted as processes which change their (approximate) stationary properties smoothly over time. The main properties of x_t,T are therefore encoded in the stationary approximations, and hence in the estimation, we will focus on quantities 𝔼[ g(x_t(u),x_t-1(u),…) ] with some function g(·) as a natural approximation of 𝔼[ g(x_t,T,x_t-1,T,…) ].
Crucially, a linear locally stationary process x_t,T can be represented by a time varying MA(∞)
x_t,T = ∑_h=-∞^+∞α_t,T(h) ϵ_t-h,
where the coefficients α_t,T(h) can be approximated under certain (smoothness) assumptions (see Assumption <ref> in Appendix <ref>) with coefficient functions α_t,T(h) ≈α(t/T,h), and ϵ_t are independent random variables with 𝔼[ϵ_t] = 0, 𝔼[ϵ_sϵ_t]=0 for s ≠ t, and 𝔼|ϵ_t| < ∞. The construction with α_t,T(h) and α(t/T,h) looks complicated at first glance, but the function α(u,h) is needed for rescaling and to impose smoothness conditions, while the additional use of α_t,T(h) makes the class rich enough to cover autoregressive models (see Theorem 2.3 in <cit.>), in which we are interested later.
It is straightforward then to construct a stationary approximation (with existing derivative processes)
x_t(u) = ∑_h=-∞^+∞α(u,h) ϵ_t-h,
where at every fixed point of time u the original process x_t,T can be represented as a linear combination of uncorrelated innovations with time-varying impulse response (TV-IRF) functions α(u,h). Note that the process is assumed so far to have zero mean, i.e., μ(t/T)=0. While this may be unrealistic in a number of applications, we will return to this assumption later in the estimation.
§.§ Time-Varying Extended Wold Decomposition
Having a representation that allows for time variation of the impulse response function, we further introduce a localised persistence structure. We use the extended Wold decomposition of <cit.>, which allows us to decompose the time series into several components with different levels of persistence. <cit.> and <cit.> show that the decomposition brings substantial benefits in understanding the persistence dynamics of economic time series and improves forecasting performance, as many economic time series exhibit a heterogeneous persistence structure (across horizons, scales).
Importantly, we argue that in addition to recovering the heterogeneous persistence structure of a typical economic time series, we need to localise it. Localisation, together with persistence decomposition, can dramatically improve our understanding of dynamic economic behaviour by allowing the persistence structure to change smoothly over time. In turn, models built with such an understanding can bring significant forecasting benefits, as we will show later with empirical examples.
Specifically, we propose a model that uses locally stationary processes to capture the dynamics of heterogeneous persistence. Knowing that we can express the locally stationary process using the TV-MA(∞) representation, we can adapt the Extended Wold decomposition proposed by <cit.> under alternative assumptions and localise the decomposition. Proposition <ref> formalises the main result and proposes the Time-Varying Extended Wold Decomposition (TV-EWD) model.
If x_t,T is a zero mean, locally stationary process in the sense of Assumption <ref> in Appendix <ref> that has a representation x_t,T = ∑_h=-∞^+∞α_t,T(h) ϵ_t-h, then it can be decomposed as
x_t,T=∑_j=1^+∞∑_k=0^+∞β_t,T^{j}(k) ϵ_t-k2^j^{j},
where for any j ∈ℕ, k ∈ℕ
β_t,T^{j}(k)= 1/√(2^j)[ ∑_i=0^2^j-1-1α_t,T(k2^j+i) - ∑_i=0^2^j-1-1α_t,T(k2^j+2^j-1+i) ],
ϵ_t^{j} = 1/√(2^j)( ∑_i=0^2^j-1-1ϵ_t-i - ∑_i=0^2^j-1-1ϵ_t-2^j-1-i),
where coefficients β_t,T^{j}(k) can be approximated under Assumption <ref> in Appendix <ref> with coefficient functions β_t,T^{j}(k) ≈β^{j}(t/T,k), ϵ_t are independent random variables with 𝔼[ϵ_t] = 0, 𝔼[ϵ_sϵ_t]=0 for s ≠ t, 𝔼|ϵ_t| < ∞, and ∑_k=0^∞(β_t,T^{j}(k))^2 < ∞ for all j.
Follows directly from the properties of locally stationary processes <cit.> and extended Wold decomposition <cit.>.
Proposition <ref> formalizes the preceding discussion: it provides a representation of a time series as a decomposition into uncorrelated persistence components, indexed by the scale j, that can change smoothly over time. Specifically, it allows us to construct a stationary approximation (with existing derivative processes) to the process x_t,T with time-varying uncorrelated persistent components
x_t^{j}(u)=∑_k=0^+∞β^{j}(u,k) ϵ_t-k2^j^{j},
and always reconstruct the original process as
x_t(u)=∑_j=1^+∞ x_t^{j}(u).
In other words, we are able to decompose the time series into uncorrelated components with different levels of persistence at any fixed point of time. Further note that ϵ_t^{j} is a localized MA(2^j-1) with respect to fundamental innovations of x_t,T, and β^{j}(u,k) is the time-varying multiscale impulse response associated with scale j and time-shift k 2^j at a fixed point of time approximated by u.
The decomposition hence allows us to explore the time-varying impulse responses at different persistence levels. A scale-specific impulse response provides exact information about how a unit shock to the system propagates over various horizons at a given point of time. For example, in the case of daily data, the first scale, j=1, describes how a unit shock dissipates within 2 days, the second scale, j=2, within 4 days, and so on.
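At a fixed rescaled time point u, the scale-specific coefficients and shocks can be computed from the local Wold coefficients α(u,h) exactly as in the proposition. The following numpy sketch (function names ours) assumes the coefficients α(u,h), h=0,…,H-1, and the fitted innovations are already available.

import numpy as np

def extended_wold_coefficients(alpha, J):
    # alpha: array of local Wold coefficients alpha(u, h) for h = 0, ..., H-1 at a fixed u.
    H, betas = len(alpha), {}
    for j in range(1, J + 1):
        half, full = 2 ** (j - 1), 2 ** j
        beta_j = np.empty(H // full)
        for k in range(H // full):
            base = k * full
            beta_j[k] = (alpha[base:base + half].sum()
                         - alpha[base + half:base + full].sum()) / np.sqrt(full)
        betas[j] = beta_j  # beta^{j}(u, k), k = 0, 1, ...
    return betas

def scale_shocks(eps, j):
    # epsilon_t^{j}: Haar-type moving average of the fundamental innovations eps.
    half, full = 2 ** (j - 1), 2 ** j
    out = np.full(len(eps), np.nan)
    for t in range(full - 1, len(eps)):
        out[t] = (eps[t - half + 1:t + 1].sum()
                  - eps[t - full + 1:t - half + 1].sum()) / np.sqrt(full)
    return out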
§.§ Obtaining time-varying persistence structures from data
While the proposed approach identifies the localized, time-varying persistence structure of a time series, the next step is to use it to build a parametric model that can improve forecasts. The first step is to obtain the quantities from the previous section from the data.
In light of the assumptions that underpin the model, we conjecture that an economic variable of interest follows a time-varying parameter autoregressive (TVP-AR) model with p lags
x_t,T=ϕ_0(t/T)+ϕ_1(t/T)x_t-1,T+…+ϕ_p (t/T)x_t-p,T + ϵ_t,
that has the representation given in Proposition <ref> and can, under appropriate conditions, be approximated locally by a stationary process x_t,T ≈ x_t(u) for a given t/T ≈ u with ϕ_i(t/T) ≈ϕ_i(u). To obtain the decomposition, we need to identify the time-varying coefficient estimates Φ(t/T)=(ϕ_1(t/T),…,ϕ_p(t/T))' on the centered data x_t,T - ϕ_0(t/T), which we continue to denote by x_t,T for simplicity. Centering is particularly important in datasets that display a clear time trend, while it can be negligible in others; since our model assumes a zero-mean process, we take this step in any case.
§.§.§ Local linear estimation
We estimate the coefficient functions ϕ_i(t/T) by the local linear method. The local linear method of estimation has been employed in nonparametric regression due to its attractive properties, such as efficiency, bias reduction, and adaptation to boundary effects <cit.>. Assuming each ϕ_i(t/T) has a continuous second-order derivative in the interval [0,1], it can be approximated around u by a linear function through the first-order Taylor expansion
ϕ_i(t/T) ≈ϕ_i(u) + ϕ'_i(u)(t/T-u),
where ϕ'_i(u)=∂ϕ_i(u)/∂ u is its first derivative. Based on the local approximation of the model <ref>, the parameters Φ(u)={ϕ_1(u),…,ϕ_p (u)}' are estimated by minimising the locally weighted sum of squares
{Φ(u),Φ'(u)} = argmin_{(θ,θ')∈ℝ^p×ℝ^p}∑^T_t=1[x_t,T - U_t,T^⊤θ-(t/T-u )U_t,T^⊤θ' ]^2 K_b(t/T-u),
where U_t,T=(x_t-1,T,x_t-2,T,…,x_t-p,T)^⊤, and K_b(z) = 1/b K(z/b) is a kernel function with b=b_T>0 a bandwidth satisfying b→ 0 and T b →∞ as T →∞. Note that b controls the amount of smoothing used in the local linear estimation: roughly, we fit a set of weighted local regressions with a window size governed by the bandwidth b, discussed below. The estimator has a simple closed-form expression that can be obtained by elementary calculations <cit.>, and the coefficient estimates are asymptotically normally distributed under some regularity conditions.
Note that we use the centered data x_t,T - ϕ_0(t/T), where the local level ϕ_0(u) and its derivative are obtained as
{ϕ_0(u),ϕ'_0(u)} = argmin_{(μ, μ')∈ℝ^2}∑^T_t=1[x_t,T - μ - μ' (t/T-u )]^2 K_b(t/T-u).
As is well known, the local linear estimator is sensitive to the choice of the bandwidth b, and thus it is critical to choose an appropriate bandwidth in applications. Here we follow the commonly used cross-validation bandwidth choice for the time series case <cit.>.
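For concreteness, a numpy sketch of the local linear estimator of Φ(u) is given below. The Epanechnikov kernel and the function name are our choices for illustration; the cross-validated bandwidth selection is not shown, and the input series is assumed to be already centered.

import numpy as np

def local_linear_tvp_ar(x, p, u, b):
    # Local linear estimate of the TVP-AR(p) coefficients Phi(u) at rescaled time u with bandwidth b.
    x = np.asarray(x, dtype=float)
    T = len(x)
    rows, targets, weights = [], [], []
    for t in range(p, T):
        z = t / T - u
        w = max(0.0, 0.75 * (1.0 - (z / b) ** 2)) / b  # Epanechnikov kernel K_b(z)
        if w == 0.0:
            continue
        U = x[t - p:t][::-1]                     # (x_{t-1}, ..., x_{t-p})
        rows.append(np.concatenate([U, z * U]))  # regressors for (theta, theta')
        targets.append(x[t])
        weights.append(w)
    X, y, w = np.array(rows), np.array(targets), np.array(weights)
    # Weighted least squares; the first p entries of the solution estimate Phi(u).
    WX = X * w[:, None]
    coef = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)[0]
    return coef[:p]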
Finally, after obtaining the time-varying coefficients, we express the time series via its (local) Wold MA representation with innovation process ϵ_t (see Eq. <ref>). Then, we use the result of Proposition <ref> to obtain the horizon-specific impulse response coefficients associated with scale (horizon) j and time-shift k2^j. The decomposition needs to be truncated at a finite number of scales J and observations T. Hence, a finite version of the time-varying extended Wold decomposition at a given point of time u is considered:
x_t,T=∑_j=1^Jx_t,T^{j} + π_t^{J}=∑_j=1^J∑_k=0^N-1β^{j}(u,k) ϵ_t-k2^j^{j} + π_t^{J}(u),
where ϵ_t^{j} = 1/√(2^j)( ∑_i=0^2^j-1-1ϵ_t-i - ∑_i=0^2^j-1-1ϵ_t-2^j-1-i), and π_t^{J} is a residual component at scale J, defined as π_t^{J}(u) = ∑_k=0^+∞γ_k^{J}(u) ϵ_t-k2^J^{J} with ϵ_t^{J}=1/√(2^J)∑_i=0^2^J-1ϵ_t-i and γ_k^{J}(u)=1/√(2^J)∑_i=0^2^J-1α(u,k2^J+i); the estimates of the scale-specific coefficients β^{j}(u,k) are computed as β^{j}(u,k)= 1/√(2^j)( ∑_i=0^2^j-1-1α(u,k2^j+i) - ∑_i=0^2^j-1-1α(u,k2^j+2^j-1+i) ). The residual component π_t^{J}(u) is usually negligible, so we do not consider it in the estimation. For more details, see <cit.>.
§.§.§ Forecasting models with time-varying persistence
One of the key advantages of our model is that it captures a smoothly changing persistence structure that can be explored in forecasting. In a number of cases, it may be unrealistic to assume that the stochastic structure of time series is stable over longer periods. Moreover, non-stationarity may also be observed in shorter time series and forecasting under the assumption of stationarity may be misleading. A common approach to dealing with non-stationarity is to assume a model with smoothly changing trend and variance but a stationary error process <cit.>. While a number of authors consider forecasting in the locally stationary setting <cit.>, our approach extends these models by exploring the smoothly changing persistence structures of the data.
Our aim is to determine an h-step-ahead predictor for the unobserved x_T+h,T from the observed x_1,T,…,x_T,T data. Having the β^{j}(u,k), ϵ_t^{j} and ϕ_0(u), we can decompose the original time series x_t,T into a deterministic trend and orthogonal persistence components x_t,T^{j}, and estimate the weights w^{j} that identify the importance of specific horizons in the time series as
x_t,T=ϕ_0(t/T) + ∑_j=1^J w^{j}x_t,T^{j}+η_t,T.
Working with the stationary representation of the process, conditional h-step-ahead forecasts can be obtained directly by combining the trend forecast with the weighted forecasts of the scale components, following <cit.>
𝔼_t[x_T+h,T]=𝔼_t[x_T+h,T^{0}] + ∑_j=1^Jw^{j}𝔼_t[x_T+h,T^{j}]
where the conditional expected value of the trend 𝔼_t[x_T+h,T^{0}] is forecast with a TV-AR(1) model[The process is forecast with the local linear estimator in <ref>, with the Epanechnikov kernel having the width denoted as the kernel width - 2 in the subsequent forecasting exercises.]
and for the conditional expectation of the scale components 𝔼_t[x_t+1,T^{j}] we use the forecasting procedure provided by <cit.>.
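A minimal numpy sketch of this combination step is given below; estimating the weights by ordinary least squares is our reading of the regression equation above, the function names are ours, and the trend and scale-component forecasts are assumed to be produced by the procedures cited in the text.

import numpy as np

def estimate_scale_weights(x_detrended, scale_components):
    # OLS estimate of w^{j} in x_t = sum_j w^{j} x_t^{j} + eta_t.
    # scale_components: (T, J) array whose j-th column is the scale-j component x_t^{j}.
    w, *_ = np.linalg.lstsq(scale_components, x_detrended, rcond=None)
    return w

def combine_forecasts(trend_forecast, scale_forecasts, w):
    # E_t[x_{T+h,T}] = E_t[trend] + sum_j w^{j} E_t[x^{j}_{T+h,T}].
    return trend_forecast + float(np.dot(w, scale_forecasts))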
§ TIME-VARYING PERSISTENCE IN DATA
The proposed approach is useful for any time series that can be expected to change its persistence structure over time. Here we aim to demonstrate the importance of identifying the persistence structure on two different and important datasets. Both time series are quite different in nature, but share the common feature of a smoothly changing persistence structure.
In the first example, we examine the time-varying persistence structure of inflation, which is one of the most important macroeconomic time series. While it is the properties of aggregate inflation that are ultimately of interest to policymakers, an important factor underlying the behaviour of inflation over time is the characteristics and determinants of the behavioural mechanisms underlying price setting. The persistence of inflation has direct implications for the conduct of monetary policy. While time-varying models are used in the literature <cit.> to capture the time variation, a number of authors also consider the decomposition of inflation into transitory and permanent components <cit.>. Here we build a more flexible model that explores the time-varying persistence structure of inflation.
In the second example, we will look at the volatility of stocks. Similar to inflation, stock market volatility is of great interest as one of the key measures of risk and uncertainty. The study of its heterogeneous persistence structure, which evolves dynamically over time, will be useful to a wide audience.
§.§ Time-varying persistence in the U.S. inflation
The data we use is the Personal Consumption Expenditures (PCE) price index[The Personal Consumption Expenditures price index measures the prices that US consumers pay for goods and services. The change in the PCE price index captures inflation or deflation across a wide range of consumer expenditures.] available on the Federal Reserve of St-Louis website[<https://fred.stlouisfed.org>] as a proxy for US inflation. Our data contain 781 monthly observations over the period from January 1959 to February 2023, and we look at the logarithmic change in the index.
Inflation is an interesting time series for our analysis because the shocks that drive inflation have varying degrees of persistence and tend to change over time. Inflation is driven by different shocks in stable periods than in turbulent periods such as the COVID-19 crisis. Such smoothly changing persistence structure of inflation remains hidden to the observer when using classical time series tools such as impulse response functions.
Figure <ref> illustrates this using our TV-EWD. Specifically, the plot shows the ratio of β^j(t/T,k) to the sum across scales ∑_j β^j(t/T,k), with the j scales representing 2-, 4-, 8-, 16- and 32-month persistence of shocks at the first horizon (k=1), i.e., at horizon k2^j. That is, we look at the relative importance of the information at the 2^j horizon in the multiscale impulse response function.
At each time period, we identify the persistence structure of the shocks affecting the US inflation series. There are periods where most of the shocks have a transitory duration of 2 or 4 months. For example, transitory shocks of up to two months had the largest share of information in the years 1959-1964, 1968-1970, 1994-2001. In contrast, the years 1966-1967, 1976-1978 and 2008-2010 were mainly driven by more persistent shocks of up to 8 months. It is also interesting to note that the persistence structure is very different during several different crises, which are marked by NBER recession periods in the plot. During the recession from November 1973 to March 1975, inflation was mainly driven by shocks lasting 16 and 32 months and was therefore very persistent.
We see that the persistence structure of the inflation time series is rich and changes smoothly over time. Next, we explore how this precise identification helps in modelling and forecasting US inflation.
§.§.§ Forecasting Inflation
The exploratory analysis in the previous section shows that the persistence structure of US inflation is rich and varies smoothly over time. We aim to explore this feature in order to propose a forecasting model based on the precisely identified time-varying persistence. To evaluate the performance of our model, we use the unconditional AR(3) model, the extended Wold decomposition model of <cit.>, and two time-varying autoregression models. In this way we will see how persistence decomposition improves on the usual time-varying models, and how time variation improves on persistence decomposition.[Both TV-HAR and TV-AR(3) use the local linear estimator with kernel width 0.3.] For TV-EWD estimation and forecasting, we use the procedure described in section <ref>, with J=5 scales, a model with two autoregressive lags, and a kernel width of 0.6 for the trend and 0.2 for the moving average parameters. We divide the observed time series into two parts:
x_1,T,…,x_m,T_in-sample,x_m+1,T,…,x_T,T_out-of-sample,
where the in-sample data are used to fit the models and we then compare the out-of-sample predictive performance using the root mean square error (RMSE) and mean absolute error (MAE) loss functions. Using the first 645 observations for the in-sample period, we are left with 136 out-of-sample observations. The forecast horizons considered are h=[1,2,6,12] months ahead. The forecasting results, expressed as the mean of the loss functions relative to the benchmark AR(3) model, are shown in Table <ref>.
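The out-of-sample comparison can be organised as in the sketch below (our illustration; the forecast_model and forecast_bench callables are hypothetical placeholders for the fitted TV-EWD and benchmark models, not the paper's implementation).
```python
import numpy as np

def relative_losses(y, h, m, forecast_model, forecast_bench):
    """Roll through the out-of-sample period and report RMSE and MAE of the
    candidate model relative to the benchmark for an h-step-ahead forecast.
    y: full series, m: in-sample size, forecast_*: callables taking the
    history and the horizon and returning a point forecast."""
    err_m, err_b = [], []
    for t in range(m, len(y) - h):
        history = y[:t]
        err_m.append(y[t + h - 1] - forecast_model(history, h))
        err_b.append(y[t + h - 1] - forecast_bench(history, h))
    err_m, err_b = np.array(err_m), np.array(err_b)
    rel_rmse = np.sqrt(np.mean(err_m ** 2)) / np.sqrt(np.mean(err_b ** 2))
    rel_mae = np.mean(np.abs(err_m)) / np.mean(np.abs(err_b))
    return rel_rmse, rel_mae

# smoke test: with identical naive forecasters both relative losses equal 1.0
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) * 0.01
naive = lambda hist, h: hist[-1]
print(relative_losses(y, 1, 200, naive, naive))
```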
The results show that the TV-EWD model provides the best forecasting performance at all forecasting horizons. This advantage increases as the forecast horizon lengthens. This suggests that the accurate identification of the rich persistence structure of the inflation time series is important for longer-term forecasting. Interestingly, this advantage is also strong for the non-time-varying model, EWD. As this model uses the same persistence levels (horizons) as our TV-EWD, we can conclude that the identification of the persistence structure is more important than the time-varying ability in the case of long-run forecasts. Importantly, the results also show that allowing for time variation in the persistence structure further improves forecasts.
§.§ Persistence structure of volatility
The second important time series with a potentially interesting structure for our analysis is volatility. Volatility is one of the key measures in finance as it captures fluctuations in asset prices and hence their risk. We use daily data on volatility[Realised volatility is computed as the sum of the squared logarithmic 5-minute returns for each day of the sample.] for all stocks listed in the S&P 500 index from 5 July 2005 to 31 August 2018 from TickData, and thus we work with 3278 days of the 496 stock returns.
Again, we start by illustrating the persistence structure of the data. Since our sample contains 496 stocks, we have chosen to look at the first available (in alphabetical order), which is the company Agilent Technologies Inc. Note that we have looked at other stocks and the persistence structure is similarly rich to the one we are discussing. Figure <ref> plots the average β^j(t/T,k)/∑_j β^j(t/T,k) ratio, with the j scales representing 2-, 4-, 8-, 16- and 32-day persistence of shocks, for each year of the sample at the first horizon (k=1) of the multiscale impulse response function. That is, for each year we can see the average contribution of the shocks to the volatility series. The reason we look at averages is that the daily sample contains rich dynamics that are difficult to visualise, and at the same time the aggregate information for one year strongly supports our objective.
Specifically, we can again see some periods that are mostly driven by transitory shocks of up to 4 days, such as 2005, 2010 or 2014, as well as periods that are driven by more persistent shocks, such as 2008, 2011 or 2016-2018.
Overall, we can see how rich the persistence dynamics of the volatility series are. While it is important to capture the time variation of the dependence structures in the series, it is also crucial to capture the smoothly changing persistence.
§.§.§ Forecasting volatility
Finally, we use the time-varying persistence structure we identify to build a more accurate forecasting model for volatility. We compare the out-of-sample forecasting performance of our TV-EWD model with the popular heterogeneous autoregressive (HAR) model of <cit.>, then the Extended Wold Decomposition (EWD) of <cit.>, two time-varying parameter alternatives, TV-AR(3), and TV-HAR [Both TV-HAR, TV-AR(3) use the local linear estimator with kernel width 0.3.] for the realised volatilities of all S&P 500 constituents available over the sample. The model set benchmarks both the time variation and the persistence structure, and thus we will be able to see how our model improves the forecasts. We estimate the model parameters on the information set containing the first 1000 observations and save the rest for 1, 5 and 22 day out-of-sample tests. As we are exploring the changing behaviour of the data, we also look at different time periods to see how sample specific the results are. The richer the localised structure in the data, the larger the gains we expect. Therefore, along with the aggregate results for the entire out-of-sample period from August 2009 to August 2018, we look at two specific periods, August 2009 to August 2012 at the beginning of the out-of-sample period, and then August 2016 to August 2017.
For TV-EWD estimation and forecasting we use the procedure described in section <ref> with J={5,5,7} scales, {2,5,15} autoregressive lags for h={1,5,22} forecasts respectively. Note that the choice of higher order lags in the autoregression naturally improves forecasts with increasing horizons. The kernel width of 0.2 minimises the mean square error of the forecasts. We compare the forecast performance using the MAE (mean absolute error) and RMSE (root mean square error) loss functions relative to the benchmark HAR model.
As the results are obtained on the large sample of 496 stocks, we concentrate the results in Table <ref>, which reports the median estimates for all stocks and is accompanied by box plots showing the errors for all stocks in Figures <ref> (MAE) and <ref> (RMSE). Focusing on the results in Table <ref>, we can see that TV-EWD outperforms all other models across forecast horizons and samples. This result is particularly strong as it holds for the median of the errors computed for the 496 stocks in the sample, except for the RMSE in the first period, August 2009 - August 2012, where TV-HAR produces consistently better forecasts. Figures <ref> (MAE) and <ref> (RMSE), which show more granular results for all stocks in box plots, confirm that TV-EWD produces much better forecasts than all other models for most of the stocks considered, across the horizons and samples considered.
Looking more closely at the results, it is important to note that we are comparing several different approaches. First, a popular HAR model captures the unconditional persistence structure with 22 lags in the autoregression, while EWD improves the results at longer horizons by identifying more precise persistence structures. This result is consistent with the findings of <cit.>, although they use only single time series, and our result holds for a large cross-section of stock volatilities.
Second, and more importantly, adding time variation to autoregressive models significantly improves the results, as time-varying parameters capture the dynamics in the data. In particular, TV-HAR significantly improves forecasts. Finally, when the persistence structure is allowed to vary smoothly over time by our TV-EWD model, we document further improvements in forecasts. The ability to appropriately incorporate changing persistence structure in the data gives TV-EWD an advantage especially at longer horizons.
It is also interesting to note that in the much quieter period from August 2016 to August 2017, where we do not find a very heterogeneous persistence structure, the complex TV-EWD model performs similarly to both HAR and TV-HAR models in terms of RMSE, although it still has the best results in terms of MAE.
§ CONCLUSION
A representation that allows for smoothly changing persistence structures in economic data has been constructed to study the dynamic persistence structures of important macroeconomic and financial data and to improve their forecasting. The model provides valuable information about the fundamental behaviour of the time series, which can be used to construct more precise models and forecasts.
§ APPENDIX: LOCALLY STATIONARY PROCESSES
(Locally Stationary Processes) <cit.>: Let the sequence of stochastic processes x_t,T, (t=1,⋯,T) be called a locally stationary process if x_t,T has a representation
x_t,T = ∑_h=-∞^+∞α_t,T(h) ϵ_t-h
satisfying the following conditions:
sup_t,T|α_t,T (h) |≤K/l(h),
where l(h) for some κ>0 is defined as:
l(h):= 1 for | h|≤ 1, and l(h):= | h|log^1+κ| h| for | h| > 1,
and K is not dependent on T, and there exist functions α(·,h):(0,1]→ with
sup_t=1,…,T|α(t/T,h) |≤K/l(h),
sup_h ∑^T_t=1|α_t,T(h) - α( t/T,h ) |≤ K,
V(α(·,h))≤K/l(h),
where V(·) denotes the total variation on [ 0,1], and ϵ_t ∼ iid with E(ϵ_t) = 0 and E(ϵ_t^2) = 1. We also assume that all moments of ϵ_t exist.
Let the sequence of stochastic processes x_t,T, (t=1,⋯,T) be called a locally stationary process if x_t,T has a representation
x_t,T=∑_j=1^+∞∑_k=0^+∞β_t,T^{j}(k) ϵ_t-k2^j^{j},
satisfying the following conditions ∀ j:
sup_t,T|β_t,T^{j} (h) |≤K/l(h),
where l(h) for some κ>0 is defined as:
l(h):= 1 for | h|≤ 1, and l(h):= | h|log^1+κ| h| for | h| > 1,
and K is not dependent on T, and there exist functions β^{j}(·,h):(0,1]→ with
sup_t=1,…,T|β^{j}(t/T,h) |≤K/l(h),
sup_h ∑^T_t=1|β^{j}_t,T(h) - β^{j}( t/T,h ) |≤ K,
V(β^{j}(·,h))≤K/l(h),
where V(·) denotes the total variation on [ 0,1], and ϵ_t ∼ iid with E(ϵ_t) = 0 and E(ϵ_t^2) = 1. We also assume that all moments of ϵ_t exist.
|
http://arxiv.org/abs/2306.05126v1
|
20230608115058
|
Mapping Brains with Language Models: A Survey
|
[
"Antonia Karamolegkou",
"Mostafa Abdou",
"Anders Søgaard"
] |
cs.CL
|
[
"cs.CL"
] |
Mapping Brains with Language Models: A Survey
Antonia Karamolegkou, Mostafa Abdou, Anders Søgaard
==================================================================================
Over the years, many researchers have seemingly made the same observation: Brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models. In an attempt to evaluate how much evidence has been accumulated for this observation, we survey over 30 studies spanning 10 datasets and 8 metrics. How much evidence has been accumulated, and what, if anything, is missing before we can draw conclusions? Our analysis of the evaluation methods used in the literature reveals that some of the metrics are less conservative. We also find that the accumulated evidence, for now, remains ambiguous, but correlations with model size and quality provide grounds for cautious optimism.
§ INTRODUCTION
Advances in neuroimaging technologies have made it possible to better approximate the spatiotemporal profile of the computations responsible for language in the brain <cit.>. At the same time, advances in natural language processing have produced language models (LMs) with high performance in many tasks <cit.>.
This progress has motivated scientists to start using state-of-the-art LMs to study neural activity in the human brain during language processing <cit.>. Conversely, it has also prompted NLP researchers to start using neuroimaging data to evaluate and improve their models <cit.>.
At the conceptual core of these studies lies the suggestion that representations extracted from NLP models can (partially) explain the signal found in neural data.
These representations can be based on co-occurrence counts <cit.> or syntactic and discourse features <cit.>. Later studies use dense representations such as word embeddings <cit.>
and recurrent neural networks to extract contextual stimuli representations <cit.>. More recently, transformer-based architectures have been shown to align even better with neural activity data <cit.>.
Such work shows that LMs can be trained to induce representations that are seemingly predictive of neural recordings or features thereof. However, perusing the literature, it quickly becomes clear that these papers all rely on different experimental protocols and different metrics <cit.>. So the questions are:
How much evidence has really been accumulated in support of structural similarities between brains and LMs? And more importantly, what exactly, if anything, drives this alignment, and what are we to understand from it? After gathering all the studies, we examine their evaluation metrics and their interrelationships, providing discussions on the corresponding findings.
Contributions Our study provides four major contributions for the wider NLP audience: (a) a detailed review of the literature on mappings between fMRI/MEG recordings and representations from language models;
(b) an overview of the datasets and mapping methods;
(c) an analysis of the evaluation setups that have been used to link neural signals with language models and how they relate; (d) a discussion of what drives this representational alignment and what we, as a field, can make of it going forward.
Terminology First, a brief note on terminology:
Neural response measurements refer to recordings of the brain activity of subjects reading or listening to language. We focus on
(a) functional magnetic resonance imaging (fMRI), which measures neuronal activity via blood oxygenation level-dependent contrast and has a high spatial resolution but poor temporal resolution (3–6s) and (b) magnetoencephalography (MEG), which involves the measurement of the magnetic field generated by the electrical activity of neurons in the cortex, providing a more accurate resolution of the timing of neuronal activity.
Voxels refer to the smallest unit of data
in a neuroimage, being the three-dimensional equivalent of a pixel in two-dimensional images <cit.>.
Finally, we use brain decoding to refer to predicting stimuli from brain responses (i.e.
reading the brain).
Brain encoding will then refer to predicting brain responses from stimuli. Whereas decoding models serve as a test for the presence of information in neural responses, encoding models can be interpreted as process models constraining brain-computational theories
<cit.>.
§ DATASETS
To infer a mapping between language models and brains, researchers rely on datasets in which brain activity is recorded in response to linguistic stimuli. In some studies, the stimuli are single words <cit.>
or sentences displayed on a screen <cit.>. In others, participants read longer stories <cit.> or listened to speech or podcasts <cit.>. Table <ref> lists publicly available datasets that have been used in the context of mapping language models to and from recordings of brain response. Differences between the datasets –the number of participants, the equipment, the experimental setup, pre-processing steps, and probabilistic corrections – should lead us to expect some variation in what researchers have concluded <cit.>.
§ HOW TO PREDICT BRAIN ACTIVITY?
In this section, we survey work in which neural responses are predicted from linguistic representations. Such work typically aims to shed light on how language functions in the brain.
One of the earliest studies exploring the mapping between brain and language representations is by <cit.>, who trained a linear regression model on a set of word representations extracted from 60 nouns using 115 semantic features based on co-occurrence statistics, to predict the corresponding fMRI representations of the same nouns. They use pairwise matching accuracy for evaluation, holding out two words w and w', and show that the predicted fMRI image for a word w is closer to the real fMRI image for w than to the real fMRI image for w', at above-chance levels.
<cit.> also report percentile rank results, ranking predicted fMRI images by similarity with the real image of w. We discuss how the metrics relate in 6.
The dataset of <cit.> is also used by <cit.>, who extract linguistic features from part-of-speech taggers, stemmers, and dependency parsers, showing that dependency parsers are the most successful in predicting brain activity. They also use leave-2-out pair-matching as their performance metric.
Later on, <cit.> moved on to predicting brain activation patterns for entire sentences rather than for isolated words. They recorded fMRI neural response measurements while participants read a chapter from Harry Potter and the Sorcerer’s Stone, then extracted a set of 195 features for each word (ranging from semantic, syntactic properties to visual and discourse-level features) to train a comprehensive generative model that would then predict the time series of the fMRI activity observed when the participants read that passage. Leave-2-out pair-matching accuracy is used for evaluation.
<cit.>, in contrast, use fMRI recordings of participants listening to spoken narrative stories, representing each word in the corpus as a 985-dimensional vector encoding semantic information driven by co-occurrence statistics. They train per-voxel linear regression models and evaluate their predicted per-word fMRI images by their per-voxel Pearson correlation with the real fMRI images, showing that 3-4 dimensions explained a significant amount of variance in the FMRI data.
<cit.> are among the first to use neural language models, using recurrent models to compute contextualized embeddings, hidden state vectors of previous words, and word probabilities. They run their experiments of MEG recordings of participants reading Harry Potter, obtained in a follow-up study to <cit.>. From the three sets of representations, they then train linear regression models to predict the MEG vectors corresponding to each word, and the regression models are then evaluated by computing pair-matching accuracy.
Similarly, <cit.> evaluates static word embeddings on the data from <cit.>, learning linear transformation from word embeddings into an fMRI vector space. The predictions are evaluated through mean squared error (MSE).
<cit.> evaluate recurrent language models
against the fMRI dataset from <cit.>. Their findings show that contextual language model representations align significantly better with brain responses than static word embedding models. Their evaluation metric is the total sum of explained variance.[The squared Pearson correlation coefficient. We will not distinguish between studies using Pearson correlation and studies using explained variance. See Appendix <ref>.]
Following this, <cit.> use attention-based transformer language models for brain mapping. They finetune BERT <cit.> to predict neural response measurements from the Harry Potter dataset, showing that the fine-tuned models have representations that encode more brain-activity-relevant language information than the non-finetuned models. They rely on pair-matching accuracy as their performance metric.
As in <cit.>, <cit.> map static word embeddings into the vector space of the neural response measurements (fMRI). They introduce a new dataset of such measurements from subjects listening to natural stories.
They rely on explained variance as their performance metric.
<cit.> evaluate word and sequence embeddings from 4 recurrent and attention-based transformer language models, using the Harry Potter fMRI dataset. They evaluate models across layers, context lengths, and attention types, using pairwise matching accuracy as their performance metric. In a later study, <cit.> induce compositional semantic representations of "supra-word meaning" which they then use to predict neural responses across regions of interest, evaluating their models using Pearson correlation.
Also using the Harry Potter data, <cit.> evaluate five models, one static and four contextualized, relying on a variant of representational similarity analysis <cit.>.
The results suggest that models provide representations of local contexts that are well-aligned to neural measurements. However, as information from further away context is integrated by the models, representations become less aligned to neural measurements.
In a large-scale study, <cit.> examine the relationships between 43 diverse state-of-the-art neural network models (including embedding models, recurrent models, and transformers) across three
datasets (two fMRI, one electrocorticography). They rely on a metric they term Brain Score, which involves normalising the Pearson correlation by a noise ceiling. Their results show that
transformer-based models perform better than recurrent or static models, and larger models perform better than smaller ones.
Similarly, in <cit.>, the <cit.> fMRI and MEG datasets are used to compare a variety of transformer architectures. They study how architectural details, training settings, and the linguistic performance of these models independently account for the generation of brain correspondent representations. The results suggest that the better language models are at predicting words from context, the better their activations linearly map onto those of the brain.
<cit.> evaluate three static and five attention-based transformer models, in combination with four fine-tuning tasks and two machine translation models.
They train linear regression models to evaluate their word-level representations
against a new fMRI dataset from participants listening to podcast stories. They find a low-dimensional structure in language representations that can predict brain responses. In a similar setting, <cit.> examine why some features fit the brain data better arguing that the reason is that they capture various linguistic phenomena.
<cit.> evaluate syntactic features in conjunction with BERT representations, finding that syntax explains additional variance in brain activity in various parts of the language system, even while controlling for complexity metrics that capture processing load.
In a series of studies, <cit.> investigate GPT-2's activations in predicting brain signals using the <cit.> dataset.
Their evaluation metric is Brain Score <cit.>.
To determine which factors affect brain encoding, <cit.> examine the impact of test loss, training corpus, model architecture, and fine-tuning in various models using the <cit.> dataset. They evaluate model performance using Pearson correlation.
<cit.> study the impact of context size in language models on how they align with neural response measurements. They use the <cit.> dataset and evaluate recurrent and attention-based transformer architectures. In a later study, <cit.> use the <cit.> dataset and evaluate BERT-base models (fine-tuned for various NLP tasks).
They showed that neural response predictions from ridge regression with BERT-base models fine-tuned for coreference resolution, NER, and shallow syntactic parsing explained more variance for <cit.> response measurements.
On the other hand, tasks such as paraphrase generation, summarization, and natural language inference led to better encoding performance for the <cit.> data (audio).
Using the same dataset, in <cit.> it is shown that the presence of surface, syntactic, and semantic linguistic information is crucial for the alignment across all layers of the language model.
They use pairwise matching accuracy and/or Pearson correlation as their performance metrics in these studies.
<cit.> extract feature representations from four attention-based transformer models. They evaluate the impact of fine-tuning on the BookSum dataset <cit.>. All models are used to predict brain activity on the Harry Potter data. Pairwise matching accuracy and Pearson correlation are their performance metrics.
<cit.> focus more narrowly on variants of GPT-2, showing that improvements in alignment with brain recordings are probably not because of the next-word prediction task or word-level semantics, but due to multi-word semantics. Their reported metric is Pearson correlation.
Intermediate summary The above studies differ in many respects. Several metrics are used: pairwise-matching accuracy,[Some papers <cit.> use a variant of pairwise-matching accuracy, in which the model has to discriminate between two averages of 20 random predicted neural response measurements. We do not distinguish between the two variants.] Pearson correlation (or Brain Score), mean squared error, and representational similarity analysis. Even studies that report the same performance metrics are not directly comparable because they often report on results on different datasets and use slightly different protocols, e.g., <cit.> and <cit.>. <cit.> compare various encoding experiments and receive very diverse results for different evaluation metrics. The diversity of metrics and data renders a direct comparison difficult. To remedy this, we consider how the metrics compare in 6.
§ HOW TO PREDICT LINGUISTIC STIMULI?
Decoding models work in the other direction and aim to predict linguistic features of the stimuli from recordings of brain response.
<cit.> introduce a decoder that predicts stimuli representation of semantic features given fMRI data.
They introduce a novel dataset of neural responses aligned with annotation of
concrete and abstract
semantic categories (such as pleasure, ignorance, cooking etc.).
They evaluate static word embeddings by applying ridge regression to predict per-word fMRI vectors. A separate regression model is trained per dimension, allowing for dimension-wise regularization. The model is evaluated in terms of pairwise matching accuracy, but also in terms of percentile rank, adapted to the decoding scenario.
<cit.>
also train linear regression models which map from the response measurements in <cit.>, but to representations of the same sentences produced by the BERT language model finetuned on different natural language understanding tasks. The regression models are evaluated using two metrics: mean squared error and average percentile rank.
Their results show that fine-tuning with different NLU objectives leads to worse alignment and that, somewhat surprisingly, the only objective which does lead to better alignment is a scrambled language modeling task where the model is trained to predict scrambled sentences.
<cit.> re-examine the work of <cit.> using various metrics (pairwise matching accuracy, percentile rank, cosine distance, R^2, RSA), comparing decoder models (ridge regression, perceptron, and convolutional neural networks).[Only the former two are linear and relevant for this meta-study.] They show that positive results are only obtained using pairwise matching accuracy.
<cit.> investigate whether aligning language models with brain recordings can be improved by biasing their attention with annotations from syntactic or semantic formalisms. They fine-tune the BERT models using several syntacto-semantic formalisms and evaluate their alignment with brain activity measurements from the <cit.> and <cit.> datasets. Their results – obtained using Pearson correlation as performance metric – are positive for two in three formalisms.
<cit.> propose a new evaluation method for decoding, a so-called cross-modal cloze task. They generate the data for the task from the neural response measures in <cit.> and <cit.>. The task itself amounts to a cloze task in which the context is prefixed by the fMRI image of the masked word. They evaluate models using precision@k. Note how this task is considerably easier than linearly mapping from language model representations into fMRI images, and precision@k results therefore cannot be compared to those obtained in other settings. Their best precision@1 scores are around 0.3, but only marginally (0.03) better than a unimodal LM.
Finally, <cit.> try a more realistic setup by predicting language from fMRI scans of subjects not included in the training. They use the <cit.> dataset and evaluate the regression models based on pairwise accuracy and precision@k (or top-k accuracy). They propose evaluating with direct classification as a more demanding setup to evaluate and understand current brain decoding models.
Intermediate summary Decoding studies also differ in many respects. Several metrics are used: pairwise-matching accuracy, Pearson correlation, percentile rank, cosine distance, precision@k, and representational similarity analysis; and several datasets are used. <cit.> criticize the evaluation techniques of decoding studies and suggest adopting task and mechanism explicit models. It is of particular interest to our study that both <cit.> only report positive results for pairwise matching accuracy compared to other metrics. This suggests pairwise matching accuracy is a less conservative metric (and maybe less reliable).
§ PERFORMANCE METRICS
We present the evaluation metrics used in the above studies and discuss how they relate.
See Table <ref> for a summary of metrics and corresponding studies.
<cit.> introduce pairwise matching accuracy. Because of their small sample size, they use a
leave-2-out cross-validation, which later work also adopted. The metric is a binary classification accuracy metric on a balanced dataset, so a random baseline converges toward 0.5. Many studies have relied on this metric, both in encoding and decoding (see Table <ref>).[The method is often referred to as 2v2 Accuracy. The variant that averages across 20 images, is then referred to as 20v20 Accuracy.]
Pearson correlation is another widely used metric in the studies surveyed above, measuring the linear relationship between two variables and providing insight into the strength and direction of their association. <cit.> compute the Pearson correlation between predicted and actual brain responses, using Gaussian random vectors to test statistical significance. The resulting p-values are corrected for multiple comparisons within each subject using the false discovery rate (FDR) <cit.>. Others have used Bonferroni correction <cit.> or a block-wise permutation test <cit.> to evaluate the statistical significance of the correlation <cit.>. Some report R^2 (explained variance) instead of or in addition to correlation coefficients <cit.>.
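A sketch of the per-voxel correlation test with an FDR correction (our illustration using scipy and statsmodels; the array layout is an assumption) is:
```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def voxelwise_correlations(pred, true, alpha=0.05):
    """pred, true: (n_samples, n_voxels) arrays of predicted and measured responses.
    Returns per-voxel r and a boolean mask of voxels significant after FDR (BH)."""
    n_vox = true.shape[1]
    r, p = np.empty(n_vox), np.empty(n_vox)
    for v in range(n_vox):
        r[v], p[v] = pearsonr(pred[:, v], true[:, v])
    reject, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return r, reject
```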
Others have adopted a more elaborate extension of Pearson correlation, namely BrainScore <cit.>. BrainScore
is estimated on held-out test data, calculating Pearson’s correlation between model predictions and neural recordings divided by the estimated ceiling and averaged across voxels and participants.
Percentile rank was first used for encoding <cit.>, but can also be used for decoding <cit.>. In encoding, the predicted brain image for w is ranked among the predicted images for a set of candidate words w' by their similarity to the real (ground truth) image for w. The average rank is then reported. For decoding, they rank word vectors rather than neural response images. Note the similarity metric is unspecified, but typically cosine distance is used.
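In code, the encoding-side percentile rank could be computed as follows (our sketch; cosine similarity is again assumed):
```python
import numpy as np
from scipy.spatial.distance import cosine

def percentile_rank(pred, true, index):
    """Rank the predicted image for word `index` among the predicted images of all
    candidates by similarity to the real image of that word; 1.0 = ranked first,
    chance level = 0.5."""
    sims = [1 - cosine(true[index], pred[k]) for k in range(len(pred))]
    order = np.argsort(sims)[::-1]               # most similar predicted image first
    rank = int(np.where(order == index)[0][0])   # 0-based position of the correct prediction
    return 1 - rank / (len(pred) - 1)
```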
Mean squared error, the average of the squared differences between word vectors and neural responses, was first used for encoding in <cit.> on a held-out test split. It was also used by <cit.>.
Representational similarity analysis
(RSA) was introduced in <cit.> as a non-parametric way to characterize structural alignment between the geometries of representations derived from disparate modalities. RSA
abstracts away from activity patterns themselves and
instead computes representational similarity
matrices (RSMs), which characterize the information carried by a given representation method
through global similarity structure. A rank correlation coefficient is computed between RSMs derived from the two spaces, providing a summary statistic indicative of the overall representational alignment between them. Being non-parametric, RSA circumvents several methodological weaknesses of fitted mappings (such as overfitting). <cit.>, <cit.>, and <cit.> apply (variations of) RSA to investigate the relations between different model components, and then to study the alignment of these components with brain responses.
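A compact sketch of the computation (our illustration; Pearson similarity for the RSMs and Spearman correlation between their upper triangles are assumptions that match common practice) is:
```python
import numpy as np
from scipy.stats import spearmanr

def rsa_score(rep_a, rep_b):
    """rep_a, rep_b: (n_items, dim_a) and (n_items, dim_b) representations of the same
    stimuli. Build each representational similarity matrix, then rank-correlate
    their upper triangles."""
    def rsm(rep):
        return np.corrcoef(rep)              # item-by-item Pearson similarity matrix
    iu = np.triu_indices(len(rep_a), k=1)    # off-diagonal upper triangle
    return spearmanr(rsm(rep_a)[iu], rsm(rep_b)[iu]).correlation
```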
Cosine similarity was used in <cit.> to select between the candidate images in pairwise matching accuracy, as well as in percentile rank and RSA, but the raw cosine similarities between predicted and real images or embeddings can also be used as a metric.
<cit.> use this metric to quantify how close the predicted word vectors are to the target. Finally, <cit.> use precision@k, a standard metric in other mapping problems, e.g., cross-lingual word embeddings <cit.>.
Comparisons Most metrics are used to evaluate both encoding and decoding models (pairwise matching accuracy, Pearson correlation, percentile rank, MSE, RSA, cosine distance).
Results for two of the most widely used metrics – pairwise matching accuracy[When discriminating averages over 20 images <cit.>, scores are naturally lower.] and percentile rank – tend to be around 0.7–0.8 with generally better results for more recent architectures and larger LMs. To draw conclusions across studies relying on different metrics, we need to investigate which metrics are more conservative, and how different metrics relate.
Pairwise matching accuracy vs. Pearson correlation
It seems that pairwise matching accuracy tends to increase monotonically with Pearson correlation. Consider three sets of distances over corresponding point sets, A, B, and C. If A and B are more strongly linearly correlated than A and C, under an optimal linear mapping Ω (minimizing point-wise squared error distance), 𝔼[(a-bΩ)^2]<𝔼[(a-cΩ)^2]. Even in this conservative setting in our synthetic experiments in Appendix <ref>, the correlation between matching accuracy and Pearson correlation was very high, ~0.9.
Pairwise matching accuracy vs. percentile rank Both metrics have random baseline scores of 0.5, and
they will converge in the limit. If a has a percentile rank of p in a list 𝒜, it will be higher than a random member of 𝒜 p percent of the time. In our experiments in Appendix <ref>, the correlation converges toward 1.0, with values consistently higher than 0.8 for N=100.
Pairwise matching accuracy vs. precision@k are also positively correlated. Perfect score in one entails perfect score in the other, but precision@k can of course be very small for very high values of pairwise matching accuracy (especially if the set of candidate words is big). Conversely, we can have saturation for high values of k, because matching accuracies higher than n-k/n will mean near-perfect precision@k scores. In practice, precision@k (for low values of k) will be much more conservative, however. The correlation coefficient for N=100 (see Appendix <ref>) tends to lie around 0.7.
Relative strength Pairwise Matching Accuracy is a relatively permissive performance metric. To see this, consider the scenario in which all target words can be divided into two equal-sized buckets based on word length (number of characters). Say the neural responses capture nothing but this binary distinction between long and short words, but do so perfectly. Moreover, our mapping method, e.g., linear regression, learns this from training data. Now, from this alone, the pairwise matching accuracy will converge toward μ=0.75, since our model will do perfectly (1.0) on half of the pairs, and exhibit random performance (0.5) on the other half. If the neural responses tracked word length (and not just the distinction between short and long words), performance would be even better. In other words, Pairwise Matching Accuracy scores around 0.7-0.8 (observed in the studies above) may only reflect very shallow processing characteristics. The fact that <cit.> only observed good results with this metric led them to adopt a rather critical stance, for good reasons.
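This back-of-the-envelope argument is easy to verify numerically. The simulation below (our illustration, not taken from any of the surveyed studies) generates responses that encode only a binary bucket distinction and recovers a pairwise matching accuracy close to 0.75.
```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, dim = 200, 50
bucket = np.repeat([0, 1], n // 2)                          # "short" vs. "long" words
patterns = rng.normal(size=(2, dim))                        # one response pattern per bucket
true = patterns[bucket] + 0.1 * rng.normal(size=(n, dim))   # responses encode only the bucket
pred = patterns[bucket] + 0.1 * rng.normal(size=(n, dim))   # a mapping that learned only the bucket

wins = 0
pairs = list(combinations(range(n), 2))
for i, j in pairs:
    correct = np.linalg.norm(pred[i] - true[i]) + np.linalg.norm(pred[j] - true[j])
    swapped = np.linalg.norm(pred[i] - true[j]) + np.linalg.norm(pred[j] - true[i])
    wins += correct < swapped
print(wins / len(pairs))   # approx. 0.75: near-perfect on cross-bucket pairs, chance within buckets
```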
Other metrics are clearly more conservative. For a set of n candidate words, a random mapping will induce a precision@1-score of 1/n. While hubs may inflate scores for larger values, the metric is extremely conservative for small values of k. However, only <cit.> use this metric, and they modify the experimental protocol substantially, making the task much easier by providing additional input to a non-linear model. The small improvement from adding neural response input is interesting, but could potentially be explained by shallow processing characteristics.
They argue that analogy testing would provide a better evaluation protocol:
one would ideally use standard metrics such as semantic relatedness judgment tasks, analogy tasks, etc. [but] this is not possible due to the limited vocabulary sizes of the available brain datasets
Such evaluation is possible on small scale, though, and increasingly larger fMRI datasets are becoming available (see above). <cit.> have identified analogical reasoning in fMRI brain activation spaces. The analogies are computed using vector offset and probe the systematicity of how semantic relations are encoded. If a model encodes the capital-of relation systematically, we can retrieve the capital of Germany by subtracting the fMRI vector for 'France' from the sum of the fMRI vectors for 'Germany' and 'Paris'. This is the same kind of analogical reasoning found in language models <cit.>. <cit.> show that the more language models satisfy analogies, the more isomorphic they are.
So far, it seems that, with the possible exception of <cit.>, there is little evidence for structural similarities, beyond what could be induced by shallow processing characteristics, but what about all the studies that report strong Pearson correlations? Per-voxel correlation coefficients are low on average, but across the above studies, typically only around 4-40% of the voxels exhibit significant correlations <cit.>. Since these correlations have been replicated across different datasets, they are generally not disputed, but could still reflect rather shallow processing characteristics.
On a more positive note, several studies show that larger (and better) language models align better with neural response measurements <cit.>. This suggests that language models in the future may align even better with such measurements, possibly reflecting properties of deep processing. Such correlations with model quality and size are positive, making the results reported above more credible.
Generally, the conclusions we can draw from the above studies are somewhat vague. There are two reasons for this: (i) Past studies have relied on permissive (pairwise matching accuracy) and ambiguous (Pearson correlation) performance metrics; and (ii) past studies have relied on small-sized datasets. We believe that this calls for a meta-analysis of the above studies. To provide grounds for such a meta-analysis, we have in this section taken steps to compare the metrics used in these studies. We leave it for future work to explore various ways effect sizes can be computed across these studies.
§ DISCUSSION
Many studies, summarized above, aim to compare language model representations with neural response measurements using linear mapping models. Our main reason to focus on linear mapping models is that they quantify the degree of structural similarity (isomorphism).
Overall, results suggest that structural similarities between language models and neural responses exist. Furthermore, there is good evidence that alignment has correlated positively with model quality and model size, suggesting a certain level of convergence as language models improve.
What drives alignment?
Is alignment driven by deep processing characteristics or by shallow textual characteristics?
Classical candidates for shallow ones would be word length, frequency, regularity, and part of speech. <cit.>, for example, only controlled for part of speech.
Some authors have presented results to suggest that alignments are driven by syntactic or semantic factors <cit.>, whereas others have claimed some similarities reflect semantic phenomena <cit.>. Others suggest that alignments reflect deeper similarities between model objectives and predictive processing in human brains <cit.>, but see <cit.> for a critical discussion of such work.
Linguistically-transparent
models that allow for a principled decomposition of a model’s
components into smaller linguistically meaningful units and models that move towards possible neurobiological implementations of neural computation are likely to be key for answering this question <cit.>. Given the plethora of interpretability methods recently developed, however, we believe that even models which are not intrinsically interpretable can be
useful toward this goal.
Do some models align better? Most studies observe that better and larger, contextual models align better with neural responses <cit.>. Other improvements include
fine-tuning on specific tasks <cit.>.
<cit.> outline the impact of model training choices.
What metrics? The inconsistent use of performance metrics makes it hard to compare and interpret the results reported in the literature <cit.>. We have shown that some metrics are perhaps too permissive to detect structural similarities between language models and neural responses. We have argued that precision@k is more conservative than most other metrics. <cit.> have proposed using analogy scores.
In the limit (given sufficient analogies), perfect analogical accuracy implies isomorphism <cit.>. So do perfect precision@1 and perfect RSA scores. We therefore propose giving priority to these performance metrics, so as not to conflate shallow processing characteristics with deeper, more semantic properties.
Meta-analysis? Proper meta-analysis is currently hindered by the use of different metrics, but we have taken steps to relate these.
§ CONCLUSIONS
We surveyed work on linear mappings between neural response measurements and language model representations, with a focus on metrics. In particular, we surveyed a broad range of 30 studies spanning 10 datasets and 8 metrics. By examining the metrics and relating them to one another, we attempt to critically assess the accumulated evidence for structural similarity between neural responses and language model representations. We find that the similarities documented with existing models are limited to moderate and might be explained by shallow processing characteristics, since there is no standardised methodology for employing controls; however, positive correlations with model quality and size suggest that language models may exhibit deeper similarities with neural responses in years to come.
§ LIMITATIONS
This work focuses on a specific view of the whole neuro-computational modeling field. We exclude specific angles of research such as non-linear models <cit.> since we want to evaluate the accumulated evidence for structural similarity (isomorphism) between neural responses and language models. <cit.> mention several advantages of using linear mapping models: they are more interpretable and more biologically plausible. They also provide an insightful discussion on mapping model choice, emphasizing the importance of estimating models' complexity over categorizing them as purely linear or nonlinear.
Another limitation is that we do not include speech models <cit.> that have been used to map brain representations mostly due to coherency and page-limit restrictions.
The survey is also limited to fMRI and MEG data, rather than other modalities, for two main reasons: (i) fMRI and MEG are used in combination in many studies <cit.>,
and (ii) they offer high spatial resolution
and signal reliability (fMRI) and high temporal resolution (MEG), making them suitable for NLP <cit.>.
For a survey in encoding and decoding models in cognitive electrophysiology, see <cit.>.
§ ETHICS STATEMENT
The use of publicly available data in this survey ensures compliance with ethical guidelines while acknowledging the initial consent provided by the participants for data capture and sharing. Participants' consent is a crucial ethical consideration in the collection and sharing of fMRI and MEG data, and the preservation of legal and ethical rights should always be prioritized. By upholding ethical principles, researchers can responsibly contribute to the field of brain encoding and decoding, advancing our understanding of neural processes without compromising individual rights and privacy. Researchers should ensure secure storage, anonymization, and limited access to sensitive neuroimaging data, adhering to data protection regulations and guidelines.
Furthermore, it is essential to prioritize the dissemination of research findings in a responsible manner, with clear and accurate communication that respects the limits and uncertainties of scientific knowledge. Openness and transparency in reporting methods, results, and interpretations contribute to the overall integrity of the research field. Additionally, fostering a culture of collaboration, respect, and acknowledgment of the contributions of participants, colleagues, and the wider scientific community promotes ethical conduct and responsible research practices in brain encoding and decoding. By adhering to these ethical principles, researchers not only advance scientific knowledge but also build public trust, enhance the societal impact of their work, and ensure the long-term sustainability and progress of the field.
§ ACKNOWLEDGEMENTS
This work is supported by the Novo Nordisk Foundation. Antonia Karamolegkou was supported by the Onassis Foundation - Scholarship ID: F ZP 017-2/2022-2023’.
§ APPENDIX
§.§ Metric Correlations
We used the following synthetic experiment to estimate the correlations between some of the most widely used performance metrics:
(i) Generate n random numbers and sort them to produce the list 𝒜.
(ii) Sample n/10 items ℬ of 𝒜 at random.
(iii) For ϵ∈{1/100,…,100/100}, evaluate μ_b∈ℬ for ⟨ b,ϵ· b⟩ for all metrics.
In other words, for a noise level ϵ, we evaluate predicted images or word vectors ϵ· b against true images or word vectors b relative to a set of target images/vectors of 99 candidate words. This experiment is easily repeated to estimate reliable coefficients.
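A possible implementation of this recipe (our reconstruction of the steps above; the scalar distances and exact metric definitions are assumptions) is:
```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = np.sort(rng.normal(size=n))                     # step (i): n sorted random numbers
B = rng.choice(A, size=n // 10, replace=False)      # step (ii): 100 target items
m = len(B)

accs, ranks = [], []
for eps in np.linspace(0.01, 1.0, 100):             # step (iii): noise levels
    pred = eps * B                                  # predicted values for the pairs <b, eps*b>
    d = np.abs(pred[:, None] - B[None, :])          # distance of each prediction to every candidate
    true_d = np.diag(d)
    mask = ~np.eye(m, dtype=bool)                   # exclude the item itself as a competitor
    accs.append((true_d[:, None] < d)[mask].mean()) # pairwise matching accuracy
    rank = (d < true_d[:, None]).sum(1)             # candidates strictly closer than the truth
    ranks.append(np.mean(1 - rank / (m - 1)))       # percentile rank of the true target

print(np.corrcoef(accs, ranks)[0, 1])               # correlation between the two metrics
```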
§.§ Correlation - Explained Variance
In this study, we do not distinguish between studies using Pearson correlation and studies using explained variance.
Pearson Correlation can be defined as:
r = ∑_i=1^n (x_i - x)(y_i - y)/√(∑_i=1^n (x_i - x)^2)√(∑_i=1^n (y_i - y)^2)
where:
* r is the Pearson correlation coefficient between variables X and Y
* x_i and y_i are individual data points for variables X and Y
* x and y are the means of variables X and Y
* n is the sample size.
The proportion of variance explained by the correlation is represented by r^2. The correlation coefficient (r) measures the strength and direction of the linear relationship, while the coefficient of determination (R^2=r^2) represents the proportion of the variance explained by the independent variable(s) in the dependent variable.
|
http://arxiv.org/abs/2306.03073v1
|
20230605175007
|
Significance Bands for Local Projections
|
[
"Atsushi Inoue",
"Òscar Jordà",
"Guido M. Kuersteiner"
] |
econ.EM
|
[
"econ.EM",
"stat.AP",
"62P20, 62M10, 91B84"
] |
Significance Bands for Local Projections
The views expressed in this paper are the sole responsibility of the authors and do not necessarily reflect the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System.
Atsushi Inoue
Vanderbilt University (mailto:[email protected]@vanderbilt.edu).
Òscar Jordà
Federal Reserve Bank of San Francisco; and Department of Economics,
University of California, Davis (mailto:[email protected]@sf.frb.org; mailto:[email protected]@ucdavis.edu) and CEPR.
Guido M. Kuersteiner
Department of Economics,
University of Maryland (mailto:[email protected]@econ.umd.edu).
July 31, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
empty
An impulse response function describes the dynamic evolution of an outcome variable following a stimulus or treatment. A common hypothesis of interest is whether the treatment affects the outcome. We show that this hypothesis is best assessed using significance bands rather than relying on commonly displayed confidence bands. Under the null hypothesis, we show that significance bands are trivial to construct with standard statistical software using the LM principle, and should be reported as a matter of routine when displaying impulse responses graphically.
JEL classification codes: C11, C12, C22, C32, C44, E17.
Keywords: local projections, impulse response, instrumental variables, significance bands, wild block bootstrap.
arabic
§ INTRODUCTION
Practitioners routinely display impulse response estimates surrounded by confidence bands to graphically illustrate the uncertainty of the estimated response coefficients given the sample. These bands are often calculated using point-wise inference, such as when one inverts the standard Wald t-statistic. However, it is well-known that such bands—whether constructed using classical, Bayesian, bootstrap or other simulation methods—cannot be directly used to assess joint hypotheses of whether the coefficients are zero or not <cit.>. Nevertheless, they are commonly used as a back-of-the-envelope check of the statistical significance of the impulse response—the null hypothesis that an intervention or treatment generates no response in the outcome.
This is problematic. Impulse response coefficients are highly correlated. In a small sample, these coefficients may be individually imprecisely estimated while at the same time following a joint trajectory that is clearly different from zero in the statistical sense. The problem is similar to that in near collinear regression—where individual t-statistics are no different from 0, but an F-test clearly is. A typical example is presented in <ref>. The figure shows the response of 100 times the log of the U.S. consumer price index (CPI) to a Romer monetary shock <cit.>. Note that the (1 and 2 standard error) confidence bands displayed include 0 at all horizons and thus one is tempted to conclude that a monetary shock has no impact on inflation. However, the test of the joint null that all response coefficients are zero is easily rejected, with a p-value of 1.97e-17. We shall argue that there is a more formal and simpler way to graphically display the natural null hypothesis of whether an impulse response differs from zero by using significance bands and local projections <cit.>, or non-linear impulse responses as in <cit.>. For brevity we only consider the linear case in this paper.
Under the null hypothesis and using the Lagrange multiplier (LM) principle, inference is greatly simplified.[Imposing the null of a zero impulse response in a vector autoregression (VAR) is cumbersome as responses are highly nonlinear functions of VAR coefficients.] In general settings, we show that inference is independent of the impulse response horizon. Moreover, we provide analytic formulas that are trivial to implement with standard statistical software. In addition, we discuss bootstrap methods that make fewer assumptions on the data generating process. Monte Carlo evidence shows that these methods provide the desired level of probability coverage.
The intuition for our procedures is similar to that for the significance bands common in correlogram plots. In time series analysis, the asymptotic ± 1.96/√(n) bands displayed in correlograms are a special case of our significance bands. Under the null hypothesis of no serial correlation, the data are a white noise process. In small samples, its autocorrelations will not be exactly zero, but will attain values inside the significance bands if the null is true. A correlogram is, in fact, an impulse response for an AR(1) model. Whenever an autocorrelation surpasses the ± 1.96/√(n) barrier, that coefficient can be deemed to be different from zero, and one or more rejections would be enough to reject the white noise null.
Should a practitioner then display confidence or significance bands? We argue for both. Each serves a different purpose. Thought of as the interval where the most probable values of the response are to be found, it is natural to plot confidence bands at one standard deviation values, as is common practice, since this represents a natural compromise between probability coverage and interval width.
In contrast, significance bands should be displayed at conventional probability levels, that is at a 90% or 95% coverage. The reason is that the significance band is being used to evaluate a scientific hypothesis that is central to almost every empirical analysis: does an intervention/treatment generate a statistically significant response/effect? The significance band is a visualization of this hypothesis test.
§ THE BASIC SET UP AND INTUITION
Suppose one is interested in estimating the following impulse response function:
ℛ_y(h) ≡ E(y_t+h|s_t = s_0 + δ; x_t) - E(y_t+h|s_t = s_0, x_t) for h = 0, 1, …, H-1
where s is the impulse, intervention, or treatment variable, δ is the dose (how big the intervention is), s_0 is the initial value from which the effect of the treatment is being evaluated (in a linear model this will not matter, of course), and x_t is a vector of exogenous and pre-determined variables, including the constant, time trends, and lags of the outcome and intervention variables. The variable y is the outcome variable of interest.
Assume the researcher approximates <ref> using linear local projections, the most common application seen in the literature. Further, to make the notation more transparent and to take advantage of our linearity assumption, we can appeal to the Frisch-Waugh-Lovell theorem so that one can think of y_t+h and s_t as having been previously orthogonalized with respect to the rich set of controls in x_t. Hence y_t and s_t also have a zero mean. Moreover and for later use, we also assume that an instrument z_t for s_t is available and has been previously orthogonalized with respect to x_t as well. Thus, from this point forward, the notation can be interpreted to refer to these orthogonalized variables and we will not indicate so explicitly to keep the notational burden at a minimum.
Given this preliminary discussion, the local projections estimator of <ref> can be obtained from the instrumental variables regression:
y_t+h = s_t β_h + u_t+h for h = 0, 1, …, H-1; t = 1, …, T
We assume that z_t meets the usual conditions for relevance, lead-lag exogeneity <cit.>, and the exclusion restriction. That is:
* Relevance: E(s_t z_t) ≠ 0.
* Lead-lag exogeneity: E(u_t+h z_t) = 0 ∀ h.
* Exclusion restriction: E(y_t+h z_t|s_t) = 0.
Note that depending on the setting, z_t may include s_t itself, such as when s_t is an observable shock, in which case the discussion returns to a more traditional OLS setting, or when s_t is, conditional on x_t, sequentially exogenous. This would be the case in a recursive identification scheme. We further assume that y_t, s_t, and z_t are covariance stationary. This assumption is not necessary to ensure consistency of the local projection, but it makes deriving our inferential procedures and the presentation in this section straightforward.
Based on this simple set up, the instrumental variable estimator for β_h can be written as:
√(T-h) (β̂_h - β_h) = (T-h)^-1/2∑_1^n z_t y_t+h/(T-h)^-1∑_1^n z_t s_t for h = 0, 1, …, H-1,
where we note that we will evaluate the statistic under the null H_0:β_h = 0. Under standard regularity conditions (made more precise below) and the instrumental variable assumptions for local projections, it is easy to see that:
1/T-h∑_1^n z_t s_t p→ E(z_t s_t) ≡γ_zs
Next, consider the numerator in <ref> evaluated at the null H_0: β_h = 0:
1/(T-h)^1/2∑_1^n z_t y_t+h d→ N(0, ω)
where ω is given by:
ω = Var( 1/(T-h)^1/2∑_1^n z_t y_t+h) ≈∑_j=-∞^∞ E(z_t y_t+h z_t-j y_t+h - j)
= ∑_j=-∞^∞ E(z_t z_t-j) E(y_t+h y_t+h-j)
=∑_j=-∞^∞γ_z,jγ_y,j
where the second equality follows from the lead-lag exogeneity assumption and the null hypothesis that β_h = 0 for h = 0,1, , H-1. We define γ_z,j and γ_y,j as the j^th autocovariances of z and y respectively. Importantly, note that ω is not a function of the horizon h.
Putting things back together, we can write <ref> under the null hypothesis as:
√(T-h) (β̂_h - 0) d→ N(0, σ^2); σ^2 = ∑_j=-∞^∞γ_z,jγ_y,j/γ_zs^2 = ω/γ_zs^2; ∀ h
From <ref> it is easy to derive a 1-α percent band around the zero null so that:
P[ζ_α/2 σ/√(T-h) < β̂_h < ζ_(1-α/2) σ/√(T-h)] =1 - α
where ζ_α/2 is the critical value of a standard normal variable at α/2 and, for a standard normal, ζ_1 - α/2 = -ζ_α/2, as is well known. To construct feasible significance bands we need to replace σ with an estimate. The LM principle requires that σ be estimated using the conventional formula for HAC robust standard errors for the just-identified two-stage least squares estimator, but evaluated at β_h = 0. This is accomplished by estimating ω with the long-run variance of η_t = z_t y_t+h, which is equal to s_η^2 = ∑_j=-∞^∞ E(η_t η_t+j).
When plotting a significance band of an impulse response up to H-1 periods, we are essentially conducting a joint hypothesis test. Intuitively, the more horizons considered, the more likely it is to spuriously reject the null when the null is true in a finite sample. A simple way to address this issue is with a Bonferroni adjustment as proposed in <cit.> so that the significance bands for each β̂_h become:
[ ζ_α/2Hσ/√(T-h), ζ_1 -α/2Hσ/√(T-h)].
The joint probability that the estimated impulse response lies within the confidence band is given by:
P ( ⋂_h=0^H-1{ζ_α/2H σ/√(T-h) < β̂_h < ζ_(1-α/2H) σ/√(T-h)}) ≥ 1 - α
where the inequality holds in large samples and when the null hypothesis of a zero response is true. Similarly, the test of the joint hypothesis that all response coefficients are zero rejects when:
β̂_h ∉[ ζ_α/2Hσ/√(T-h), ζ_1 -α/2Hσ/√(T-h)]
for at least one h. By the same argument, it follows that the size of such a test is not more than α in large samples.
A simple example provides further intuition and a connection to well-known results. In the special case where z = s, and y and s are serially uncorrelated, this expression simplifies even further to:
σ^2 = γ_y,0/γ_s,0
Thus, when y = s = z and y is white noise, and hence γ_y,0 = γ_s,0 so that σ^2 = 1, the local projection estimator is simply an estimator of the autocorrelation function. Hence, applying the same derivations as in <ref>, it is easy to see that one recovers the well-known[Not Bartlett corrected.] bands for the autocorrelogram of y. Specifically, focus on h = 1 in the special case that y is a white noise but one estimates an AR(1) model:
√(n) (ρ̂- 0) d→ N(0, 1).
This is the well known case where the 95% asymptotic significance bands in a correlogram are calculated as ± 1.96 × 1/√(n) and provides a nice window into our proposed procedures. Importantly, notice that the bands do not depend on the horizon (in fact, they also do not depend on the variance in this special case). Whenever an autocorrelation coefficient exceeds the band, the interpretation is that said coefficient can be deemed to be different from zero. This, of course, means that the hypothesis that the impulse/treatment has no effect on the outcome can be rejected.
§.§ Practical implementation
Constructing significance bands in practice based on the results from the previous section is straightforward and can be implemented using standard statistical software. The online appendix contains a STATA example to illustrate this point and corresponds to the figures displayed in the paper. The basic steps can be summarized as follows:
Significance bands using asymptotic approximations
* Calculate the sample average of the product s_t z_t. Call this γ̂_sz.
* Construct the auxiliary variable η_t = y_t z_t and regress η_t on a constant. The Newey-West estimate of the standard error of the intercept coefficient is an estimate of s_η̂.
* An estimate of σ/√(T-h), call it ŝ_β_h, is therefore:
ŝ_β_h = ŝ_η̂/γ̂_sz
* Construct the significance bands as:
[ζ_α/2Hŝ_β_h, ζ_1 - α/2Hŝ_β_h]
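For readers working outside STATA, the same four steps can be sketched in a few lines of Python. The snippet below is only an illustration of the asymptotic procedure, assuming y, s, and z are NumPy arrays holding the (already orthogonalized) outcome, treatment, and instrument, and using statsmodels for the Newey-West (HAC) standard error; the variable names and the lag choice are illustrative rather than taken from the paper's code.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    def significance_band(y, s, z, H, alpha=0.05, hac_lags=8):
        """Half-widths of the Bonferroni-adjusted significance band for h = 0..H-1."""
        T = len(y)
        gamma_sz = np.mean(s * z)                      # step 1: sample average of s_t z_t
        half_widths = []
        for h in range(H):
            eta = y[h:] * z[:T - h]                    # step 2: eta_t = y_{t+h} z_t
            ols = sm.OLS(eta, np.ones(len(eta))).fit(  # regress eta on a constant
                cov_type="HAC", cov_kwds={"maxlags": hac_lags})
            s_eta = ols.bse[0]                         # Newey-West s.e. of the intercept
            s_beta = s_eta / gamma_sz                  # step 3: estimate of sigma / sqrt(T-h)
            zcrit = norm.ppf(1 - alpha / (2 * H))      # step 4: Bonferroni-adjusted critical value
            half_widths.append(zcrit * s_beta)
        return np.array(half_widths)                   # the band is [-hw, +hw] around zero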
A bootstrap procedure is equally easy to construct. Note that we do not take a position on the data generating process (DGP). Therefore, we apply the bootstrap directly to step 2 of the previous construction of the significance band. Because of the time series dependence and the possible existence of heteroscedasticity, we will use a wild-block bootstrap <cit.>. The online appendix provides the STATA implementation, which only requires a few lines of code. Thus, the entire procedure can be described as follows:
Significance bands using the Wild-Block Bootstrap
* Calculate the sample average of s_t z_t. Call this γ̂_sz.
* Construct the auxiliary variable η_t = y_t z_t and regress η_t on a constant. The Wild Block bootstrap estimate of the standard error of the intercept coefficient is an estimate of s_η̂.
* An estimate of σ/√(T-h), call it ŝ^b_β_h, is therefore:
ŝ^b_β_h = ŝ^b_η̂/γ̂_sz
* Construct the significance bands as:
[ζ_α/2Hŝ^b_β_h, ζ_1 - α/2Hŝ^b_β_h]
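The bootstrap version only changes how ŝ_η̂ is obtained. The sketch below is one possible implementation of the wild block bootstrap step, using one Rademacher weight per block; the block length and the weight distribution are implementation choices and need not match the paper's STATA code exactly.

    import numpy as np

    def wild_block_se(eta, block_size=8, n_boot=1000, seed=None):
        """Wild block bootstrap estimate of the standard error of the mean of eta."""
        rng = np.random.default_rng(seed)
        n = len(eta)
        resid = eta - eta.mean()                       # residuals from the constant-only regression
        n_blocks = int(np.ceil(n / block_size))
        boot_means = np.empty(n_boot)
        for b in range(n_boot):
            # one Rademacher weight per block, repeated within the block
            w = rng.choice([-1.0, 1.0], size=n_blocks).repeat(block_size)[:n]
            boot_means[b] = np.mean(eta.mean() + resid * w)
        return boot_means.std(ddof=1)                  # bootstrap s.e. of the intercept

Replacing ŝ_η̂ in step 2 of the asymptotic procedure with this estimate yields ŝ^b_β_h = ŝ^b_η̂/γ̂_sz and hence the bootstrap band.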
Using these procedures, we can now revisit <ref>. <ref> presents the original impulse response figure but with significance bands constructed using the asymptotic approximation (in blue) and using the bootstrap (in red). As the figure shows, the results from using either procedure are virtually identical. Based on the significance bands displayed in the figure, we would conclude that there is essentially no response of inflation to monetary policy for the first year and a half, but thereafter, there is ample evidence that the response is non-zero, consistent with the joint significance test p-value of 1.97e-17. The fact that the significance band is tighter than the confidence band is specific to this example and not a general feature of the relationship between significance and confidence bands.
§ MONTE CARLO EVIDENCE
This section presents a couple of simple experiments in graphical form to assess the calculation of significance bands using both the asymptotic approximation and the wild block bootstrap procedures discussed in the previous section. The data are generated as follows:
y_t = β s_t + 0.75 y_t-1 + u_yt
s_t = 0.5 s_t-1 -0.25 y_t-1 + z_t + u_st
z_t = u_zt, where u_yt, u_st, u_zt ∼ N(0, 1); β ∈ {0, 0.25, 0.50, 0.75}
This simple system encapsulates several features. First, the treatment variable, s_t, affects the outcome, y_t, contemporaneously. The outcome is itself serially correlated with a coefficient 0.75. The idea is to have internal propagation dynamics. Next, the intervention responds to feedback from the value of the outcome in the previous period, but also has some internal propagation dynamics. In addition, movements in the intervention are caused by the exogenous variable z_t, which will act as our instrumental variable. Finally, the coefficient β, which captures the effect of the treatment on the outcome, has values between 0 and 0.75. When β = 0 we have the null model with which to assess the size of the test. Increasing the value of β allows us to assess the power of the significance bands.
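For concreteness, this data-generating process can be simulated with a short Python routine; the burn-in handling and the random seeding below are illustrative choices rather than a description of the authors' code.

    import numpy as np

    def simulate_dgp(T, beta, burn_in=500, seed=None):
        """Simulate the Monte Carlo DGP for a given treatment effect beta."""
        rng = np.random.default_rng(seed)
        n = T + burn_in
        u_y, u_s, u_z = rng.standard_normal((3, n))
        y, s, z = np.zeros(n), np.zeros(n), u_z        # z_t is purely exogenous
        for t in range(1, n):
            s[t] = 0.5 * s[t - 1] - 0.25 * y[t - 1] + z[t] + u_s[t]
            y[t] = beta * s[t] + 0.75 * y[t - 1] + u_y[t]
        return y[burn_in:], s[burn_in:], z[burn_in:]   # discard the burn-in observations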
We generate samples of 100 and 500 observations, with 500 burn-in observations that are discarded to avoid initialization problems. For each sample size and for the different values of β we generate 1,000 Monte Carlo replications. The implementation of the Wild Block bootstrap is based on 1,000 bootstrap replications as well. For the Newey-West step as well as for the block size in the bootstrap, we use 8 lags. <ref> displays the results for sample sizes of 100 and 500 observations.
The figure summarizes quite a bit of information. The shaded bands around the mean estimate of the impulse response showcase the 25^th and the 975^th largest values for each coefficient estimate in the Monte Carlo simulation. The dashed lines correspond to the significance bands. Both Newey-West and the bootstrap procedures (using 8 lags) generate nearly indistinguishable values so the differences cannot be seen with the naked eye. For each Monte Carlo exercise, we construct rejection rates for each type of band constructed. The rate is calculated as the share of replications where one or more impulse response coefficients exceed the significance bands.
Several results deserve comment. First, consider the size of the test. We have chosen a rather conservative strategy with a window of size 8 both for Newey-West and for the block size in the implementation of the bootstrap. As a result, with a small sample of 100 observations, the size is about 10% instead of the nominal 5%, though with 500 observations the size is close to 4%. However, even with this conservative choice, the power of the test is respectable with a sample size of 100, improving from about 25% when β = 0.25 to about 95% when β = 0.75. These numbers jump with 500 observations to about 95% for β = 0.25 and 100% even for β = 0.5.
§ CONCLUSION
Significance bands should be displayed alongside confidence bands for local projections as a matter of routine. While confidence bands inform the reader about the estimation uncertainty of each coefficient, significance bands inform the reader about the significance of the impulse response itself. Formal tests of significance can and should be calculated, but these tests require estimation of local projections as a system. There are many settings where this is impractical. However, significance bands can be constructed with univariate regression using standard statistical software.
|
http://arxiv.org/abs/2306.09266v1
|
20230615163951
|
A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception
|
[
"Walter Zimmer",
"Christian Creß",
"Huu Tung Nguyen",
"Alois C. Knoll"
] |
cs.CV
|
[
"cs.CV"
] |
A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception
Walter Zimmer, Christian Creß, Huu Tung Nguyen, Alois C. Knoll
====================================================================================
Intelligent Transportation Systems (ITS) allow a drastic expansion of the visibility range and decrease occlusions for autonomous driving. To obtain accurate detections, detailed labeled sensor data for training is required. Unfortunately, high-quality 3D labels of LiDAR point clouds from the infrastructure perspective of an intersection are still rare. Therefore, we provide the A9 Intersection Dataset, which consists of labeled LiDAR point clouds and synchronized camera images. Here, we recorded the sensor output from two roadside cameras and LiDARs mounted on intersection gantry bridges. The point clouds were labeled in 3D by experienced annotators. Furthermore, we provide calibration data between all sensors, which allow the projection of the 3D labels into the camera images and an accurate data fusion. Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes. With ten object classes, it has a high diversity of road users in complex driving maneuvers, such as left and right turns, overtaking, and U-turns. In experiments, we provided multiple baselines for the perception tasks. Overall, our dataset is a valuable contribution to the scientific community to perform complex 3D camera-LiDAR roadside perception tasks. Find data, code, and more information at https://a9-dataset.comhttps://a9-dataset.com.
Dataset, 3D Perception, Camera, LiDAR, Intelligent Transportation Systems, Autonomous Driving
§ INTRODUCTION
The roadside deployment of high-tech sensors to detect road traffic participants offers significant added value for intelligent and autonomous driving. This technology allows the vehicle to react to events and situations that are not covered by the vehicle's internal sensor range. Thus, the advantage is the drastic expansion of the field of view and the reduction of occlusions. For this reason, we can observe a continuous increase in Intelligent Transportation Systems (ITS) worldwide. It is noticeable that cameras and increasingly LiDARs are used to create a live digital twin of road traffic <cit.>. To obtain accurate detections with such sensor systems, labeled sensor data is required for training.
Numerous datasets in the field of intelligent and autonomous driving have already been created. Datasets like <cit.> are taken from the vehicle perspective. In contrast, <cit.> are recorded from a very steep elevated view from a drone or a high building, so they are more suitable for trajectory prediction and tracking tasks. They are less suitable for 3D object detection because vehicles are far away and are only observed from above. Recently, a few datasets <cit.> have been acquired from a roadside perspective and are thus suitable for improving perception algorithms for ITS. However, some datasets have deficiencies in their labeling quality, which harm the training of the algorithms (e.g., censored image areas with filled rectangles), or they lack certain vehicle classes (e.g., missing trucks and buses), or the datasets are too small in terms of 3D box labels and attributes.
According to the work mentioned, it can be recognized that high-quality 3D box labels of LiDAR point clouds from the roadside perspective with a wide diversity of traffic participants and scenarios are still rare. Therefore, our A9 Intersection (A9-I) Dataset provides LiDAR point clouds and camera images from a road intersection. The 4.8k labeled point cloud frames, which were labeled by experts, contain complex driving maneuvers such as left and right turns, overtaking maneuvers, and U-turns. With its ten object classes, our dataset has a high variety of road users, including vulnerable road users. Furthermore, we provide synchronized camera images and the extrinsic calibration data between LiDARs and the cameras. These matrices allow the projection of the 3D box labels to the camera images. All in all, our A9-I offers synchronized 4.8k images and 4.8k point clouds with 57.4k 3D box labels with track IDs that were manually labeled. In this work, we show additional comprehensive statistics and the effectiveness of our dataset. Above and beyond, we would like to emphasize that A9-I is an extension of our previous debut, the A9 Dataset <cit.>, which covers highway traffic scenarios. Thus, we extend the existing A9 Dataset with additional traffic scenarios on a crowded intersection and scale it up from 15k labeled 3D boxes to 57.4k, including vulnerable road users. In evaluation experiments, we provide multiple baselines for the 3D perception task of 3D object detection with a monocular camera, a LiDAR sensor, and a multi-modal camera-LiDAR setup. Last but not least, we offer our dataset in OpenLABEL format under the Creative Commons License CC BY-NC-ND 4.0 so that it can be widely used by the scientific research community.
In summary, our contributions are:
* A detailed and diverse dataset of 4.8k camera images as well as 4.8k labeled LiDAR point cloud frames. Thereby, we used two synchronized cameras and LiDARS, which cover an intersection from an elevated view of an ITS.
* Extrinsic calibration data between cameras and LiDARs allow an early and late fusion of objects.
* We provide an extensive A9-Devkit to load, transform, split, evaluate and visualize the data.
* 57.4k high-quality manually labeled 3D boxes with 273k attributes for both LiDARs, resulting in 38k 3D box labels after data fusion.
* Comprehensive statistics and analysis of the labels, number of points, occlusions, and tracks on the dataset, and the distribution of ten different object classes of road traffic.
* Multiple baselines for the 3D perception task of 3D object detection with a monocular camera, a LiDAR sensor, and a multi-modal camera-LiDAR setup.
§ RELATED WORK
As part of the development in the field of autonomous driving and intelligent vehicles, the number of datasets is increasing rapidly. The most popular datasets in this field are KITTI <cit.>, nuScenes <cit.>, Cityscapes <cit.>, and Waymo Open dataset <cit.>. Except for the Cityscapes, the datasets provide labeled camera images and LiDAR point clouds. These datasets are used to train perception algorithms. Unfortunately, these valuable datasets only contain data from a vehicle's perspective. Therefore, this ego perspective is suboptimal for transfer learning. Networks trained on a dataset from the vehicle's perspective do not perform well on data obtained, e.g. from a roadside perspective.
Another sensor perspective is, for example, the elevated view. With this, the scene can ideally be viewed without occlusions. To achieve a high level of perception for this elevated view, training with appropriate datasets is necessary. The focus of the drone dataset family highD <cit.>, inD <cit.>, rounD <cit.>, and exiD <cit.> is the trajectory of road users in the city as well as in the freeway area. The datasets were recorded by a drone and provide a vast top-down view of the scene. The main limitation is the limited recording time in challenging weather conditions. To overcome this drone-related issue, the MONA <cit.> dataset provides data that was created with a camera mounted on a building. On the one hand, these datasets are ideal for trajectory research, because they were recorded from a very steep angle to the road. On the other hand, they are less suitable for 3D object detection, because of the missing 3D dimensions.
A dataset, which contains data from an elevated view of an ITS with an angle that is not too steep, is the DAIR-V2X <cit.>. The main focus of DAIR-V2X is the support of 3D object detection tasks. It consists of 71k labeled camera images and LiDAR point clouds, 40% of which are from roadside infrastructure. For this purpose, the dataset covers city roads, highways, and intersections in different weather and lighting conditions. Unfortunately, no exact statistics for this variation or exact sensor specifications are available. As a last point, the quality of the data is further compromised by filled rectangles over privacy-sensitive image areas (e.g., license plates), which can lead to problems during training for object detection. Another dataset from the roadside infrastructure perspective with a camera and LiDAR combination is the IPS300+ <cit.>. The dataset includes 14k data frames, with an average of 319 labels per frame. They used 1 LiDAR and 2 cameras as a stereo setup with a lens focal length of 4.57 mm. The dataset was recorded several times a day at one intersection and provides seven different object categories: car, cyclist, pedestrian, tricycle, bus, truck, and engineer car. According to the statistics, unfortunately, there is less representation in the classes of trucks and buses, so that the recognition of these classes will probably be poor. The Roadside Perception 3D dataset (Rope3D) <cit.> provides 50k images including 3D box labels from a monocular infrastructure camera at an intersection. The missing 3D information of the detected objects in the 2D camera image was added with a LiDAR, which was mounted on a vehicle. In total, the images contain over 1.5M labeled 3D boxes, 670k 2D bounding boxes, in various scenes at different times (daytime, night, dawn/dusk), different weather conditions (sunny, cloudy, rainy), and different traffic densities. Furthermore, the objects are divided into 13 classes with several attributes. Another roadside infrastructure dataset is LUMPI <cit.>, which was recorded at an intersection in Hanover, Germany. For this purpose, a total of 200k images as well as 90k point clouds were acquired. Three different cameras and five different LiDARs provide several fields of view on the scene. Here, different sensor configurations were used for the recordings. The sensor perspective is from a vehicle as well as from the roadside infrastructure. Unfortunately, the number of labels and other detailed information about the labeled objects were not provided. A further contribution in the field of roadside infrastructure data for training perception algorithms is the A9-Dataset <cit.>. It is our preliminary work and includes 642 camera images and 456 LiDAR point clouds, i.e., roughly 1k frames in total. The charm is that most camera images contain the same traffic scene from four different viewpoints. Here, we labeled 14k 3D boxes. Moreover, the frames contain 13.17 3D box labels on average. For this purpose, we supported the common classes of car, trailer, truck, van, pedestrian, bicycle, bus, and motorcycle in the domain of a highway. The main limitations in our previous work were firstly the small number of labeled LiDAR point clouds and secondly that we only had a simple highway scenario. For this reason, we present an extension to our dataset that addresses these weaknesses.
§ A9 INTERSECTION DATASET
In this section, we present the A9 Intersection Dataset. It is an extension of our previous work, the A9-Dataset <cit.>, which covers the highway domain. We describe the sensor setup at our intersection, the data selection and annotation process, and the data structure used. Last, this section contains comprehensive statistics and an introduction to our A9-Devkit.
§.§ Sensor Setup
The A9-I Dataset is recorded on the ITS testbed, which was established as part of the Providentia++ project <cit.>. Here, roadside sensors are set up on a gantry located at the intersection of Schleißheimer Straße (B471) and Zeppelinstraße in Garching near Munich, Germany. For this dataset, we use two cameras and two LiDARs with the following specifications:
* Camera: Basler ace acA1920-50gc, 1920×1200, Sony IMX174, glo. shutter, color, GigE with 8 mm lenses.
* LiDAR: Ouster OS1-64 (gen. 2), 64 vert. layers, 360 ° FOV, below horizon configuration, 120 m range, 1.5-10 cm accuracy.
The sensors are mounted side by side on the gantry, as shown in Figure <ref>. Here, the sensors detect the traffic in the center of the intersection from a height of 7 m. It is worth mentioning that the cameras and LiDARs are spatiotemporally calibrated. For the temporal calibration, we synchronized the sensors with a Network Time Protocol (NTP) time server; for the extrinsic calibration between the cameras and the LiDARs, we used a targetless extrinsic calibration method, which was inspired by <cit.>.
§.§ Data Selection and Annotation
We select the data based on interesting and challenging traffic scenarios like left, right, and U-turns, overtaking maneuvers, tail-gate events, and lane merge scenarios. Furthermore, we take highly diverse and dense traffic situations into account, so that we get an average of over 15 road users per frame. To cover diverse weather and light conditions in our A9-I Dataset, it consists of 25% nighttime data including heavy rain, and 75% daytime data with sunny and cloudy weather conditions. This enables a good performance of the detector even in challenging weather conditions.
We record camera data at 25 Hz and LiDAR data at 10 Hz into rosbag files. Then we extract the raw data and synchronize the camera and LiDAR frames at 10 Hz based on timestamps. Based on the raw data of the LiDAR point clouds, 3D box labels were created by experts. As all four sensors are cross-calibrated, we can also use these 3D box labels from the point cloud to evaluate monocular 3D object detection algorithms. Since the labeling quality of the test sequence is very important, it was reviewed by us multiple times. Here, we improved the labeling quality by using our preliminary proAnno labeling toolbox <cit.>.
§.§ Data Structure
Our dataset is divided into subsets S1 through S4, which contain continuous camera and labeled LiDAR recordings. Sets S1 and S2 are each 30 seconds long and demonstrate a daytime scenario at dusk. A 120-second long sequence during daytime and sunshine can be found in sequence S3. Sequence S4 contains a 30-second data recording at night and in heavy rain. The file structure is given below:
a9-intersection-dataset
    a9_dataset_r02_s01
        pointclouds
            s110lidarousternorth
                timestampsensorid.pcd
            s110lidaroustersouth
                timestampsensorid.pcd
        images
            s110camerabaslersouth18mm
                timestampsensorid.jpg
            s110camerabaslersouth28mm
                timestampsensorid.jpg
        labels
            s110lidarousternorth
                timestampsensorid.json
            s110lidaroustersouth
                timestampsensorid.json
    a9_dataset_r02_s02
    a9_dataset_r02_s03
    a9_dataset_r02_s04
All labeled data is in OpenLABEL format <cit.>. OpenLABEL files are stored in .json format. One file contains all labeled objects of a single frame with 32-bit long unique identifiers (UUIDs), the position, dimensions, rotation, and the attributes like the occlusion level, the body color, the number of trailers, the specific object type, and the number of 3D points. Furthermore, a frame contains properties like the exact epoch timestamp, the weather type, the time of day, and the corresponding image and point cloud file names. In OpenLABEL the label files also contain the calibration data – intrinsic and extrinsic information.
We suggest a split into training (80%), validation (10%), and test set (10%). The test set is made up of a continuous sequence with track IDs, as well as randomly sampled frames from four different scenarios and daytimes. We sample frames using stratified sampling to create a balanced dataset among sensor types, weather scenarios, and day times. To prevent overfitting, we do not publish our test set labels.
§.§ Data Statistics
In total, we provide 4,800 labeled LiDAR point cloud frames sampled from four different sequences. Here, 57,406 3D objects (506 unique objects) were annotated with 273,861 object attributes. After fusing the labels from both LiDARs we get 38,045 registered 3D objects (482 unique objects) with 171,045 attributes. The following statistics refer to the fusion result with the complete dataset inclusive of training, validation, and test set. In <Ref>, we can see an overview of the registered 3D box labels.
A deep dive into the distribution of the labels of our A9-I Dataset is provided in Figure <ref>. Here, the distribution of the ten object classes is shown. The vehicle class CAR is dominant, followed by the classes TRUCK, TRAILER, VAN, and PEDESTRIAN, which occur in roughly the same order of magnitude. The classes MOTORCYCLE, BUS, BICYCLE, EMERGENCY VEHICLES, and OTHER are present in a slightly smaller number. Since we have annotated the occlusion level for each 3D box label, we come to the result that 78.2% were classified as NOT_OCCLUDED, 16.1% as PARTIALLY_OCCLUDED, 0.8% as MOSTLY_OCCLUDED, and 4.9% were classified as UNKNOWN (not labeled). It can also be seen that most of the labeled frames contain between 15 and 20 labeled 3D boxes. In 100 frames, there are even between 45 and 50 labeled 3D objects. Furthermore, the A9-I includes significantly more variations in the maneuvers of road users at the intersection, as compared to our previous work <cit.>. We can see three peaks where vehicles are moving in the south, north, and east directions of the intersection. Vehicles moving between south and north are indicated by the peaks around 90 and 270 degrees. The smaller peaks adjacent to the main peaks correspond to turning maneuvers, such as right or left turns.
The labels are based on the LiDAR point clouds. In Figure <ref>, we performed a detailed analysis of the points concerning the labeled classes, of the individual distances of the points concerning the labeled classes, and of the distribution of the points. Firstly, as expected, the correlation between the average number of points and the average size of the class can be observed. Here, the TRAILER class, which has the highest height, also has the highest average number of points, followed by the BUS class, which is the longest. Conversely, the PEDESTRIAN class, which has the smallest size, has the lowest average number of points. Second, in general, due to the elevated position of the LiDARs, the field of view only starts to have an effect from about 10 m onwards. Most classes have the highest number of points at a distance between 10 m to 30 m. Interestingly, the class TRAILER has the highest average number of points at a distance between 15 m and 20 m. With increasing distance, the average number of points is naturally declining. Lastly, all 3D box labels have a total of 2,797,112 points. According to the distribution of the number of points per 3D box label, most of the boxes have a maximum of about 50 points. However, the 3D box labels have on average 73.52 points per object.
In addition to the statistics about the labels and the underlying point clouds, we also analysed the calculated tracks, see Figure <ref>. We were able to determine these trivially since the same tracking ID was selected for each consecutive frame when marking the 3D box labels. The average track length in our A9-I Dataset is 24.18 m. Here, the class BUS is very dominant with an average track length of 75 m. The reason for this is that, firstly, the buses are very visible and, secondly, they completely cross the intersection. All in all, the full dataset contains 506 unique objects (3D box labels) with a total track length of 12.23 km and a maximum track length of 162.87 m. Thus, our A9 Intersection Dataset can also be used to handle issues regarding tracking that are addressed by <cit.>.
§.§ A9-Devkit
To work with our A9-I Dataset, we provide the A9 Development Kit: https://github.com/providentia-project/a9-dev-kithttps://github.com/providentia-project/a9-dev-kit. It contains a dataset loader to load images, point clouds, labels, and calibration data. Furthermore, we provide a converter from OpenLABEL to multiple different dataset formats like KITTI, COCO, YOLO, and the other way round. We follow the .json-based OpenLABEL standard <cit.> from the ASAM organization for the label structure. Some pre-processing scripts transform and filter the raw point cloud .pcd ASCII data into binary data to reduce the file size and make it compatible with point cloud loaders. In addition, a point cloud registration module can merge multiple point clouds to increase the point density. Finally, we provide a data visualization module to project the point clouds and labels into the camera images.
§ EVALUATION
In our study, we conducted a comparative analysis of monocular camera and LiDAR 3D object detection with early and late fusion. In our first evaluation experiment, we used our MonoDet3D <cit.> 3D object detector that takes camera images as input. It transforms the 2D instance masks into 3D bottom contours by using extrinsic calibration data. Our augmented L-Shape-Fitting algorithm extracts the dimensions and calculates the rotation for each object. In our second experiment, we used PointPillars <cit.> and trained the model from scratch on all classes of our camera fields of view Camera_south1, Camera_south2, and full. In the last experiment, we evaluated our multi-modal InfraDet3D <cit.> detector, which incorporates a late fusion approach, leveraging the Hungarian algorithm to establish correspondences between detections obtained from the MonoDet3D and PointPillars baselines. For all these experiments, we provide post-processing scripts in our A9-Devkit for early data fusion and for cropping the point cloud labels to fit the mentioned field of view.
We evaluated each detector on three difficulty levels Easy, Moderate, and Hard, see <Ref>. The Hard category contains objects with a distance over 50 m, objects that are mostly occluded, or objects that have less than 20 points within the 3D box. Partially occluded objects with a distance of 40 to 50 m, and 20 to 50 points are part of the Moderate category. Lastly, the Easy category contains objects that are not occluded, less than 40 m away, and contain more than 50 points. As a quantitative metric, we used the mean Average Precision (mAP) to evaluate the performance. The overall mAP is the average of Easy, Moderate, and Hard.
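To make these definitions concrete, the assignment of a labeled object to a difficulty level can be expressed as a small helper function; the Python sketch below follows the thresholds stated above, and the exact logical combination of the Moderate conditions in the official evaluation split is an assumption on our part.

    def difficulty_level(distance_m, occlusion, num_points):
        """Assign Easy / Moderate / Hard following the thresholds described in the text."""
        if distance_m > 50 or occlusion == "MOSTLY_OCCLUDED" or num_points < 20:
            return "Hard"
        if occlusion == "PARTIALLY_OCCLUDED" or 40 <= distance_m <= 50 or 20 <= num_points <= 50:
            return "Moderate"
        # not occluded, closer than 40 m, and more than 50 points inside the 3D box
        return "Easy"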
The advantage of using a monocular setup is a better detection of small objects such as pedestrians. On the other hand, a LiDAR detector can detect objects during nighttime. The combination of LiDAR and the camera through late fusion techniques can significantly enhance the overall performance. In this work, we were able to confirm this assumption in our evaluation. We achieved the best detection results with the LiDAR_North modality and the InfraDet3D model in the Easy difficulty level. Interestingly, the early fusion approach with PointPillars consistently achieves the best performance in all subsets at the Moderate difficulty level. The better performance of PointPillars and InfraDet3D over MonoDet3D shows the strengths of the LiDAR sensor in comparison to a camera. Mostly, the late fusion of LiDAR and the camera provided better overall results than a single LiDAR detector. Moreover, the combination of early fusion between LiDAR sensors with camera sensors via late fusion, which combines the advantages of both sensor modalities, gives consistently robust results. A visual representation of the qualitative results is provided in Figure <ref>.
§ CONCLUSIONS
In this work we extended the A9 Dataset with labeled data of an intersection. We provided 3D box labels from elevated roadside sensors. Two synchronized cameras and LiDARs were used to record challenging traffic scenarios. Our data was labeled by experienced experts. As all sensors were calibrated to each other, we can use the 3D bounding box point cloud labels to perform monocular 3D object detection. In total, our dataset contains 4.8k RGB images and 4.8k LiDAR point cloud frames with 57.4k high-quality labeled 3D boxes, partitioned into ten object classes of traffic participants. We offered comprehensive statistics of the labels including their occlusion levels, the number of points grouped by class category and distance, and an extensive analysis of the labeled tracks. In our evaluation experiments, we provided three baselines for the perception task of 3D object detection: a camera, a LiDAR, and a multi-modal camera-LiDAR combination. With these experiments, we were able to show the potential of our dataset for 3D perception tasks.
For future work, we plan to create and publish more ground truth labels based on the presented camera images which can support more evaluation methods for our data fusion algorithm. Furthermore, the publication of further labeled sensor data with specific traffic scenarios, e.g. accidents, as well as the usage of other sensor modalities is also on our agenda.
§ ACKNOWLEDGMENT
This research was supported by the Federal Ministry of Education and Research in Germany within the project AUTOtech.agil, Grant Number: 01IS22088U. We thank Venkatnarayanan Lakshminarasimhan and Leah Strand for the collective work on the A9 Intersection (A9-I) Dataset.
|
http://arxiv.org/abs/2306.08260v1
|
20230614055017
|
Geometric and Dynamic Properties of Entangled Polymer Chains in Athermal Solvents: A Coarse-Grained Molecular Dynamics Study
|
[
"Jiayi Wang",
"Ping Gao"
] |
cond-mat.soft
|
[
"cond-mat.soft"
] |
[email protected]
Thrust of Advanced Materials, The Hong Kong University of Science and Technology (Guangzhou), Guangdong, China
Corresponding author
[email protected]
Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
Abstract: We used a coarse-grained model to study the geometric and dynamic properties of flexible entangled polymer chains dissolved in explicit athermal solvents. Our simulations successfully reproduced the geometrical properties, including the scaling relationships between the mean-square end-to-end distance <R_ee^2>, the chain entanglement length N_e, and the concentration Φ. Specifically, we find that <R_ee^2>∼ NΦ^-1/4 and
N_e = 30.01Φ^-5/4+31.23. Dynamically, our model confirmed that the ratio of the dynamic critical entanglement N_c and the geometric entanglement length N_e is constant, with N_c/N_e = 5∼ 6. To account for the local swelling effect for chains confined in athermal solvents, we treated the chains using the concept of blobs, where each blob occupies a volume Ω_b and has length g. Direct MD simulations and scaling analysis showed that g ∼Φ^-25/36, Ω_b∼Φ^-5/4. Using these together with the concentration-dependent packing length p ∼Φ^-5/12, we obtained a modified Lin-Noolandi ansatz for concentrated flexible polymer chains in athermal solvents: G ∼Φ/(N_e / g) Ω_b∼Φ^2.28. We demonstrate that this modified ansatz agrees well with our coarse-grained numerical simulations.
Geometric and Dynamic Properties of Entangled Polymer Chains in Athermal Solvents: A Coarse-Grained Molecular Dynamics Study
Ping Gao 0000-0003-0625-2391
July 31, 2023
============================================================================================================================
§ 1. INTRODUCTION
The dynamics of polymer systems are significantly affected by entanglement caused by topological constraints between chains <cit.>. Several models have been proposed to describe the dynamic properties of entangled polymer melts <cit.>. However, there is a need to improve our understanding of polymers dissolved in athermal solvents regarding geometric and dynamic aspects. This is because the entanglement length N_e and the local environment of the polymer chains will change due to the swelling effect caused by the athermal solvents, in contrast to the pure melt. These effects, including the evolution of the entanglements and the local swelling of polymer chains by athermal solvents, are not readily observable experimentally, necessitating numerical simulations.
Various theoretical models have been proposed to quantify polymer entanglements. Geometric analyses by de Gennes, Kavassalis, and Richard P. Wool resulted in the scaling expression N_e∼ N_c∼Φ^-5/4 <cit.>, where N_e denotes the geometric entanglement length, N_c represents the dynamical critical length of entanglement, and Φ is the polymer concentration. In terms of dynamics, the Rouse model predicts that the relaxation time of the entangled system is proportional to the square of chain length when the chain length is lower than N_c <cit.>. However, the Rouse model cannot be applied to solution systems because its free-draining assumption is not strictly satisfied. To address this issue, the Zimm model considers polymer chains and solvents as a whole <cit.>, and their motion satisfies the Stokes-Einstein relation <cit.>. The expression between relaxation time and chain length is obtained as τ∼ N^3v, where v is 0.588 in athermal solvents and N is the number of monomers in the polymer chain. When the chain exceeds N_c, the long-lived entanglement effect occurs because of the topological constraints between chains. Edwards and de Gennes introduced the concept of a confining tube <cit.>, where the motion of the polymer chain is analogous to a snake wriggling inside a narrow tube, and deduced τ∼ N^3. The relaxation time acts as a bridge that relates the elastic response to the viscous response of entangled polymers. When t<τ, the polymer exhibits an elastic response to external forces, acting like a cross-linked network, while for t>τ the entanglements slip, resulting in a viscous response. Regarding the elastic response of flexible polymer solutions, the Lin-Noolandi ansatz provides the relationship between the plateau modulus G, Φ, and N_e: G ∼ k_BTΦ/(N_eΩ_0), where k_B is the Boltzmann constant, T refers to temperature, and Ω_0 is the volume of a monomer <cit.>. This relationship has not been verified for flexible polymers in athermal solvents, where excluded volume effects may affect the chain stiffness.
In this study, we investigated the geometric and dynamic properties of flexible polymer chains dissolved in explicit athermal solvents using large-scale MD simulations. To model the local swelling effects in such systems, we introduced the concept of swelling blobs and derived scaling relationships for the entanglement length N_e, the packing length p, and the blob length g and volume Ω_b by combining MD simulation with the blob scaling concept. We modified the Lin-Noolandi ansatz to incorporate the swelling blob scaling relationships, resulting in a new scaling relationship: G ∼Φ/(N_e / g) Ω_b∼Φ^2.28. The modified scaling relationship closely matches that observed in experiments <cit.>.
§ 2. SIMULATION METHOD
We modelled polymer chains using the bead-spring model <cit.>, which assumes that the polymer chain is composed of N sequentially connected beads (monomers). To enhance the simulation efficiency for large systems, our model considers only the repulsive non-bonded energy between monomers and the bonding energy connecting two adjacent monomers. The chains are fully flexible, so the bending energy of two adjacent bonds is taken as zero.
The nonbonded interaction between monomers is described by the purely repulsive Weeks-Chandler-Andersen (WCA) potential <cit.>:
U_LJ(r) = 4ε_0[(σ/r)^12 - (σ/r)^6] + ε_0 for r ≤ 2^1/6 σ, and U_LJ(r) = 0 for r > 2^1/6 σ (1)
where r is the distance between two monomers, ε _0 is the interaction strength, σ is the diameter of monomer. At high temperatures, the presence of attractive interactions between monomers has little effect on the dynamical properties of the polymer melts. Therefore, the WCA potential saves a large amount of computational time by retaining only repulsive interactions and achieves high accuracy. The finitely extensible nonlinear elastic (FENE) potential is used to simulate the bonding energy <cit.>:
U_FENE(r) = -(k_FENE R_0^2 / 2) ln[1 - (r/R_0)^2] (2)
where k_FENE refers to the spring constant, and r and R_0 represent the bond length and the maximum bond length, respectively. The chains were modelled using a self-avoiding walk (SAW) algorithm, and our algorithm was modified so that each chain not only avoided itself but also avoided other chains.
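For reference, equations (1) and (2) can be written directly as functions. The Python snippet below is a plain restatement of the two potentials in reduced Lennard-Jones units (ε_0 = σ = 1 by default) for checking parameter choices; it is not the LAMMPS implementation used for the production runs.

    import numpy as np

    def wca(r, eps0=1.0, sigma=1.0):
        """Purely repulsive Weeks-Chandler-Andersen potential, equation (1)."""
        r = np.asarray(r, dtype=float)
        cutoff = 2.0 ** (1.0 / 6.0) * sigma
        u = 4.0 * eps0 * ((sigma / r) ** 12 - (sigma / r) ** 6) + eps0
        return np.where(r <= cutoff, u, 0.0)

    def fene(r, k=30.0, r0=1.5):
        """FENE bond potential, equation (2); only defined for r < R_0, diverging as r -> R_0."""
        r = np.asarray(r, dtype=float)
        return -0.5 * k * r0 ** 2 * np.log(1.0 - (r / r0) ** 2)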
The modeling process is given in Fig. 1(a)-(c). After the construction of the initial model by the SAW method, a soft-core potential was used to avoid the singularity in the energy minimization process, making the energy surface smoother. Following the treatment with the soft potential, the full force field was applied to the system for the final energy minimization and system relaxation. To describe the solution of fully flexible chains in athermal solvents, we modelled the solution by taking the analogy with the Flory lattice model, i.e., each solvent molecule is treated as a lattice point, and a polymer chain of length N is treated as a connection of N lattice points <cit.>. In other words, we modelled the solutions to satisfy the following conditions:
ε_MM = ε_MS = ε_SS = ε_0, σ_S = σ_M = σ (3)
where ε_MM, ε_MS and ε_SS refer to the interaction of monomer-monomer, monomer-solvent and solvent-solvent respectively, σ_S and σ_M describe the diameter of solvent beads and chain monomers. When equation (3) holds, the polymer chains can be fully swollen by the athermal solvents, and the chains in such system are expected to follow the scaling rule suggested by de Gennes <cit.>.
Here, m_monomer=1 g/mol, ε_0=1 kcal/mol and σ=5. As regards the FENE potential, we set k_FENE = 30 ε_0/σ^2 and R_0=1.5 σ <cit.>. Since the periodic boundary condition is introduced in three dimensions, large-scale models are required so that the size of the box is greater than the root mean end-to-end distance <R_ee> of the polymer chain. With such a setup, the conformation of each chain will not be affected by the box boundaries; each of our models contains 10^6 beads, and the total number of beads exceeds 10^7. The number density is set to 0.7/σ^3 and the temperature is 2.0 ε_0/k_B, where k_B is the Boltzmann constant. The Nosé-Hoover thermostat is used to control the temperature with a timestep of 1 fs in the NVT ensemble <cit.>, with simulation times up to 500 ns until the chains are fully relaxed. All simulations were carried out using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software <cit.>.
§ 3. RESULT AND DISCUSSION
§.§ A.Entanglement length and chain relaxation dynamics - Pure Melt System
Two length scales, a geometric entanglement length, N_e, and a dynamic critical entanglement length, N_c, are commonly used to describe entangled polymer melts <cit.>. The relationship between N_c and N_e has been studied by de Gennes, Kavassalis and Richard P. Wool, and a constant ratio between N_c and N_e has been estimated to be between 9/4 and 27/4 for an isotropic melt of flexible polymer chains <cit.>. In this work, we hope to establish this constant ratio through molecular dynamics simulations and verify whether it holds for fully flexible chains in the pure melt and in athermal solvents.
We generated models of monodisperse polymer melts with varying chain lengths, ranging from 100 to 1000 monomers per chain, to cover the unentangled to entangled range. To determine the entanglement length, we used the Z1 method developed by Martin Kröger <cit.>, which yielded a constant value of N_e≅58, consistent with the previous literature <cit.>. To calculate N_c, we computed the relaxation time for each model by analyzing the autocorrelation function of the end-to-end vector. The resulting relaxation time is plotted in Fig. 2, with the red and blue lines representing the unentangled and entangled states, respectively. Our analysis reveals a critical chain length N_c above which the slope of the logarithmic relaxation time versus the logarithmic chain length changes from 2.0∼2.3 to 3.0∼3.4. The critical length N_c was approximately 316 based on the figure, and the unentangled and entangled regions were consistent with the Rouse and tube models, respectively. Thus, the ratio of the critical dynamical entanglement length N_c to the geometric entanglement length N_e equals 5.45. This result lies in the range of 9/4 to 27/4 given by the geometric analysis of Wool <cit.>.
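As an illustration of how the relaxation time is extracted, the sketch below computes the normalized end-to-end vector autocorrelation from a trajectory and fits a single-exponential decay; the assumed array layout and the single-exponential form are simplifications of the actual analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def relaxation_time(ree, dt):
        """ree: end-to-end vectors of shape (n_frames, n_chains, 3); dt: time between frames."""
        n_frames = ree.shape[0]
        lags = np.arange(n_frames // 2)
        acf = np.array([
            np.mean(np.sum(ree[lag:] * ree[:n_frames - lag], axis=-1))
            for lag in lags
        ])
        acf /= acf[0]                                   # normalize so that C(0) = 1
        # fit C(t) ~ exp(-t / tau) to extract the relaxation time tau
        popt, _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                            lags * dt, acf, p0=[0.5 * lags[-1] * dt])
        return popt[0]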
§.§ B.Entanglement length and chain relaxation dynamics - Solution System
To ensure the accuracy of our simulation model for fully flexible polymer chains in athermal solvents, we first validated our coarse-grained model by simulating the scaling laws for the mean square end-to-end distance <R_ee^2> and the entanglement length N_e at varying polymer volume fractions Φ. The scaling law for <R_ee^2> at concentrations well above the overlapping concentration Φ^*, proposed by de Gennes, is given by equation (4), as follows <cit.>:
<R_ee^2> ∼ N^1.0Φ^-0.25 (4)
Here N represents the polymer chain length and Φ is the polymer volume fraction. We investigated the dependence of <R_ee^2> on chain length and polymer concentration by MD simulations. Six models of monodisperse polymer chains were generated at N=50, 100, 200, 250, 500 and 1000, with fixed polymer concentrations at Φ = 0.2 and 0.6 for each model. Additionally, ten models of monodisperse polymer chains were generated at N=1000, but with polymer volume fractions varying from 0.1 to 1.0 at constant intervals of 0.1, to explore the concentration dependence. All simulated systems correspond to heavily overlapped conditions, as they are well above the critical concentration for chain overlap, Φ^*_max=0.068 when N=50. We plotted the calculated values of <R_ee^2> as a function of N and Φ^-0.25 in Figures 3(a)-(b), respectively. The data points represent the simulation values, while the lines were obtained by linear fitting of equation (4). The excellent agreement between the calculated values and the fitted lines given by equation (4) shows that our model can successfully describe concentrated polymer solutions in athermal solvents.
For the entanglement length, using the Z1 method, we calculated the entanglement length N_e of the N=1000 models at different concentrations and plotted the results in Fig. 4. The data points represent the simulated results, and the continuous line represents the fit of N_e=A Φ^-1.25+B, where A and B are fitting constants. Whilst the scaling exponent agrees with the literature findings of N_e∼Φ^-5/4 for flexible polymer chains in good solvent systems <cit.>, a non-zero constant B was observed in this study. This is because polymer chains are unentangled at concentrations below the overlapping concentration Φ^*.
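The two-parameter fit shown in Fig. 4 can be reproduced with SciPy; in the sketch below the concentration grid and N_e values are placeholders standing in for the simulated (Φ, N_e) pairs.

    import numpy as np
    from scipy.optimize import curve_fit

    def ne_model(phi, A, B):
        """Entanglement length versus concentration with the exponent fixed at -5/4."""
        return A * phi ** (-1.25) + B

    phi_vals = np.linspace(0.1, 1.0, 10)            # placeholder concentration grid
    ne_vals = ne_model(phi_vals, 30.0, 31.0)        # placeholder data for illustration
    (A_fit, B_fit), _ = curve_fit(ne_model, phi_vals, ne_vals, p0=[30.0, 30.0])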
After demonstrating the accuracy of our coarse-grained models in predicting the geometric scaling laws for concentrated polymer solutions in athermal solvents, we proceeded to simulate the dynamic relaxation behavior of these polymer solutions of monodisperse polymer chains at N = 1000. We computed the relaxation time constant by analyzing the auto-correlation function of the end-to-end vector. Fig. 5 (a) plots the simulated logarithmic relaxation time versus logarithmic polymer concentrations, with the scattered points representing the simulations and the continuous lines fitted by least square linear fitting. The best fit to the data shows a slope change at a critical concentration Φ_c=0.3.
From the scaling relationship established in Fig. 4, we can deduce that this corresponds to N_e=182. Assuming this corresponds to the long-lived entanglement effect, we may deduce the dynamic critical entanglement length N_c=1000, giving a ratio of N_c/N_e = 5.49, i.e., N_c/N_e = 5∼6. This ratio is nearly the same as that simulated for the pure melt system presented in the previous section. Therefore, we may conclude that both the pure melt and fully flexible polymer chains in athermal solvents show similar ratios between geometric and dynamic entanglement lengths.
To further validate our analysis, we computed the relaxation time constant of the polymer solution by computing the diffusion time constant. The relaxation time corresponds to the time required for a polymer chain displacement over a length scale equals to that of the radius of gyration or mean end-to-end distance. The simulated results are plotted in Fig. 5(b). Like Fig. 5(a), the fitted lines also show a slope change at a critical concentration Φ_c=0.3, and the values of the two slopes are identical to those in Fig. 5(a). This further shows the accuracy and the consistency of our simulation results.
Figs. 6(a)-(b) illustrate the scaling relationships between diffusion coefficient and polymer concentration below and above the critical concentration for polymer chains in athermal solvents, respectively. These plots demonstrate excellent agreement between the calculated results and the literature scaling relations for unentangled and entangled polymer solutions <cit.>, highlighting the ability of our model to accurately simulate the dynamics of polymer solutions. Our simulations of dynamic responses, in terms of scaling relations of diffusivity and relaxation time constant, have uncovered a new critical concentration Φ_c, above which a long-lived entanglement effect becomes significant. Importantly, this critical concentration is significantly larger than the critical concentration for chain overlap. We propose that Φ_c should be regarded as a new parameter to quantify the dynamical state of entangled polymers in athermal solvents.
§.§ C.Local Swelling Effect
Polymer chains in athermal solvents are strongly influenced both by the solvents and by adjacent chains. The size of a target polymer chain becomes swollen by the solvents and follows the law of the self-avoiding walk (SAW) in dilute solutions. In concentrated polymer solutions, the screening effect of nearby chains causes the chains to exhibit Gaussian scaling as a whole. Such a screening effect implies that the swelling effects are limited to the local scale. The local environment of a chain can be described by the packing length, which is defined by Milner as the closest distance between two backbone moieties on different strands. A larger packing length means there are more solvent molecules and fewer adjacent polymer chains around a target polymer chain. Milner's definition enables the packing length to be readily evaluated from the radial distribution function (RDF), and accurate values of the packing length p for flexible polymers of different architectures were obtained accordingly <cit.>. The scaling relations between the packing length and polymer concentration can be derived following the analysis by Y.H. Lin, who stipulates that for all flexible polymers the number of entanglement strands coexisting in the volume penetrated by one entangled strand remains constant <cit.>, i.e.,
p^3∼ N_e (5)
Here p is the packing length, and N_e is the entanglement length. By substituting N_e ∼Φ^-5/4 into equation (5), the packing length is shown to scale with the polymer concentration as follows:
p ∼Φ^-5/12 (6)
The packing length of polymer chains at different polymer concentrations was calculated using the radial distribution function (RDF) method proposed by Milner, and the result was plotted together with that predicted using equation (6) in Fig. 7(a). The agreement between the simulation and the scaling prediction is excellent.
To capture the local swelling effect in our model, we proposed a hypothesis that the polymer chain can be viewed as a series of connected swelling blobs, each with a diameter of ξ and a length of g. The length of each blob, g, can be described by the correlation function of bond vectors. A larger g corresponds to a stiffer chain, as it indicates a stronger correlation between bond vectors. We assumed that the correlation between chain segments in different blobs is lost due to the screening effect, while within a blob, the chain segment cannot effectively perceive the presence of other chains, resulting in swelling by the athermal solvents.
By assuming the blob size scales linearly with the packing size p, we find:
ξ∼Φ^-5/12 (7)
And the coil diameter scales with the correlation length g by:
ξ∼ g^3/5 (8)
Combining equations (7) and (8), we arrive at the concentration scaling expression for the length g as follows:
g∼Φ^-25/36 (9)
Thus, the blob volume Ω_b is:
Ω_b∼ξ^3∼Φ^-5/4 (10)
We plotted the scaling equations for g and Ω_b together with the simulated data in Figs. 7(b)-(c), and observed excellent agreement.
Our hypothesis, based on the swelling concept introduced previously, was that the elastic properties of the polymer chains could be altered, such that the plateau modulus G of polymer chains in athermal solvents may not follow the classical Lin-Noolandi ansatz: G ∼Φ/N_e. We have demonstrated that the chains inside the swelling blobs are stiffer, as they exhibit a larger correlation length g. In the meantime, the local swelling also leads to the formation of larger blob volumes, which renders the chains softer. By taking these two factors into account, we obtained a modified scaling equation for the plateau modulus as follows:
G ∼Φ/(N_e / g) Ω_b∼Φ g/N_eΩ_b (11)
In addition, as described earlier (see Fig. 4), the scaling relation for N_e should account for the non-entangling condition in dilute solutions, i.e., N_e = AΦ^-1.25+B. Combining these together, we arrived at a new scaling expression for G: G ∼Φ^2.28. This expression agrees well with previously reported experimental findings for concentrated polymer solutions in good solvent systems <cit.>. On the other hand, substituting N_e = AΦ^-5/4+B into the classical Lin-Noolandi ansatz gives the scaling relation G ∼Φ^1.69. This unexpected scaling exponent differs from previous literature reports <cit.>; we attribute the discrepancy to the non-negligible B, which had been previously ignored.
To investigate this numerically, we simulated the plateau moduli as a function of polymer concentration under simple shear deformation. We first constructed a large-scale model box, larger than the ⟨R_ee⟩ of the chains, as shown in Fig. 8(a), to ensure that the conformation of the polymers was not restricted by the box size. To simulate the shear process, we deformed the box at a constant velocity along the X-direction <cit.>. Fig. 8(b) shows the stress-strain relationship during shear; for better visualization, the beads in the middle of the system were colored red. The system exhibited a uniform velocity field, and the maximum shear strain was set to 0.2. After the shear deformation, the system was relaxed at the fixed maximum strain; as seen in Fig. 8(c), the chains did not recover during the relaxation process. To further ensure that the deformation was elastic, the stress evolution of the system during and after shearing is plotted in Figs. 9(a)-(b). In Fig. 9(a), the stress depends linearly on strain during shearing, while in Fig. 9(b), with the system held at the maximum strain, no stress relaxation is observed. These results confirm that the system was deformed elastically. We computed the plateau modulus of the polymer chains at concentrations where long-lived entanglements are important, i.e., for Φ = 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0, and plotted the data in Fig. 10 along with the classical Lin-Noolandi ansatz and equation (11).
The orange and blue data points represent the moduli predicted from the MD simulation using the classical Lin-Noolandi ansatz and equation (11), respectively, while the red triangles represent the modulus values obtained from the shear simulation. The orange and blue curves show the fitting results of G ∼Φ/N_e and equation (11), yielding G ∼Φ^1.69 and G ∼Φ^2.28, respectively. The simulated values deviate from the classical Lin-Noolandi ansatz but match equation (11) well. Both equation (11) and the direct numerical modelling show excellent agreement with the concentration scaling law G ∼Φ^2.28. This new scaling relationship matches the experimental results G ∼Φ^2.0-2.3 <cit.>, suggesting that large-scale MD simulations are important for studying the underlying mechanisms of entangled polymer systems.
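Because the constant B in N_e = AΦ^-5/4+B breaks exact power-law behavior, the exponents 1.69 and 2.28 quoted above come from fitting power laws to the computed moduli rather than from pure exponent algebra. A minimal sketch of such a fit is given below; the modulus values in the array are hypothetical placeholders, and in practice one would use the moduli extracted from the shear simulations at Φ = 0.5-1.0.

import numpy as np

# Hypothetical placeholder data: concentrations and plateau moduli G(Phi).
phi = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
G   = np.array([0.021, 0.032, 0.046, 0.062, 0.081, 0.104])  # illustrative only

# Fit G ~ Phi^alpha by linear regression in log-log space.
alpha, log_prefactor = np.polyfit(np.log(phi), np.log(G), 1)
print(f"fitted exponent alpha = {alpha:.2f}")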
§ 4. DISCUSSION
Our study demonstrates that our coarse-grained model can accurately describe the geometric scaling relationships for concentrated solutions of fully flexible polymer chains in athermal solvents. Through large-scale molecular dynamics simulations, we were able to verify and extend theories proposed by de Gennes, Kavassalis, Milner, and others. We also investigated the dynamic behavior of both pure melts and solutions, finding a consistent relationship between the dynamic and geometric entanglement lengths. We identified a critical concentration, above the crossover concentration, at which the solution exhibits prolonged entanglement effects. Furthermore, a local swelling effect was discovered; using the concept of blobs, we correlated this local swelling with the entanglement length N_e, packing length p, blob length g, and blob volume Ω_b, and deduced their scaling relationships with concentration. As a result, we modified the Lin-Noolandi ansatz to account for this local swelling effect.
We acknowledge that our coarse-grained model does not consider certain structural details of chains and solvents, and the drag effect and swelling mechanism remain unresolved issues. However, these limitations do not appear to affect our modelling capability, and further studies are needed to address these issues. Additionally, in this work, we focused only on the properties of flexible chains. We predict that there will be different dynamics in semi-flexible and stiff chains, and we plan to explore this issue in future work.
|
http://arxiv.org/abs/2306.04323v1
|
20230607104001
|
An Analytical Model-based Capacity Planning Approach for Building CSD-based Storage Systems
|
[
"Hongsu Byun",
"Safdar Jamil",
"Jungwook Han",
"Sungyong Park",
"Myungcheol Lee",
"Changsoo Kim",
"Beongjun Choi",
"Youngjae Kim"
] |
cs.DC
|
[
"cs.DC"
] |
|
http://arxiv.org/abs/2306.09445v1
|
20230615185548
|
Understanding the Application of Utility Theory in Robotics and Artificial Intelligence: A Survey
|
[
"Qin Yang",
"Rui Liu"
] |
cs.RO
|
[
"cs.RO",
"cs.AI",
"cs.MA",
"cs.NE",
"cs.SY",
"eess.SY"
] |
Understanding the Application of Utility Theory in Robotics and Artificial Intelligence: A Survey
Qin Yang, Rui Liu
July 31, 2023
==================================================================================================
Abstract —
As a unifying concept in economics, game theory, and operations research, and increasingly in Robotics and AI, utility is used to evaluate the level of individual needs, preferences, and interests. Especially for decision-making and learning in multi-agent/robot systems (MAS/MRS), a suitable utility model can guide agents in choosing reasonable strategies to achieve their current needs and in learning to cooperate and organize their behaviors, optimizing the system's utility, building stable and reliable relationships, and guaranteeing each group member's sustainable development, much as in human society.
Although the complex, large-scale, and long-term behaviors of these systems are strongly determined by the fundamental characteristics of the underlying relationships, there has been comparatively little discussion of the theoretical aspects of these mechanisms and their fields of application in Robotics and AI.
This paper introduces a utility-oriented needs paradigm to describe and evaluate internal and external relationships among agents' interactions. We then survey existing literature in relevant fields to support it and propose several promising research directions, along with some open problems deemed necessary for further investigation.
§ INTRODUCTION
When people study, analyze, and design robotic or artificial intelligence (AI) systems, they often compare them with how natural systems work.
For example, many natural systems (e.g., brains, immune system, ecology, societies) are characterized by apparently complex behaviors that emerge as a result of often nonlinear spatiotemporal interactions among a large number of component systems at different levels of the organization <cit.>.
From the single-robot perspective, cognitive developmental robotics (CDR) aims to provide a new understanding of how human higher cognitive functions develop by using a synthetic approach that developmentally constructs cognitive functions <cit.>.
Multi-agent systems (MAS)[In this paper, we use the term agent to mean autonomous intelligent entities, such as robots.] potentially share the properties of swarm intelligence <cit.> in practical applications (such as search, rescue, mining, map construction, and exploration), representing the collective behavior of distributed and self-organized systems. Many research fields are related to this, especially the building of so-called artificial social systems, such as drones and self-driving cars. Specifically, in multi-robot systems (MRS), each robot can internally estimate the utility of executing an action, and robots' utility estimates will be inexact for several reasons, including mission cost, strategy rewards, sensor noise, general uncertainty, and environmental change <cit.>.
Furthermore, in MAS cooperation, trust is an essential component for describing an agent's purposes. In other words, there are complex relationships between utility, transparency, and trust, which depend on individual purposes <cit.>. Especially when humans and robots collaborate as a team, recognizing the utility of robots will increase interest in group understanding and improve Human-Robot Interaction (HRI) performance in tasks <cit.>. In this paper, we introduce a utility-oriented needs paradigm and survey existing work in robotics and AI through the lens of utility. We organize this review around the relationships between single agents, multi-agent systems, trust among agents, and HRI to underscore the effects of utility on interaction outcomes.
§.§ Scope and Contributions
One of the significant challenges in designing robotic and AI systems is how to appropriately describe the relationships of their interconnected elements and natures.
It is natural to introduce a unifying concept expressing various components and evaluating diverse factors on the same ground.
As a result, Utility Theory is a wide cross-area of research that lies at the intersection of economics, operations research, robotics, AI, cognitive science, and psychology.
To the best of our knowledge, this survey is the first to formalize a comprehensive model of the utility paradigm in agents' interaction from agents' needs perspective and situate prior and ongoing research regarding how the utility can influence inter and outer relationships among agents and affect systems' performance.
The main contributions of this paper are listed as follows:
* We introduce a utility paradigm based on agents' needs to describe and evaluate internal and external relationships among agents' interactions, integrating and summarizing insights from robotics and AI studies.
* We review existing literature in relevant fields that supports the proposed utility-oriented needs paradigm of agents' interaction.
* We propose several open problems for future research based on utility theory, which will support researchers in building robust, reliable, and efficient robotic and AI systems, especially in the MAS and HRI domain.
Fig. <ref> outlines the key relationships of the utility-oriented needs paradigm, based on agents' motivations, among single-agent systems, MAS, and HRI, which affect systems' abilities to meet the requirements of specific tasks and adapt to human needs and uncertain environments. We survey how these utility-oriented relationships affect the performance of different systems, especially evaluating the trust level among agents through various types of utilities and integrating human needs and agents' utilities to build a harmonious team.
§ SYNOPSIS OF UTILITY FORMALISMS
This section summarises the evolution of formalisms that use the concept of utility to improve agents' performance. This evolution can be classified into two main stages. The first stage is characterized by mathematical conceptualizations that extend theories from various fields. The second stage corresponds to formalisms that focus on optimizing systems' capabilities by unifying different concepts based on Utility Theory.
§.§ Utility Theory
Utility theory is postulated in economics to explain the behavior of individuals based on the premise that individuals can consistently rank the order of their choices depending on their needs or preferences. This theoretical approach aims to quantify an agent's degree of preference across a set of available alternatives and understand how these preferences change when an agent faces uncertainty about which alternative it will receive <cit.>.
To describe the interactions between multiple utility-theoretic agents, we use the specific utility function to analyze their preferences and rational action. The utility function is a mapping from states of the world to real numbers, which are interpreted as measures of an agent's level of happiness (needs) in the given states. If the agent is uncertain about its current state, the utility is defined as the expected value (Eq. (<ref>)) of its utility function for the appropriate probability distribution over the states <cit.>. From the perspective of the connection between a decision-maker and its preference, a decision-maker would rather implement a more preferred alternative (act, course of action, strategy) than one that is less preferred <cit.>.
𝔼[ u(X) ] = ∑_i u(x_i) · P(x_i)
Generally, utility serves as a unified concept in value systems to describe the capacity of action, behavior, and strategy to satisfy the agents' desires or needs in their decision-making. Furthermore, modern utility theory has its origins in the theory of expected value, and the value of any course of action could be determined by multiplying the gain realized from that action by the likelihood of receiving that gain <cit.>.
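The expected-value rule above is straightforward to operationalize: given a probability distribution over states and a utility for each state, the agent prefers the alternative with the largest expected utility. The sketch below is a minimal illustration; the alternatives, probabilities, and utility values are hypothetical and serve only to show the computation.

# Minimal expected-utility comparison between two alternatives (hypothetical numbers).
def expected_utility(utilities, probabilities):
    # E[u(X)] = sum_i u(x_i) * P(x_i)
    return sum(u * p for u, p in zip(utilities, probabilities))

# Alternative A: safe action with a narrow outcome spread.
eu_a = expected_utility(utilities=[4.0, 5.0], probabilities=[0.5, 0.5])
# Alternative B: risky action with a wide outcome spread.
eu_b = expected_utility(utilities=[0.0, 10.0], probabilities=[0.6, 0.4])

best = "A" if eu_a >= eu_b else "B"
print(eu_a, eu_b, best)  # 4.5 4.0 A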
§.§ Basic Concepts in Agent's Motivation
Suppose AI agents can sense various environments, monitor their behaviors, and obtain internal and external information through perception P(t) at a given time. The data can generally be categorized into: variables in the environment (light, sound, color, etc.), the agent's physical structure and operation (arm extension, leg contact, etc.), and the operation of the cognitive processes within the robot (learning error, perceptual novelty, etc.) <cit.>. A set of values is then transmitted to the agent's actuators, making up the actuation vector A(t) at a given time. Furthermore, we can define a series of specific concepts as follows:
* Goal: An agent obtains utilities in a perceptual state P(t).
* Innate Value/needs/Motivation/Drive: An agent has the plasticity to encode the feedback received from its interactive environments continuously, and the encoding of experience should facilitate the emergence of adaptive behaviors <cit.>.
* Strategy: A strategy describes the general plan of an agent achieving short-term or long-term goals under uncertainty, which involves setting sub-goals and priorities, determining action sequences to fulfill the tasks, and mobilizing resources to execute the actions <cit.>.
* Utility: The benefit of using a strategy is to achieve a goal satisfying a specific innate value and needs. The utility is determined by the system's interaction with the environment and is not known a priori, traditionally called Reward in the reinforcement learning field <cit.>.
* Expected Utility (e_u): The probability of obtaining utility starting from a given perceptual state P(t), modulated by the amount of utility obtained.
* Utility Model: A function provides the expected utility for any point in state space, which establishes the relationship between the real utility obtained from the interaction and how to achieve the goal.
* Value Function (VF): A specific utility model expresses the expected utility for any perceptual state P(t):
e_u(t+1) = VF(P(t+1))
* Episode: It describes a circle of the interaction between agents and environments:
episode = {P(t); A(t); P(t+1); e_u(t+1)}
Especially, <cit.> presents a model for a generic intrinsically motivated agent shown in Fig. <ref> [Here, intrinsic motivation components modulate behavioral components to connect perception components (sensors) and actuators.]. Through sensors generating an environment state at each time step, the agent will form various goals and corresponding expected utilities in motivation components. Then, the behavioral components will make decisions based on specific goals to modify the agent's internal state and generate diverse strategies and actions to achieve goals by triggering actuators to interact with the environment changing the external state.
§.§ Needs-driven Behaviors
In nature, from cells to humans, all intelligent agents exhibit different kinds of hierarchical needs, from low-level physiological needs (food and water) in microbes and animals to high-level self-actualization needs (creative activities) in human beings <cit.>. Different levels of needs stimulate agents to adopt diverse behaviors to achieve certain goals or purposes and satisfy their various needs in the natural world <cit.>. These can also be regarded as self-organizing behaviors <cit.>, which keep interacting with the environment and exchanging information and energy to support the system's survival and development.
Specifically, needs describe the necessities for a self-organizing system to survive and evolve; they arouse an agent to act toward a goal, giving purpose and direction to behavior and strategy. The dominant approach to modeling an agent's interests or needs is Utility Theory. <cit.> introduced self-sufficient robots' decision-making based on a basic work cycle: find fuel – refuel. They utilized utility criteria, such as utility maximization, to guide robots in performing opportunistic behaviors <cit.>.
From the utility theory perspective, needs are regarded as innate values driving agents to interact with environments.
Especially the ultimate payoff for animals is to maximize inclusive fitness utilizing some evolutionary strategy, and for humans is to maximize profitability through some marketing strategy <cit.>.
If we consider the ecological status of AI agents, like robots, accomplishing tasks in the real world, their situation is similar to our ecological situation <cit.>.
Especially reinforcement learning (RL) is an excellent model to describe the needs-driven behaviors of AI agents.
The essence of RL is learning from interaction based on reward-driven (such as utilities and needs) behaviors, much like natural agents.
When an RL agent interacts with the environment, it can observe the consequence of its actions and learn to change its behaviors based on the corresponding rewards received.
The theoretical foundation of RL is the paradigm of trial-and-error learning rooted in behaviorist psychology <cit.>. In RL, the policy dictates the actions that the agent takes as a function of its state and the environment, and the goal of the agent is to learn a policy maximizing the expected cumulative rewards/utilities in the process.
Fig. <ref> illustrates the RL process of rewards-driven behaviors <cit.>.
§.§ Utility-oriented Needs Paradigm Systems
Fig. <ref> represents the relationships among single agents, multi-agent systems, and humans based on the paradigm of utility-oriented needs. More specifically, before an individual agent cooperates with group members, it needs to meet some basic needs, such as having enough energy and guaranteeing other agents' safety. It then requires corresponding capabilities to collaborate with other agents, satisfy task requirements, and adapt to dynamically changing environments (Fig. <ref> shows four categories of robots, namely carrier, supplier, observer, and executor, working as a team in the middle graph). In particular, trust among agents can be evaluated based on their current status and needs, performance, knowledge, experience, motivations, and so on.
Furthermore, when AI agents work with humans, they should satisfy humans' needs and assist humans in fulfilling their missions efficiently. Therefore, it is necessary to build a unified utility paradigm describing the needs of AI agents and humans on common ground for evaluating relationships such as trust, which is also the precondition of safety, reliability, stability, and sustainability in cooperation. In the following sections, we review the existing literature from the perspectives of single-agent systems, multi-agent systems (MAS), trust among agents, and human-robot interaction (HRI) to support the proposed utility-oriented needs paradigm.
§ SINGLE-AGENT SYSTEMS
For single-agent systems, cognitive robotics is the most typical research area for implementing Utility Theory. It studies the mechanisms, architectures, and constraints that allow lifelong and open-ended improvement of perceptual, reasoning, planning, social, knowledge acquisition, and decision-making skills in embodied machines <cit.>. Especially building a value [In this section, we use the terms value and utility interchangeably.] system to mimic the “brain” of an AI agent mapping behavioral responses for sensed external phenomena is the core component of cognitive robotics, which is also an emerging and specialized sub-field in neurorobotics, robotics, and artificial cognitive systems research.
Here, the value measures the effort an agent will expend to obtain a reward or avoid punishment. It is not hard-wired for an AI agent, or even a biological entity, and the specific value system acquired through experience reflects an agent's subjective evaluation of the sensory space. Value mechanisms are usually defined as expected values, particularly in uncertain environments. Moreover, the innate value reflects the agent's initial subjective evaluation of the sensory space, while the acquired value is shaped through experience during its development.
<cit.> defines the artificial value system for the autonomous robot as a mechanism that can reflect the effect of their experience in future behaviors. In the life of an agent, the agent starts from inherent values, and its acquired values are modified through interaction with various environments and knowledge accumulation in learning. So the acquired value is thus activity-dependent and allows the value system to become sensitive to stimuli that are not able to trigger a value-related response by themselves <cit.>.
According to the development of cognitive robotics, existing value systems can mainly be classified into three categories. Neuroanatomical Systems discuss explainable, biologically inspired value system designs from neuroanatomical and physiological perspectives <cit.>; Neural Network Systems build more abstract models through mathematical approaches to mimic the agent's value system; Motivational Systems consider models in which agents interact with environments to satisfy their innate values, with reinforcement learning (RL) as the typical mechanism. We review the three categories as follows:
§.§ Neuroanatomical Systems
In studies of the theory of primate nervous systems in the brain, researchers have usually used robotic systems as a test-bed <cit.>, figuring out how neurons influence each other, how long neurons stay active, and which brain regions are affected <cit.>. Fig. <ref> illustrates the functions of the brain from anatomical and physiological perspectives.
Following the goals of synthetic neural modeling, they aim to forge links between crucial anatomical and physiological properties of nervous systems and their overall function during autonomous behavior in an environment <cit.>.
For example, <cit.> designed a value system in the NOMAD mobile robot, driven by the object – “taste” to modulate changes in connections between visual and motor neurons, thus linking specific visual responses to appropriate motor outputs. <cit.> described the robotic architecture embedding of a high-level model of the basal ganglia and related nuclei based on varying motivational and sensory inputs, such as fear and hunger, to generate coherent sequences of robot behavior.
Especially based on the principles of neuromodulation in the mammalian brain, <cit.> presents a strategy for controlling autonomous robots. They used a cognitive robot – CARL-1, to test the hypothesis that neuromodulatory activity can shape learning, drive attention, and select actions. The experiments demonstrated that the robot could learn to approach and move away from stimuli by predicting the positive or negative value.
Recently, <cit.> shows the putative functions ascribed to dopamine (DA) emerging from the combination of a standard computational mechanism coupled with differential sensitivity to the presence of DA across the striatum through testing on the simulated humanoid robot iCub interacting with a mechatronic board. However, it is worth mentioning that the current trend tries to blur the line between designing value systems that follow the neuroanatomy and biology of the brain and artificial neural networks (ANN) that are biologically inspired but less anatomically accurate <cit.>.
§.§ Artificial Neural Networks Systems
Artificial Neural Networks (ANN) are an information-processing model inspired by the biological nervous system of the human brain. Similarly, Fig. <ref> demonstrates how different functional modules within the human brain connect and work as a neural network to perform various reasoning activities. Like the human brain adjusting the synaptic relationships between and among neurons during learning, an ANN also needs to modify the parameters of its “nodes” (neurons) to adapt to a specific scenario through a learning process. Fig. <ref> illustrates a two-layered feedforward neural network.
There are several typical ANN models, such as restricted Boltzmann machines (RBMs) <cit.>, spiking neural networks <cit.>, adaptive resonance theory (ART) networks <cit.>, autoencoders (AE) <cit.>, convolutional neural networks (CNNs) <cit.>, and growing neural gases (GNGs) <cit.>. In an ANN, nodes working as neurons are non-linear processing units receiving various signals and performing different processing steps, as depicted graphically in Fig. <ref>. Moreover, the AI agent's sensor states provide its properties for the input nodes, and the output nodes express the corresponding actions and behaviors in the current state. The agent needs to learn a function that describes its future expected reward (utility) for a specific action or strategy in a given state. Because the reward is inversely proportional to the learning error, the agent is more motivated to learn the actions with higher learning error until it acquires the skill; it then switches to another action to learn, based on its needs.
Especially <cit.> proposes a neural network architecture for integrating different value systems with reinforcement learning, presenting an empirical evaluation and comparison of four value systems for motivating exploration by a Lego Mindstorms NXT robot. <cit.> introduces a model of hierarchical curiosity loops for an autonomous active learning agent by selecting the optimal action that maximizes the agent’s learning of sensory-motor correlations. From the information theory perspective, <cit.> presents a method linking information theoretic quantities on the behavioral level (sensor values) to explicit dynamical rules on the internal level (synaptic weights) in a systematic way. They studied an intrinsic motivation system for behavioral self-exploration based on the maximization of the predictive information using a range of real robots, such as the humanoid and Stumpy robots.
<cit.> recently proposed a value system for a developmental robot using an RBM as the data structure. They use simulations to demonstrate a mechanism that allows the agent to accumulate knowledge in an organized sequence of gradually increasing complexity while hardly learning from purely random areas. From another angle, RL has gradually become a popular and effective model for non-specific reward tasks <cit.>, especially when combined with deep learning methods.
§.§ Motivational Systems
To clarify agents' motivation, providing an expected utility for each perceptual state in the motivation evaluation is a precondition for their decision-making, especially for discovering goals, achieving them automatically, and determining the priority of drives <cit.>. Many machine learning algorithms share the properties of value systems and provide a way for AI agents to improve their performance in tasks while respecting their innate values <cit.>. The most typical and general model is reinforcement learning (RL).
The essence of RL is learning from interaction. When an RL agent interacts with the environment, it can observe the consequences of its actions and learn to change its behavior based on the corresponding rewards received. Moreover, the theoretical foundation of RL is the paradigm of trial-and-error learning rooted in behaviorist psychology <cit.>. In this process, the best action sequences of the agent are determined by the rewards obtained through interaction, and the goal of the agent is to learn a policy π maximizing its expected return (cumulative discounted utilities). In short, given a state, a policy returns an action, and the agent learns an optimal policy from the consequences of its actions through trial and error to maximize the expected return in the environment. Figure <ref> illustrates this perception-action-learning loop.
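As a concrete and deliberately minimal illustration of this perception-action-learning loop, the sketch below implements tabular Q-learning. The environment interface (reset, step, actions) is a hypothetical stand-in for whatever task the agent faces, and the reward plays the role of the utility signal discussed above; this is not a specific method from the cited works.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning; `env` is a hypothetical object with reset()/step()/actions."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated expected return (utility)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over the agent's available actions.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference update toward reward + discounted best next value.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q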
Furthermore, the flexibility of RL permits it as an effective utility system to model agents' motivation through designing task-nonspecific reward signals derived from agents' experiences <cit.>. <cit.> introduced one of the earliest self-motivated value systems integrated with RL, demonstrating that the SAIL robot could learn to pay attention to salient visual stimuli while neglecting unimportant input. To understand the origins of reward and affective systems, <cit.> built artificial agents sharing the same intrinsic constraints as natural agents: self-preservation and self-reproduction. <cit.> uses a table-based Q-learning as a baseline compared with other function approximation techniques in motivated reinforcement learning.
From the mechanism of intelligent adaptive curiosity perspective, <cit.> introduced an intrinsic motivation system pushing a robot toward situations to maximize its learning progress. <cit.> presented a curiosity-driven reinforcement learning approach that explores the iCub's state-action space through information gain maximization, learning a world model from experience, and controlling the actual iCub hardware in real time. To help the iCub robot intrinsically motivated to acquire, store and reuse skills, <cit.> introduced Continual Curiosity driven Skill Acquisition (CCSA), a set of compact low-dimensional representations of the streams of high-dimensional visual information learned through incremental slow feature analysis.
Especially, <cit.> proposed an RL model combining the agent's basic needs satisfaction with intrinsic motivations. It was tested in a simulated survival domain, where a robot was engaged in survival tasks such as finding food or water while avoiding dangerous situations. Furthermore, <cit.> recently proposed the first formalized utility paradigm – the Agent Needs Hierarchy – for describing the motivations of an AI agent from the system theory and psychological perspectives. The Agent Needs Hierarchy is similar to Maslow's hierarchy of human needs <cit.>. An agent's abstract needs for a given task are prioritized and distributed into multiple levels, each of them preconditioned on its lower levels. At each level, the needs are expressed as an expectation over the corresponding factors'/features' distribution for the specific group <cit.>. Fig. <ref> illustrates the Agent Needs Hierarchy as five different modules in the proposed Self-Adaptive Swarm Systems (SASS) <cit.>.
Specifically, the lowest (first) level comprises the safety features of the agent (e.g., collision detection, fault detection, and other features that assure the safety of the agent, humans, and other friendly agents in the environment). The safety needs can be calculated from the safety features' values and the corresponding features' probabilities based on the current state of the agent. After satisfying its safety needs, the agent considers its basic needs, which include features such as energy and data communication levels that help maintain the basic operations of that agent.
Only after meeting the safety and basic needs can an agent consider its capability needs, which are composed of features such as its health level, computing (e.g., storage, performance), and physical functionalities (e.g., resources, manipulation).
At the next higher level, the agent can identify its teaming needs, which account for the contributions of this agent to its team through several factors (e.g., heterogeneity, trust, actions) that the team needs so that it can form a reliable and robust team to successfully perform a given mission.
Ultimately, at the highest level, the agent learns skills or features to improve its capabilities and performance in achieving a given task through RL. Later work by <cit.> presented a strategy-oriented Bayesian soft actor-critic (BSAC) Model based on the agent needs, which integrates an agent strategy composition approach termed Bayesian Strategy Network (BSN) with the soft actor-critic (SAC) method to achieve efficient deep reinforcement learning (DRL).
Generally, the set of agent needs reflects its innate values and motivations. It is described as the union of needs at all the levels in the needs hierarchy. Each category needs level is combined with various similar needs (expected utilities) presenting as a set, consisting of individual hierarchical and compound needs matrix <cit.>. It drives the agent to develop various strategies and diverse skills to satisfy its intrinsic needs and values, much like a human does.
§ MULTI-AGENT SYSTEMS
A multi-agent system (MAS) consists of several agents interacting with each other, which may act cooperatively, competitively, or exhibit a mixture of these behaviors <cit.>. In the most general case, agents will act on behalf of users with different goals and motivations to cooperate, coordinate, and negotiate with each other, achieving successful interaction, such as in human society <cit.>. Most MAS implementations aim to optimize the system's policies with respect to individual needs and intrinsic values, even though many real-world problems are inherently multi-objective <cit.>. Therefore, many conflicts and complex trade-offs in the MAS need to be managed, and compromises among agents should be based on the utility mapping the innate values of a compromise solution – how to measure and what to optimize <cit.>.
However, in the MAS setting, the situation becomes much more complex when we consider that individual utility reflects an agent's own needs and preferences. For example, although we assume each group member receives the same team reward in fully cooperative MAS, the benefits received by an individual agent usually differ significantly according to its contributions and innate values in real-world scenarios or general multi-agent settings. Especially in distributed intelligent systems, such as multi-robot systems (MRS), self-driving cars, and delivery drones, there is no single decision-maker with full information and authority. Instead, the system performance greatly depends on the decisions made by interacting entities with local information and limited communication capabilities <cit.>. This section reviews works on decision-makers' compromises in MAS from the perspective of game-theoretic, utility-based cooperative decision-making grounded in agents' needs and innate values. Then, we discuss the design and influence of individual utility functions to induce desirable system behaviors and optimization criteria.
§.§ Game-theoretic Utility Systems
From a game-theoretic control perspective, most of the research in game theory has been focused on single-stage games with fixed, known agent utilities <cit.>, such as distributed control in communication <cit.> and task allocation <cit.>. Especially, recent MAS research domains focus on solving path planning problems for avoiding static or dynamical obstacles <cit.> and formation control <cit.> from the unintentional adversary perspective. For intentional adversaries, the Pursuit Domain <cit.> primarily deals with how to guide pursuers to catch evaders <cit.> efficiently. Similarly, the robot soccer domain <cit.> deals with how one group of robots wins over another group of robots on a typical game element.
Furthermore, optimal control of MAS via game theory assumes a system-level objective is given, and the utility functions for individual agents are designed to convert a multi-agent system into a potential game <cit.>. Although game-theoretic approaches have received intensive attention in the control community, they remain a promising new approach to distributed control of MAS. In particular, dynamic non-cooperative game theory using distributed optimization in MAS has been studied in <cit.>. In <cit.>, the authors investigated MAS consensus, and the work in <cit.> provided an algorithm for large-scale MAS optimization. However, designing utility functions, learning from global goals for potential game-based optimization of control systems, and converting the original optimization problem into a potential networked game are still open challenges <cit.>.
It is worth mentioning that <cit.> and <cit.> introduced an architectural overview from the perspectives of decomposing the problem into utility function design and building the learning component. Specifically, the utility function design concerns the satisfaction with the results of emergent behaviors from the standpoint of the system designer, and the learning component design focuses on developing distributed algorithms that coordinate agents towards one such equilibrium configuration. Recently, <cit.> solved the utility function design of separable resource allocation problems through a systematic methodology optimizing the price of anarchy. <cit.> discussed learning in games, in which rational agents converge to an equilibrium allocation by revising their choices over time in a dynamic process.
More recently, <cit.> proposed a new model called the Game-Theoretic Utility Tree (GUT), combining the core principles of game theory and utility theory to achieve cooperative decision-making for MAS in adversarial environments. It is combined with a new payoff measure based on agent needs for real-time strategy games. By calculating the Game-Theoretic Utility Computation Unit distributed at each level, the individual can decompose high-level strategies into executable low-level actions. The GUT was demonstrated in the proposed Explore Game domain and verified on the real robot testbed – Robotarium <cit.> – indicating that GUT can organize more complex relationships in MAS cooperation and help the group achieve challenging tasks with lower costs and higher winning rates. Fig. <ref> illustrates a MAS (UAV) working in adversarial environments with several typical scenarios.
§.§ Cooperative Decision-making and Learning
Cooperative decision-making among the agents is essential to address the threats posed by intentional physical adversaries or determine tradeoffs in tactical networks <cit.>.
Although cooperative MAS decision-making used to be studied in many separate communities, such as evolutionary computation, complex systems, game theory, graph theory, and control theory <cit.>, these problems are either episodic or sequential <cit.>. Agents' actions or behaviors are usually generated from a sequence of actions or policies, and decision-making algorithms are evaluated based on utility-oriented policy optimality, search completeness, time complexity, and space complexity.
Furthermore, existing cooperative decision-making models use Markov decision process (MDP) and its variants <cit.>, game-theoretic methods, and swarm intelligence <cit.>. They mostly involve using Reinforcement Learning (RL) and Recurrent Neural Networks (RNN) to find optimal or suboptimal action sequences based on current and previous states for achieving independent or transferred learning of decision-making policies <cit.>.
Considering decision-making in the context of cooperative MAS, the learning process can be centralized or decentralized. <cit.> divides it into two categories: team learning and concurrent learning.
In team learning, only one learner is involved in the learning process and represents a set of behaviors for the group of agents; this is a simple approach to cooperative MAS learning since standard single-agent machine learning techniques can handle it.
Since multiple learning processes can improve the team's performance, concurrent learning is the most common approach in cooperative MAS. Because concurrent learning projects the large joint team search space onto separate, smaller subset search spaces, <cit.> argue that it is suitable for domains that can be decomposed into independent problems and benefit from such decomposition. Especially when individual behaviors are relatively disjoint, it can dramatically reduce the search space and computational complexity. Furthermore, breaking the learning process into smaller chunks provides more flexibility for the individual learning processes in using computational resources.
However, <cit.> argue that the central challenge for concurrent learning is that each learner is adapting its behaviors in the context of other co-adapting learners over which it has no control, and there are three main thrusts in the area of concurrent learning: credit assignment, the dynamics of learning, and teammate modeling.
The credit assignment problem focuses on appropriately apportioning the group rewards (utilities) to the individual learners. The most typical solution is to split the team reward equally among learners, or to ensure that the reward trends of individual learners are the same. The approach of dividing among learners the rewards received through joint actions or strategies is usually termed global reward. <cit.> argue that global reward does not scale well to increasingly difficult problems because the learners do not have sufficient feedback tailored to their own specific actions.
In contrast, if we do not divide the team rewards equally, each agent's performance is evaluated based on its individual behaviors, which means agents have little motivation to cooperate and tend toward greedy behaviors; these methods are the so-called local reward. Through studying different credit assignment policies, <cit.> argues that local reward leads to faster learning rates, but not necessarily to better results than global reward. Because local rewards increase the homogeneity of the learning group, he suggests that credit assignment selection relies on the desired degree of specialization. Furthermore, <cit.> argues that individual learning processes can be improved by combining separate local reinforcement with types of social reinforcement.
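The difference between the two credit-assignment schemes can be stated compactly: under a global reward every learner receives the same team-level signal, while under a local reward each learner is scored only on its own contribution. The short sketch below contrasts the two with hypothetical per-agent contributions; it is an illustration of the idea rather than any specific method from the cited works.

# Hypothetical per-agent contributions to a joint task outcome.
contributions = {"agent_1": 3.0, "agent_2": 1.0, "agent_3": 0.0}
team_reward = sum(contributions.values())

# Global reward: every learner receives the same team-level signal.
global_rewards = {a: team_reward for a in contributions}

# Local reward: each learner is reinforced only for its own contribution.
local_rewards = dict(contributions)

print(global_rewards)  # identical feedback, weakly tailored to individual actions
print(local_rewards)   # tailored feedback, but can encourage greedy behavior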
The dynamics of learning consider the impact of co-adaptation on the learning processes. Assuming agents work in dynamically changing and unstructured environments, they need to constantly track the shifting optimal behavior or strategy to adapt to various situations, especially since agents may change other group members' learning environments. Evolutionary game theory provides a common framework for cooperative MAS learning. <cit.> studied the properties of cooperative coevolution, and <cit.> visualize basins of attraction to Nash equilibria for cooperative coevolution.
Moreover, much research in concurrent learning involves game theory to investigate MAS problems <cit.>. In particular, a Nash equilibrium provides a joint strategy for the group members from which no individual has the motivation to unilaterally shift its strategy or action in pursuit of a better reward. In the fully cooperative scenario with global reward, the reward affects the benefit of each agent, and only the globally optimal Nash equilibrium needs to be considered. However, the relationships among agents are less clear in more general situations, which combine cooperative and non-cooperative scenarios. In other words, individual rewards do not directly relate to the team reward, so the general-sum game <cit.> is the appropriate cooperative learning paradigm.
§.§ From Individual Utility to Social Welfare
As discussed above, the reward, like the utility, reflects the agents' diverse intrinsic motivations and needs. Individual and team rewards also directly describe the relationships between each group member and the team.
Take MAS football and the robot-soccer domain as an example. When considering multi-agent systems (MAS) cooperating as a team, an individual agent must first master essential skills to satisfy low-level needs (safety, basic, and capability needs) <cit.>. Then, it develops effective strategies fitting middle-level needs (such as teaming and collaboration) to guarantee the system's utility <cit.>. Through learning from interaction, MAS can optimize group behaviors and present complex strategies adapted to various scenarios, achieving the highest-level needs and fulfilling evolution. By cooperating to achieve a specific task, gaining expected utility (reward), or countering adversaries to decrease threats, intelligent agents can benefit the entire group's development or utility while guaranteeing individual needs and interests.
It is worth mentioning that <cit.> developed the end-to-end learning implemented in the MuJoCo multi-agent soccer environment <cit.>, which combines low-level imitation learning, mid-level, and high-level reinforcement learning, using transferable representations of behavior for decision-making at different levels of abstraction. Figure <ref> represents the training process.
However, suppose we extend the scale to an entire system, such as a community or society, and consider optimizing the overall social welfare across all agents. In that case, we have to introduce the social choice utility. In this setting, agents have different agendas driven by their needs, leading to complex dynamic behaviors when interacting, which makes the utility hard to predict, let alone optimize. To address this problem, we usually assume that agents are self-interested and optimize their utilities from the perspective of socially favorable desires and needs <cit.>. By defining a social choice function representing social welfare that reflects each agent's needs and desired outcome, we can design a system of payments making the joint policy converge to the desired outcome <cit.>.
For example, the government controls the parameters of tenders to balance the different requirements of projects for social welfare. Also, <cit.> discussed computing a convex coverage set for a cooperative multi-objective multi-agent MDP balancing traffic delays and costs as a posteriori in traffic network maintenance planning.
From another perspective, considering individual rewards and utilities from the standpoint of social choice, they are weighted and summed up to form a social welfare function. In other words, the social welfare function relies on individual values or returned rewards and on its utility computation model or mechanism. For example, <cit.> discussed which properties a multi-criteria function must satisfy so that it can be used to determine the winner of a multi-attribute auction. Here, social welfare may depend on the attributes of the winning bids, as well as on a fair outcome in terms of payments to the individual agents, which, together with the costs the agents incur to execute their bids if chosen, typically determine the individual utilities <cit.>.
Generally speaking, the design of the social welfare function aims to find a mechanism for making agents' utility function transparently reflect their innate values and needs, which can optimize the joint policy concerning sustainable social welfare and guarantee individual development.
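A common way to make this concrete is a weighted-sum social choice function over individual utilities. The sketch below compares two joint policies under hypothetical weights and per-agent utilities; the weighted (utilitarian) sum is only one of many possible welfare formulations, and egalitarian or Nash welfare would rank outcomes differently.

# Hypothetical per-agent utilities induced by two candidate joint policies.
policy_utilities = {
    "policy_A": [5.0, 5.0, 5.0],   # balanced outcome
    "policy_B": [12.0, 2.0, 2.0],  # efficient but unequal outcome
}
weights = [1.0, 1.0, 1.0]  # the designer's importance weights for each agent

def utilitarian_welfare(utilities, weights):
    # Weighted-sum (utilitarian) social welfare: W = sum_i w_i * u_i.
    return sum(w * u for w, u in zip(weights, utilities))

best = max(policy_utilities, key=lambda p: utilitarian_welfare(policy_utilities[p], weights))
print(best)  # policy_B under equal weights; other welfare functions may disagree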
§ TRUST AMONG AGENTS
Trust describes the interdependent relationship between agents <cit.>, which can help us better understand the dynamics of cooperation and competition, the resolution of conflicts, and the facilitation of economic exchange <cit.>. From the economic angle, <cit.> has regarded trust as an expectation or a subjective probability and defined it using expected utility theory combined with concepts such as betrayal premium.
In Automation, the authors in <cit.> describe trust in human-automation relationships as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability. The trustor here is a human, and the trustee is usually the automation system or an intelligent agent such as a robot. Such systems' primary purpose is to assess and calibrate the trustor's trust beliefs, reflecting the automation system's ability to achieve a specific task.
In Computing and Networking, the concept of trust involves many research fields, such as artificial intelligence (AI), human-machine interaction, and communication networks <cit.>. They regard an agent's trust as a subjective belief representing the reliability of other agents' behaviors in a particular situation with potential risks. Decisions are based on learning from experience to maximize interest (or utility) and minimize risk <cit.>. Especially in the social IoT domain <cit.>, trust is measured based on information accuracy and agents' intentions (friendly or malicious) according to the current environment, to avoid attacks on the system. <cit.> proposes a decision-making model named MATE, which applies the expected-utility decision rule to decide whether or not it is in a pervasive device's interest to load a particular component.
In MAS, trust is mainly defined in two ways. One is as a reliable relation between human operators and the MAS/MRS. In this context, trust reflects human expectations and team performance assessments <cit.>. The primary research includes using trust to measure the degree of satisfaction with robot team performance, jointly considering in-process behaviors and final mission accomplishment. In this process, the visible and invisible behaviors of humans supervising robots and of the robots themselves are investigated to determine the most influential factors. Especially, investigating human trust in robots will help the robots make decisions or react to emergencies, such as obstacles, when human supervision is unavailable <cit.>.
The second type of trust is defined as reliable relations among agents, reflecting the trusting agents' mission satisfaction, organizational group member behaviors and consensus, and task performance at the individual level. Moreover, trust has been associated with system-level analysis, such as reward and function in the organized group structure in a specific situation, the resolution of conflicts between heterogeneous and diverse needs, and system evolution through learning from interaction and adaptation.
On the other hand, the act of trusting by an agent can be conceptualized as consisting of two main steps: 1) trust evaluation, in which the agent assesses the trustworthiness of potential interaction partners; 2) trust-aware decision-making, in which the agent selects interaction partners based on their trust values <cit.>. Also, assessing the performance of agent trust models is of concern to agent trust research <cit.>.
It is worth noting that reputation and trust mechanisms have gradually become critical elements in MAS design, since agents depend on them to evaluate the behavior of potential partners in open environments <cit.>. As an essential component of reputation, trust is generated from agents' previous behaviors and experiences, and reputation becomes an implicit social control artifact <cit.> when agents select partners or exclude them due to social rejection (a bad reputation).
Furthermore, considering the interactions between human agents and artificial intelligence agents like human-robot interaction (HRI), building stable and reliable relationships is of utmost importance in MRS cooperation, especially in adversarial environments and rescue missions <cit.>. In such multi-agent missions, appropriate trust in robotic collaborators is one of the leading factors influencing HRI performance <cit.>. In HRI, factors affecting trust are classified into three categories <cit.>: 1) Human-related factors (i.e., including ability-based factors, characteristics); 2) Robot-related factors (i.e., including performance-based factors and attribute-based factors); 3) Environmental factors (i.e., including team collaboration, tasking, etc.). Although there is no unified definition of trust in the literature, researchers take a utilitarian approach to defining trust for HRI adopting a trust definition that gives robots practical benefits in developing appropriate behaviors through planning and control <cit.>.
Moreover, with the development of explanations for machine learning, such as explainable AI, although explanation and trust have been intertwined for historical reasons, the utility of machine learning explanations will become more general and consequential <cit.>. In particular, the utility of an explanation must be tied to an appropriate trust model.
Although modeling trust has been studied from various perspectives and involves many different factors, it is still challenging to develop a conclusive trust model that incorporates all of these factors.
Therefore, future research into trust modeling needs to be more focused on developing general trust models based on measures other than the countless factors affecting trust <cit.>. Specifically, few works from the literature considered trust in socio-intelligent systems from an AI agent perspective to model trust with agents having similar needs and intrinsic values.
Recently, <cit.> proposed a novel trust model termed relative needs entropy (RNE) by combining utility theory and the concept of relative entropy based on the agent's needs hierarchy <cit.> and innate values. Similar to information entropy, it defines the entropy of needs as the difference or distance of needs distribution between agents in a specific scenario for an individual or group.
From a statistical perspective, the RNE can be regarded as calculating the similarity of high-dimensional samples drawn from the agents' needs vectors. A lower RNE value means that the trust level between agents or groups is higher, because their needs are well-aligned and there is little difference (distance) between their needs distributions. Conversely, a higher RNE value means that the needs distributions are diverse and the trust level between the agents or groups is low because of the misalignment in their motivations.
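The intuition behind RNE, namely trust as alignment of needs distributions, can be illustrated with a standard divergence measure. The sketch below uses a symmetrized Kullback-Leibler divergence between two hypothetical needs distributions as a stand-in; it is not the exact RNE formula from the cited work, and the needs categories and numbers are illustrative assumptions.

import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) for discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def needs_distance(p, q):
    # Symmetrized KL as an illustrative stand-in for the relative needs entropy.
    return 0.5 * (kl(p, q) + kl(q, p))

# Hypothetical needs distributions over [safety, basic, capability, teaming, learning].
agent_a = [0.30, 0.25, 0.20, 0.15, 0.10]
agent_b = [0.28, 0.27, 0.20, 0.15, 0.10]   # well-aligned with agent_a
agent_c = [0.05, 0.10, 0.15, 0.30, 0.40]   # misaligned with agent_a

print(needs_distance(agent_a, agent_b))  # small value -> higher mutual trust
print(needs_distance(agent_a, agent_c))  # large value -> lower mutual trust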
As discussed above, trust is the basis for decision-making in many contexts and the motivation for maintaining long-term relationships based on cooperation and collaboration <cit.>. Although trust is a subjective belief, it is a rational decision based on learning from previous experience to maximize its interest (or utility) and minimize risk by choosing the best compromise between risk and possible utility from cooperation.
Trust assessment involves various factors, particularly utility and risk analysis under dynamic situations, which rely on context and balance key tradeoffs between conflicting goals to maximize decision effectiveness and task performance.
§ HUMAN-ROBOT INTERACTION
As higher-level intelligent creatures, humans exhibit more complex and diversified needs, such as personal security, health, friendship, love, respect, recognition, and so forth. In this regard, humans are highly flexible resources due to the high degree of motion of their physical structure and their high level of intelligence and recognition <cit.>. When humans and robots work as a team, organizing their needs and finding common ground is a precondition for human-robot collaboration in complex missions and dynamically changing environments. To build more stable and reliable relationships between humans and robots, <cit.> depicts an updated model in HRI termed Human Robot Team Trust (HRTT). It treats the human as a resource with specific abilities that can gain or lose trust with training, and the robots as resources with known performance attributes included as needs in the design phase of the collaborative process.
Due to the constantly increasing need for robots to interact with, collaborate with, and assist humans, Human-Robot Interaction (HRI) poses new challenges, such as safety, autonomy, and acceptance issues <cit.>. From a robot needs perspective <cit.>, the robot first needs to guarantee human security and health, such as avoiding collisions with humans and protecting them from risks. Moreover, for the higher-level teaming needs, robots should consider human team members' specialties and capabilities to automatically form corresponding heterogeneous human-robot teams adapted to specific missions <cit.>. Furthermore, efficient and reliable assistance is essential throughout the entire mission process in HRI. More importantly, designing an interruption mechanism can help humans interrupt robots' current actions and re-organize them to fulfill specific emergency tasks or execute crucial operations manually.
Humans also expect robots to provide safety and a stable working environment in missions. <cit.> supports the utility of people's attitudes and anxiety towards robots in explaining and predicting behavior in human-robot interaction and suggests further investigation of which aspects of robots evoke what type of emotions and how this influences the overall evaluation of robots. From the psychophysiology perspective, <cit.> combines subjective research methods, such as behavioral and self-report measurements, to achieve a more complete and reliable knowledge of the person's experience; for example, when people interact with AI agents such as robots, they will be affected by mood and social desirability on a specific issue. However, psychophysiological techniques face different challenges in data acquisition and interpretation. Moreover, the monitoring quality is influenced both by confounding environmental factors, such as noise or lighting, and by the individual's internal psychological state during the evaluation, which might undermine the reliability and the correct interpretation of the gathered data <cit.>.
From another angle, recognizing the potential utility of human affect (e.g., via facial expressions, voice, gestures, etc.) and responding to humans is essential in a human-robot team's collaborative tasks, which would make robots more believable to human observers <cit.>. <cit.> proposed an architecture including natural language processing, higher-level deliberative functions, and implementing “joint intention theory”. <cit.> demonstrated that expressing affect and responding to human affect with affect expressions can significantly improve team performance in a joint human-robot task.
Multi-agent HRI systems require more complex control strategies to coordinate several agents that may be dissimilar to one another (in roles, communication modes, etc.), can involve interactions that connect more than two agents at once, and need special attention to address any conflicts that may arise from their interactions <cit.>. Specifically, the human gives high-level commands to the robot group or controls robots individually through a screen-based interface, and the robots need to adapt to the operator's actions or optimize some utility function (computational characteristics such as task requirements, teaming needs, etc.) (see Fig. <ref>).
Moreover, building shared mental models (SMMs) enables team members to draw on their own well-structured common knowledge to select actions that are consistent and coordinated with those of their teammates, which is strongly correlated with team performance <cit.>. <cit.> designs human-robot cross-training, inspired by SMMs, to compute a robot policy aligned with human preferences (needs) by iteratively switching roles between humans and robots to learn a shared plan for a collaborative task. Furthermore, considering that humans infer the robot's capabilities and partially adapt to the robot, <cit.> presents a game-theoretic model of human partial adaptation to the robot, where the human responds to the robot's actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot's capabilities.
In HRI, adaptability is a crucial property provided by the SMM; it intrinsically relates to performance and can be objectively measured by treating an adaptable controller as an independent variable to compare alongside other controllers <cit.>. Especially in complex, uncertain, and dynamically changing environments, robots need to adapt efficiently to humans' behaviors and strategies, which change with their frequently changing needs and motivations. Additionally, <cit.> showed that a robot adapting to the differences in humans' preferences and needs is positively correlated with trust between them. Moreover, the SMM improves trust and reliability between humans and robots by alleviating uncertainty in roles, responsibilities, and capabilities in the interactions. By integrating research on trust in automation and describing the dynamics of trust, the role of context, and the influence of display characteristics, <cit.> argues that trust involves humans' experience and knowledge of the needs, motivation, functional processing, and performance of robots <cit.>, such as whether robots' capabilities meet the expectations of humans and tasks <cit.>, together with minimizing system faults and ensuring system predictability and transparency <cit.>.
Generally speaking, by formalizing the motivations and needs of humans and robots and describing their complex, diverse, and dynamic relationships on a common ground such as trust, utility theory provides a unified model to effectively evaluate the performance of the interaction between humans and AI agents and makes a fundamental contribution to safety in the interaction.
§ INSIGHTS AND CHALLENGES
Currently, with the tremendous growth in AI technology, robotics, IoT, and high-speed wireless sensor networks (such as 5G), an artificial ecosystem, termed artificial social systems, is gradually forming that encompasses AI agents ranging from software entities to hardware devices <cit.>. How to integrate artificial social systems into human society and coexist harmoniously is a critical issue for the sustainable development of human beings, with applications such as assistive and healthcare robotics <cit.>, social path planning and navigation <cit.>, search and rescue <cit.>, and autonomous driving <cit.>. In particular, the future factory is likely to utilize robots for a much broader range of tasks in a much more collaborative manner with humans, which intrinsically requires operation in proximity to humans, raising safety and efficiency issues <cit.>.
At this point, building a robust, stable, and reliable trust network between humans and AI agents to evaluate agents' performance and status in a common ground is the pre-condition for efficient and safe interaction in their collaboration. Here, utility is the key concept representing the individual agent's innate values, needs and motivations over states of the world and is described as expected rewards. It measures sensitivity to the impact of actions on trust and long-term cooperation and is efficient enough to allow AI agents like robots to make real-time decisions <cit.>. For example, self-driving cars might be the first widespread case of trustworthy robots designed to earn trust by demonstrating how well they follow social norms.
From the learning perspective, the individual learning model can be regarded as constructing models of the other agents, which takes some portion of the observed interaction history as input, and returns a prediction of some property of interest regarding the modeled agent <cit.>.
Taking human society as an example, human beings have more complex and diversified needs, such as personal security, health, friendship, love, respect, recognition, and self-actualization. These diverse needs can be expressed as corresponding rewards or utility mechanisms driving people to adopt various strategies and skills to achieve their requirements. In particular, people's multi-layered and diverse needs are the foundation for describing their complex and dynamic relationships, such as trust.
Similarly, it is essential to design a general and standard utility mechanism that integrates humans' and AI agents' needs and to learn to efficiently build reliable and stable trust relationships between agents and humans <cit.>. In particular, a suitable knowledge graph combined with efficient DRL can help agents learn to adopt appropriate behaviors or strategies that benefit the group utilities, adapt to human members' needs, and guarantee their development in various situations <cit.>. Another interesting direction is behavior manipulation, known as policy elicitation, which refers to problems in human-robot collaboration wherein agents must guide humans toward an optimal policy to fulfill a task through implicit or explicit communication <cit.>. For example, it is used to achieve effective personalized learning in AI agents' teaching and coaching <cit.>.
For future MAS research, AI agents that learn and adapt to human needs and maintain trust and rapport among agents are critical for improving task efficiency and safety <cit.>. We list potential directions as follows:
* Adopting suitable formations to perceive and survey environments, predict threats, and warn human team members.
* Adapting path planning reasonably in various scenarios to avoid collisions, guarantee human security, and decrease interference with the human working environment.
* Combining the specific capabilities and needs of agents and humans to calculate sensible strategies that efficiently organize the entire group's collaboration and fulfill the corresponding mission.
As discussed above, Agent Needs Hierarchy is the foundation for MAS learning from interaction <cit.>. It surveys the system’s utility from individual needs. Balancing the needs and rewards between agents and groups for MAS through interaction and adaptation in cooperation optimizes the system’s utility and guarantees sustainable development for each group member, much like human society does. Furthermore, by designing suitable reward mechanisms and developing efficient DRL methods, MAS can effectively represent various group behaviors, skills, and strategies to adapt to uncertain, adversarial, and dynamically changing environments <cit.>.
§ CONCLUSION
This paper proposes a utility-orient needs paradigm based on agents' needs to describe and evaluate inter and outer relationships among agents' interactions in robotics and AI. We review existing literature illustrating the application of utility theory as a unified concept from the perspective of single-agent, MAS, agent trust, and HRI. Then, we discuss the insight and open problems for future research based on utility theory. We anticipate that this comprehensive overview of the role of the utility-orient needs paradigm based on agents' needs in robotics and AI systems will support future research that leverages, compares, or constructs interactions for artificial social systems.
|
http://arxiv.org/abs/2306.04207v1
|
20230607072527
|
Resource Aware Clustering for Tackling the Heterogeneity of Participants in Federated Learning
|
[
"Rahul Mishra",
"Hari Prabhat Gupta",
"Garvit Banga"
] |
cs.DC
|
[
"cs.DC"
] |
Resource Aware Clustering for Tackling the Heterogeneity of Participants in Federated Learning
Rahul Mishra, Member IEEE, Hari Prabhat Gupta, Senior Member IEEE, and Garvit Banga
Rahul Mishra is with the Department of Information and Communication Technology, Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), Gandhinagar, Gujarat,
e-mail: [email protected]
Hari Prabhat Gupta is with the Department of Computer Science and Engineering, Institute of Technology (BHU) Varanasi, India,
e-mail: [email protected]
Garvit Banga is with the Department of Metallurgical Engineering, Indian Institute of Technology (BHU) Varanasi, India,
e-mail: [email protected]
Federated Learning is a training framework that enables multiple participants to collaboratively train a shared model while preserving data privacy and minimizing communication overhead. The heterogeneity of the participants' devices and networking resources delays training and aggregation in federated learning. This paper proposes a federated learning approach that mitigates the heterogeneity among the participants using resource aware clustering. The approach begins with the server gathering information about the devices and networking resources of participants, after which resource aware clustering is performed to determine the optimal number of clusters using Dunn Indices. The mechanism of participant assignment is then introduced, and the expression for the communication rounds required for model convergence in each cluster is mathematically derived. Furthermore, a master-slave technique is introduced to improve the performance of the lightweight models in the clusters using knowledge distillation. Finally, experimental evaluations are conducted to verify the feasibility and effectiveness of the approach and to compare it with state-of-the-art techniques.
Federated learning, Heterogeneity, Master-slave technique, Resource aware clustering.
§ INTRODUCTION
Federated Learning (FL) is a newly emerging paradigm that enables a distributed training framework where data collection and model training occur locally for each participant. Thus, it preserves data privacy and reduces communication overhead of transmitting data to the server <cit.>. Unlike traditional distributed training frameworks that require consensus after each local iteration, either through server or peer communication, FL minimizes the frequency of consensus among distributed participants. FL is initiated by the central server, which broadcasts a randomly initialized model to all participants. Each participant trains the received model using their local dataset and sends the Weight Parameter Matrices (WPM) to the server. The server then aggregates the WPM received from multiple participants and sends back the aggregated one, generating a robust and generalized model for each participant <cit.>.
FL participants exhibit significant heterogeneity in terms of devices and networking resources, including processing speed, available memory, and data transmission rate. Each participant uses its resources to load the model and train it locally. The availability of device resources among participants depends on their respective configurations and installed services, leading to irregular intervals between WPM generation. Furthermore, the data transmission rate affects the time required to upload WPM from participants to the server. Consequently, participant heterogeneity hinders the simultaneous transmission and aggregation of WPM. In other words, slower participants (i.e., stragglers) delay the entire training process. The server can mitigate this issue by setting a Maximum Allowable Response (MAR) time for training to minimize the delay caused by stragglers. However, using a fixed MAR time can result in inadequate training due to a reduced number of local updates across communication rounds on stragglers.
Previous research on FL has addressed the issue of heterogeneity among participants by excluding stragglers from the training process <cit.>. However, removing stragglers from the training process deprives the system of their valuable datasets, which in turn reduces the model's generalization ability. Additionally, it also prevents the potential performance improvement of these stragglers through FL. Cluster-based techniques have been proposed in prior studies to address the heterogeneity among participants in FL. These techniques utilize the relationship between local datasets <cit.>, the similarity of local updates <cit.>, and social relationships between participants <cit.> to form clusters. Nevertheless, these studies did not consider the devices and networking resources of participants during clustering. In their work <cit.>, the authors pointed out the issue of heterogeneous devices in FL that restricts the size of the global model to accommodate stragglers. Similarly, in <cit.>, the authors proposed a technique called HeteroFL to handle variations in computational and communication resources by generating multiple sized models and selecting the best one for each participant. Despite these benefits, neither <cit.> nor <cit.> has addressed the issue of improving the performance of the lightweight models used by participants with limited resources. Additionally, earlier works <cit.> employed Knowledge Distillation (KD) to enhance the performance of a lightweight model by leveraging insights from a large-sized one; however, these methods were confined to centralized training.
This paper presents a novel approach called Fed-RAC (short for Federated learning with Resource Aware Clustering) to address the negative impact of participant heterogeneity in Federated Learning. We investigate the effect of participant heterogeneity and determine an expression for the required communication rounds per cluster. Fed-RAC is also designed to estimate the error caused by inconsistent objective functions in the presence of heterogeneous devices and networking resources. In particular, we focus on investigating the following problem: "How can we achieve satisfactory performance while training local models on heterogeneous participants in FL within the given MAR?" To this end, the major contributions and novelty of this work are as follows:
∙ Resource aware clustering: The first contribution is to conduct resource-aware clustering for identifying the most suitable number of clusters based on the devices and networking resources available to the participants. The server first gathers information regarding the processing speed, data transmission rate, and available memory of all participants to create resource vectors. These vectors are then subjected to unit-based normalization to bring their values within the range of [0,1]. To determine the optimal number of clusters, the server calculates the Dunn Indices <cit.> among the normalized resource vectors of all participants.
∙ Participants assignment to the clusters: The next contribution is the allocation of participants to the identified clusters, ensuring that the model training within each cluster is performed within a specified maximum allowable response time and communication rounds. Additionally, a mathematical analysis is carried out to derive the expression for the communication round and error caused by an inconsistent objective function in the presence of heterogeneous participants.
∙ Master-slave technique: Further, our approach introduces the master-slave technique to enhance the performance of the generic model in low-configuration clusters (slaves) by leveraging the model of the highest configuration cluster (master). In this technique, the master model is initially trained, and then it guides the training of slave models using knowledge distillation to improve their performance.
∙ Experimental validation: In the end, we conduct experimental evaluations to confirm the effectiveness of the Fed-RAC approach. We validate our proposed method by comparing it with existing baseline techniques <cit.>, using various evaluation metrics and established datasets <cit.>. The results demonstrate that the proposed approach achieves better performance in the presence of heterogeneous participants.
Paper Organization: Section <ref> provides an overview of the related literature. Section <ref> outlines the preliminary information and problem statement of our proposed approach. Section <ref> details the Fed-RAC approach. Section <ref> evaluates the performance of our approach, while Section <ref> presents the discussion and future directions for this work. Finally, Section <ref> concludes the paper.
§ BACKGROUND AND MOTIVATION
In this section, we provide a description of prior studies that focus on the heterogeneity of participants, clustering in FL, and knowledge distillation to enhance performance.
∙ Heterogeneous participants in FL: FL involves a significant number of participant devices with varying resources, leading to degraded performance and increased convergence time when running the same model on all participants <cit.>. In <cit.>, the authors proposed a system that selects participants for global aggregation and simultaneously generates WPM. The system discards straggling participants from the aggregation. To account for the slower computational speed of stragglers, the authors in <cit.> proposed reducing the CPU frequency of faster participants in the federation. The authors in <cit.> identified the problem of heterogeneous devices in FL, which limits the size of the global model to accommodate low-resource or slow participants; they proposed FjORD, a dynamically adaptive approach to model size based on ordered dropout. In <cit.>, the authors presented a specialized technique called Oort, which prioritizes participant selection in FL. The authors in <cit.> introduced a framework called FedProx to handle the issue of data/task heterogeneity in FL. FedProx used a proximal term to minimize the impact of local updates.
In previous studies, various mechanisms have been proposed to address the issue of stragglers in FL, including asynchronous <cit.> and semi-synchronous <cit.> global update approaches. The authors in <cit.> introduced an asynchronous algorithm to optimize the FL-based training for stragglers. The algorithm solved the local regularization to ensure convergence in finite time and performed a weighted average to update the global model. Similarly, the authors in <cit.> introduced the mechanism of asynchronous learning and weighted temporal aggregation on participants and server, respectively. To overcome the problem of higher waiting time in the asynchronous global updates, the authors in <cit.> introduced the semi-asynchronous mechanism, where the server aggregates the weight parameters from a set of participants as per their arrival order in each communication round.
∙ Clustering in FL: Prior studies utilized the relationship between local datasets <cit.>, the similarity of local updates <cit.>, and the social relationship between the participants <cit.> to form clusters in FL. The authors in <cit.> exploited the intrinsic relationship between the local datasets of multiple participants and proposed a similarity-aware system, namely ClusterFL. The system generated various clusters based on the similarity among local datasets. Similar to <cit.>, the authors in <cit.> created various groups of participants as per the similarity among their local datasets. The group formation led to a minor loss over all the participants and provided communication efficiency. In <cit.>, the authors introduced a modified FL approach, where hierarchical clustering is performed as per the similarity of local updates. The authors in <cit.> introduced a technique to handle variation in computational and communication resources, which they named HeteroFL.
∙ KD based performance improvement: The existing literature introduced various techniques to improve the performance of the lightweight model using a large-size model via KD <cit.>. The concept of KD was first introduced by the authors in <cit.>, where the knowledge of a large-size model (teacher) is utilized to improve the performance of the lightweight model (student). The authors in <cit.> proposed the concept of simultaneous training of scratch teacher and student, which provided a soft target of logits to estimate the distillation loss. Finally, the authors in <cit.> introduced the concept of pre-trained teacher and scratch teacher-guided KD technique to improve the performance of student.
Motivation: We observed the following limitations in the existing literature. Prior studies discarded stragglers from the training to cope with the heterogeneity of resources among participants in FL <cit.>. When the stragglers are discarded, their available local datasets are not utilized during training, which reduces the generalization ability of all the participants. In addition, discarding slow participants hampers their own performance improvement via FL. Reducing the processing power of the participant device during model training slows down the aggregation process <cit.>. The asynchronous federated learning mechanisms <cit.> demand the server to wait for stragglers, leading to significant waiting time. The semi-asynchronous global aggregation mechanism <cit.> is more effective than the synchronous one, but it discards some participants in each communication round. Suppressing the communication rounds for aggregation <cit.> also increases the staleness of models at the participants. The existing work exploited clustering in FL but did not consider the devices and networking resources during clustering <cit.>. The prior studies <cit.> helped in improving the performance of the lightweight model using knowledge from the large-size model. However, these techniques were limited to centralized training.
In summary, the existing FL approaches avoid straggler devices during aggregation at the central server. The asynchronous global update leads to a higher waiting time at the server. The existing clustering mechanisms in FL did not consider the resources of the participants, such as memory, processing speed, and communication channel. Additionally, the existing work on KD to improve the performance of lightweight models is limited to centralized training.
§ PRELIMINARIES AND PROBLEM STATEMENT
This section describes the terminologies and notations, followed by the problem of the heterogeneous participants.
§.§ Preliminaries
This work considers a set 𝒫 of N participants and a central server, where 𝒫={p_1,⋯,p_N}. We consider a multi-class classification problem with a set Q of c classes, i.e., Q={1,⋯, c}. Each participant p_i has a local dataset 𝒟_i with n_i instances and the set of Q classes, where 1≤ i ≤ N. Let (𝐱_ij, y_ij) denote an instance of dataset 𝒟_i, where 1≤ j ≤ n_i. During training, the model on participant p_i learns the mapping between 𝐱_ij and y_ij, ∀ j∈{1≤ j ≤ n_i}, to build a classifier Π_i. The classifier recognizes the class labels of unseen instances during testing. Let B_i denote the batch size used for training the model on p_i. Further, let τ_i represent the number of Stochastic Gradient Descent (SGD) operations performed in one round of training on p_i. τ_i is estimated as: τ_i=⌊ E n_i /B_i⌋, where E is the number of local epochs of training on p_i. We can change B_i and n_i to change τ_i.
§.§ FL with heterogeneous participants
FL begins with the generation and random initialization of a model at the central server that further broadcasts the initialized model to all the participants. Each participant p_i receives and trains the model using local dataset 𝒟_i with n_i instances, where 1≤ i ≤ N. p_i performs training for E number of local epochs on a batch size of B_i over n_i instances using SGD operations τ_i. The participant minimizes the local loss function ℒ_i(𝐰_i), where 𝐰_i is the WPM of p_i. ℒ_i(𝐰_i) is estimated as: ℒ_i(𝐰_i)=1/n_i∑_j← 1^n_iℒ_ij(𝐰_ij), where 𝐰_ij∈𝐰_i, 1≤ j ≤ n_i, and 1≤ i ≤ N. The participant transfers estimated ℒ_i(·) and 𝐰_i to the server for global aggregation. Upon receiving local loss and WPM from all the participants, the server estimates global loss (ℒ(𝐰)) and WPM (𝐰) as:
ℒ(𝐰)=∑_i← 1^N(n_i/(n_1+⋯+n_N))ℒ_i(𝐰_i), 𝐰=∑_i← 1^N (n_i/(n_1+⋯+n_N))𝐰_i.
The server broadcasts 𝐰 for the next round of training. The process of local training and aggregation is orchestrated for ℛ iterations to achieve a trained model for all the participants. At each global iteration t∈ℛ, the local loss function and WPM are denoted as ℒ_i^t(𝐰_i^t) and 𝐰_i^t for participant p_i, respectively, where 1≤ i ≤ N. 𝐰_i^t of participant p_i at global iteration t (t∈ℛ) is updated as: 𝐰_i^t = 𝐰_i^t-1-η∇ℒ_i^t(𝐰_i^t-1), where η is the learning rate. Using the above equation, we can define the objective function of FL as follows:
min_𝐰^ℛℒ(𝐰^ℛ)=∑_t← 1^ℛ∑_i← 1^N(n_i/(n_1+⋯+n_N))ℒ_i^t(𝐰_i^t).
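To make the aggregation step concrete, the following is a minimal sketch in Python of the weighted averaging of local losses and WPM described above; the function names and the representation of a WPM as a list of NumPy arrays are illustrative assumptions, not part of the original formulation.

import numpy as np

def aggregate_wpm(local_wpms, sample_counts):
    # w = sum_i (n_i / (n_1 + ... + n_N)) * w_i, applied layer by layer
    total = float(sum(sample_counts))
    aggregated = [np.zeros_like(layer) for layer in local_wpms[0]]
    for wpm, n_i in zip(local_wpms, sample_counts):
        for k, layer in enumerate(wpm):
            aggregated[k] += (n_i / total) * layer
    return aggregated

def aggregate_loss(local_losses, sample_counts):
    # L(w) = sum_i (n_i / (n_1 + ... + n_N)) * L_i(w_i)
    total = float(sum(sample_counts))
    return sum((n_i / total) * l_i for l_i, n_i in zip(local_losses, sample_counts))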
§.§.§ Heterogeneous participants
The heterogeneous participants in FL require non-identical training and communication time. Let T_i denote the training and communication time of p_i, estimated as: T_i = T_i^a · E + T_i^c, ∀ i ∈{1,2,⋯ N}, where T_i^a is the training time for one local epoch, and T_i^c is the per-round communication time for sharing WPM from p_i to the server. The participants train the local model and communicate WPM in parallel. Thus, for each iteration t∈ℛ, the training and communication time T^t depends on the slowest participant, where T^t=max_1≤ i ≤ N{T_i}. We obtain the total training time, denoted as 𝕋(N,E,ℛ), as:
𝕋(N,E,ℛ) = ∑_t← 1^ℛ T^t = ∑_t← 1^ℛmax_1≤ i ≤ N{T_i}.
§.§.§ Objective inconsistency
The server has a fixed MAR time to complete the global iterations, which reduces the training delay due to slow processing and communication of stragglers.
It also minimizes the idle time of faster participants. However, the number of local SGD operations varies across heterogeneous participants within the fixed MAR time. The faster participants perform more local updates than the stragglers. In addition, the number of local updates on the participants also varies across the communication rounds. The objective function of FL given in (<ref>) relies on the assumption that the number of local updates, τ_i for p_i ∀ i∈{1,2,⋯, N}, remains the same for all participants (τ_i=τ). However, the variation in local updates on the heterogeneous participants results in an inconsistent objective function for FL <cit.>. Let ℒ̅(𝐰̅^ℛ) denote the inconsistent objective function, where 𝐰̅^ℛ is the aggregated WPM generated after ℛ global iterations. The error (err) between the actual and inconsistent objective functions is defined as: err=|ℒ̅(𝐰̅^ℛ)-ℒ(𝐰^ℛ)|. Thus, this error must be minimized to mitigate the negative impact of heterogeneous participants.
Let p_1-p_10 denote 10 participants in FL. In the absence of information about available resources, we can assume homogeneous participants with an equal number of data instances to estimate their local loss functions. It implies ℒ_1(𝐰_1)=ℒ_2(𝐰_2)=⋯=ℒ_10(𝐰_10)=l_1. Let l_1=0.027; thus, the objective function given in (<ref>) attains the value of ℒ(𝐰^ℛ)=0.027. However, the participants may be heterogeneous; thus, the inconsistent objective function may obtain 0.036, which implies err=0.009.
§.§ Problem statement and solution overview
The fundamental challenges encountered while developing an FL approach to mitigate the heterogeneity are: 1) how to reduce the training and communication time of the stragglers in FL?, 2) how to achieve adequate performance within the fixed time interval for communication?, and 3) how to minimize the error gap between actual and inconsistent objective functions due to heterogeneous participants? In this work, we investigate and solve the problem of training the local model on all the heterogeneous participants within a given maximum allowable response time, achieving adequate performance and minimizing error due to inconsistent objective function.
Apart from the standard FL workflow, the Fed-RAC trains the local models on all the participants despite higher heterogeneity and reduces training time without compromising performance. Fed-RAC starts with the estimation of the optimal number of clusters to accommodate all N heterogeneous participants. We named the step as resource aware clustering (Section <ref>). During clustering, a set 𝒦 of k clusters is first identified (Section <ref>), followed by the generation of a generic model for each cluster (Section <ref>). Next, the participants are assigned to the empty clusters using participant assignment mechanism (Section <ref>). Further, we introduce master-slave technique (Section <ref>) to enhance the performance of the generic models using KD.
§ FED-RAC: FEDERATED LEARNING VIA RESOURCE AWARE CLUSTERING
In this section, we first cover the details of the Federated learning approach to mitigate the heterogeneity of participants using Resource Aware Clustering (Fed-RAC). The workflow of the Fed-RAC is shown in Fig. <ref>.
§.§ Resource aware clustering
This sub-section describes the mechanism of dividing the set of N participants into k disjoint clusters. The clustering is performed on the server to preserve the resources of the participants. In doing so, the server fetches three resources from all the participants, i.e., processing speed, data transmission rate, and available memory, denoted as s_i, r_i, and a_i for p_i (1≤ i ≤ N), respectively. s_i and a_i are machine-dependent parameters that rely upon the configuration of the devices. The data transmission rate r_i depends on the bandwidth, channel coefficient, and path loss between participant and server and is estimated using the technique discussed in <cit.>. The static information of s_i, r_i, and a_i from the participants is used to initialize the Fed-RAC approach. Afterward, the approach provides the opportunity to upgrade or downgrade a participant's cluster depending on its available dynamic resources. If a participant is in the smallest cluster and its resources are dynamically reduced, then Fed-RAC adjusts the batch size and local epochs to continue the training, as discussed in Section <ref>. This implies that Fed-RAC can easily tackle the dynamic resources of the participants in FL.
All the participants of a cluster possess similar processing speed, transmission rate, and memory. However, it is tedious to determine the similarity among the three independent resources. Thus, we use a vector v_i=[s_i,r_i,a_i] for participant p_i (1≤ i ≤ N) to estimate the similarity among resources. We use the normalized vector v̅_̅i̅=[s̅_̅i̅,r̅_̅i̅,a̅_̅i̅] in place of v_i to eliminate the bias of high values. The normalized value s̅_̅i̅ is estimated as: s̅_̅i̅=(s_i-min{s_i}_i=1^N)/(max{s_i}_i=1^N-min{s_i}_i=1^N); r̅_̅i̅ and a̅_̅i̅ are estimated similarly. We further estimate the similarity (𝒮_ij) between any two participants p_i and p_j using the normalized vectors v̅_̅i̅ and v̅_̅j̅, ∀ i, j ∈{1,2,⋯, N}, via the weighted Euclidean distance: 𝒮_ij=√(λ_1(s̅_̅i̅-s̅_̅j̅)^2+λ_2(r̅_̅i̅-r̅_̅j̅)^2+λ_3(a̅_̅i̅-a̅_̅j̅)^2), where λ_1, λ_2, and λ_3 are the contributions of processing capacity, transmission rate, and memory, respectively, with λ_1+λ_2+λ_3=1. λ_1, λ_2, and λ_3 can be obtained from the analysis given in <cit.>.
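A minimal sketch of the unit-based normalization and the weighted similarity 𝒮_ij, assuming each resource vector has the form [s_i, r_i, a_i]; the function names and the default equal λ values are illustrative.

import numpy as np

def normalize_resources(vectors):
    # Unit-based (min-max) normalization of [s_i, r_i, a_i] into [0, 1]
    v = np.asarray(vectors, dtype=float)          # shape (N, 3)
    v_min, v_max = v.min(axis=0), v.max(axis=0)
    return (v - v_min) / (v_max - v_min)

def similarity(v_bar_i, v_bar_j, lambdas=(1/3, 1/3, 1/3)):
    # Weighted Euclidean distance between two normalized resource vectors
    lam = np.asarray(lambdas, dtype=float)
    diff = np.asarray(v_bar_i, dtype=float) - np.asarray(v_bar_j, dtype=float)
    return float(np.sqrt(np.sum(lam * diff ** 2)))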
§.§.§ Estimating optimal number of cluster k
We introduce a modified version of the conventional Dunn and Dunn-like Indices <cit.> to estimate the optimal number of clusters using the similarity. We use k-means clustering to determine the optimal number of clusters. The Dunn index identifies an optimal number of clusters that are compact and well separated. Let C_f and C_g denote clusters in 𝒦 (C_f, C_g ∈{C_1, ⋯, C_k}, C_f ≠ C_g). The least distance 𝐝𝐢𝐬𝐭(C_f, C_g) between C_f and C_g is given as:
𝐝𝐢𝐬𝐭(C_f,C_g) = min_p_i ∈ C_f, p_j ∈ C_g, C_f ≠ C_g 𝒮_ij.
The diameter 𝐝𝐢𝐚(C_f) of cluster C_f∈{C_1, ⋯, C_k} is the maximum distance between any two participants in C_f. Let p_l^f and p_q^f be two participants in C_f (p_l^f ≠ p_q^f); 𝐝𝐢𝐚(C_f) is estimated as:
𝐝𝐢𝐚(C_f) = max_p_l^f, p_q^f ∈ C_f, p_l^f ≠ p_q^f 𝒮^f_lq.
Using Equations <ref> and <ref>, we estimate Dunn Index (DI(k)) as:
DI(k) = min_∀ C_f ∈𝒦[min_∀ C_g ∈𝒦, C_f C_g(𝐝𝐢𝐬𝐭(C_f,C_g)/max_∀ C_f ∈𝒦𝐝𝐢𝐚(C_f))].
A high positive value of DI(·) indicates compact and well-separated clusters. The divergence-based Dunn and Dunn-like Indices procedure starts with k=2 and selects the k at which DI(·) attains its highest positive value. We use the maximum number of clusters k_max≤√(N) as a rule of thumb, inspired by <cit.>. The complete steps to obtain the optimal number of clusters are given in Procedure 1.
Let there are 10 participants denoted as p_1, ⋯, p_10. The resource and normalized vectors of the example are shown in Table <ref>. Using Procedure 1 with λ_1=λ_2=λ_3=1/3, we obtain k=3 as optimal clusters.
Procedure 1: Optimal number of clusters
Input: Set of N participants 𝒫 in FL
Output: Optimal set of k clusters 𝒦={C_1, C_2, ⋯, C_k}
Initialization: j←0, C_s← [ ], 𝒦={}, k← 2; /*starting with 2 clusters*/
for each participant p_i ∈{p_1, p_2,⋯ p_N} do
    Server extracts information of s_i, r_i, and a_i from p_i
    Estimate resource vector v_i
for each participant p_i ∈{p_1, p_2,⋯ p_N} do
    Estimate s̅_̅i̅, r̅_̅i̅, and a̅_̅i̅ for p_i
    Determine normalized resource vector v̅_̅i̅
while k ≤√(N) do
    Perform k-means clustering on set 𝒫
    Estimate similarity among the normalized vectors
    Preserve information about the k clusters
    for each pair C_f, C_g ∈{C_1, C_2, ⋯, C_k}, C_f ≠ C_g do
        Estimate Dunn index (DI(k)) using (<ref>)
    C_s← append(DI(k)); k ← k+1
j← argmax(C_s); k← j+1 /*optimal number of clusters, since the first entry of C_s corresponds to k=2*/
return 𝒦={C_1, C_2, ⋯, C_k}
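A sketch of Procedure 1 in Python, assuming scikit-learn and SciPy are available; for simplicity it uses plain Euclidean distances on the normalized vectors (i.e., equal λ contributions; the features can be pre-scaled by √λ to reproduce the weighted similarity).

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def dunn_index(points, labels):
    # Dunn index: minimum inter-cluster distance / maximum cluster diameter
    groups = [points[labels == c] for c in np.unique(labels)]
    inter = min(cdist(a, b).min() for i, a in enumerate(groups)
                for j, b in enumerate(groups) if i < j)
    diameter = max(cdist(g, g).max() for g in groups)
    return inter / diameter

def optimal_k(normalized_vectors):
    # Try k = 2 .. floor(sqrt(N)) and keep the k with the largest Dunn index
    X = np.asarray(normalized_vectors, dtype=float)
    k_max = int(np.sqrt(len(X)))
    scores = {}
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = dunn_index(X, labels)
    return max(scores, key=scores.get)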
Apart from k-means clustering, we also consider density-based clustering to obtain the optimal number of clusters using the normalized resource vectors. We use Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Ordering Points To Identify the Clustering Structure (OPTICS) <cit.> during the experiment. Table <ref> illustrates the DI values and accuracy for different k using k-means, DBSCAN, and OPTICS on the resource vectors discussed in Section <ref>. From the results in the table, we observe that for DBSCAN clustering, the DI value decreases with increasing k; thus, it predicts k=2 as the optimal number of clusters. However, the difference between the resources of the participants within a cluster is then high, which results in lower accuracy. Moreover, some participants with the fewest resources cannot accommodate a large-size model assigned to the cluster. We can draw similar observations for OPTICS, which gives k=3 as the optimal number of clusters. k-means clustering results in k=5 optimal clusters, where the inter-cluster and intra-cluster distances are high and low, respectively. This narrows down the gaps between the resources of the participants within a cluster. Thus, all the participants can easily accommodate the model assigned to their cluster. Such narrow gaps also prevent the bucket effect, where a large model is assigned to the participant with the smallest resources.
§.§.§ Generic model for each cluster and compaction of clusters
This work considers three resources, i.e., processing speed, data transmission rate, and available memory, to obtain k clusters. However, the cumulative resources are unequal among the clusters. Therefore, the size of the model would be non-identical across the clusters in FL. This work develops a generic model for each cluster and performs cluster compaction afterward. In doing so, we arrange the k clusters in descending order of their available resources. In other words, the participants in cluster C_1 can train a large-size model and quickly transfer WPM to the server, whereas C_k can train only the smallest model and requires the most time to share its WPM.
Let M denote the initial model generated and randomly initialized by the server. We assume M can be directly accommodated on C_1, i.e., training and communication can be performed within the given time. Let M_1 denote the model for C_1, where M=M_1. Beyond C_1, the other clusters require some compression to train the model and share WPM. Let M_2 denote the compressed version of M that can be deployed on the participants in C_2, consuming less training and communication time. Similarly, M_3-M_k are generated for the remaining k-2 clusters. In this work, we consider the model of any cluster C_i to be α times smaller than that of C_i-1, i.e., M_i=α M_i-1, where α<1. It implies M_k=α^k-1M_1=α^k-1M.
∙ Cluster compaction: The estimated k clusters and corresponding models suit the resources of the participants; however, higher compression of the model compromises performance. Thus, it is beneficial if all the participants can be accommodated in fewer than k clusters. However, naively reducing the number of clusters introduces the straggler effect, where slow participants cannot participate. To avoid the straggler effect, we merge some of the k clusters to obtain m clusters, where m<k.
§.§ Participants assignment to the clusters
This sub-section describes the mechanism of assigning the N participants to the m clusters. We first deduce the expression for the communication rounds required by the generic model in each of the m clusters. Next, we define the optimization error due to the heterogeneity of participants. Notably, Fed-RAC initially checks the possibility of assigning a participant to the highest cluster and then moves down the clusters according to the assignment criteria.
From Section <ref>, we have m different models M_1, M_2,⋯,M_m for clusters C_1, C_2, ⋯, C_m, respectively, where the size of models M_1>M_2>⋯>M_m and M_m=α^m-1M_1=α^m-1M. The server decides ℛ_1, ℛ_2, ⋯, ℛ_m communication rounds for training local models of the participants in clusters C_1, C_2, ⋯, C_m, respectively. We first determine the expression for communication rounds ℛ_f for cluster C_f, where 1≤ f≤ m.
§.§.§ Communication rounds per cluster
Let 𝒫_f denote the set of F participants to be assigned to C_f, where 𝒫_f={p_1,⋯, p_F}, having loss functions ℒ_1, ⋯, ℒ_F, respectively. We adopt the assumptions given in <cit.>, applied to ℒ_1, ⋯, ℒ_F, to estimate the rounds ℛ_f for cluster C_f, where 1≤ f ≤ m.
Loss ℒ_j ∈{ℒ_1, ℒ_2, ⋯ℒ_F} is L-smooth; therefore, for any two WPM 𝐰_a and 𝐰_b on p_j∈𝒫_f, following inequality holds: ℒ_j(𝐰_a)≤ℒ_j(𝐰_b) + (𝐰_a-𝐰_b)^T∇ℒ_j(𝐰_b)+ L/2𝐰_a-𝐰_b^2, where 1≤ j ≤ F.
ℒ_j is μ-strongly convex; the inequality holds: ℒ_j(𝐰_a)≥ℒ_j(𝐰_b) + (𝐰_a-𝐰_b)^T∇ℒ_j(𝐰_b)+ μ/2𝐰_a-𝐰_b^2.
Let ε_j^t denotes the uniformly and randomly selected sample from the local dataset 𝒟_j of participant p_j on communication round t, where 1≤ t ≤ℛ_f. Let ∇ℒ_j(ε_j^t, 𝐰_j^t) and ∇ℒ_j(𝐰_j^t) denote the gradients of loss function ℒ_j(·) on ε_j^t samples and entire samples of the local dataset, respectively. The variance of gradients on participant p_j is bounded as: 𝔼∇ℒ_j(ε_j^t, 𝐰_j^t)-∇ℒ_j(𝐰_j^t)^2 ≤σ_f^2.
The expected square norm of the loss gradient is uniformly bounded as: 𝔼∇ℒ_j(ε_j^t, 𝐰_j^t)^2 ≤ G^2_f, 1≤t≤ℛ_f and 1≤ j ≤ F.
Using Assumptions 1-4, we obtain a relation between the desired precision (q_o^f), the local epoch count E_f, and the global iterations ℛ_f of cluster C_f. The precision is defined as: q_o^f=𝔼[ℒ(𝐰^ℛ_f)]-ℒ_f^*, where 𝐰^ℛ_f is the aggregated weight at the final global epoch ℛ_f and ℒ_f^* is the minimum and unknown value of ℒ_f at the server. Let ℒ^*_j be the minimum value of ℒ_j at p_j, where ∀ j ∈{1≤ j ≤ F}. In this work, we assume i.i.d. datasets on the participants; thus, Γ=ℒ_f^*-∑_j=1^Fℒ_j^*=0, as given in <cit.>. Γ quantifies the degree of non-i.i.d.-ness and goes to zero in the i.i.d. case. Let ϵ_j denote the weight contribution of participant p_j ∈𝒫_f. Let β=max{8L/μ,E_f} and let T_f be the total number of SGD operations on a participant; then we obtain the following relation for the desired precision (q_o^f) of cluster C_f <cit.>:
𝔼[ℒ(𝐰^ℛ_f)]-ℒ^*_f ≤ L/(2μ^2(β+T_f-1)) (4B+μ^2β𝔼𝐰_1-𝐰^*_f^2),
where B = ∑_j=1^Fϵ_j^2 σ_f^2 + 8(E_f-1)^2 G_f^2. Using the upper bound of q_o^f and T_f=ℛ_fE_f, we obtain the number of communication rounds (ℛ_f) for cluster C_f (1≤ f ≤ m) as follows:
ℛ_f=1/E_f[L/2μ^2 q_o^f(4B+μ^2β𝔼𝐰_1-𝐰^*_f^2)+1-β].
From (<ref>), we have a fixed number of communication rounds ℛ_f for a given precision threshold q_o^f and local epochs E_f of cluster C_f, where 1 ≤ f ≤ m. In addition, we have E_f=B_j τ_j/n_j; this implies we can change the values of B_j, τ_j, and n_j in such a manner that E_f and ℛ_f remain fixed for p_j ∈𝒫_f while q_o^f changes. We set a threshold over q_o^f, denoted as δ_f for C_f.
Let μ=0.7, L=1.5, B=1, 𝔼𝐰_1-𝐰^*_f=0.08 and E_f=20 for cluster C_f. We obtain β=max{8×1.5/0.7,20}=20 using E_f, μ, and L. Further, we estimate ℛ_f=6 using (<ref>) and values of parameters given above.
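A small sketch that plugs the values of this worked example into (<ref>); the precision target q_o^f is not stated in the example, so the value used below is an assumption chosen to lie in the 0.01-0.05 range used later in the experiments.

import math

def communication_rounds(L, mu, B, E_f, w_gap_sq, q_of):
    # R_f = (1/E_f) [ L/(2 mu^2 q_of) (4B + mu^2 beta * w_gap_sq) + 1 - beta ]
    beta = max(8 * L / mu, E_f)
    inner = (L / (2 * mu ** 2 * q_of)) * (4 * B + mu ** 2 * beta * w_gap_sq) + 1 - beta
    return inner / E_f

# Values from the worked example; q_of = 0.05 is an assumed precision target.
r_f = communication_rounds(L=1.5, mu=0.7, B=1, E_f=20, w_gap_sq=0.08, q_of=0.05)
print(round(r_f))  # about 6 rounds, in line with the example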
§.§.§ Optimization error due to participants heterogeneity
Although clustering yields low intra-cluster and high inter-cluster heterogeneity, the participants in the set 𝒫_f assigned to cluster C_f still exhibit residual intra-cluster heterogeneity. Therefore, we obtain inconsistency in the objective function of a cluster, as discussed in Section <ref>, despite using an effective clustering mechanism. To estimate the value of the error err_f for cluster C_f, where C_f∈{C_1,C_2,⋯,C_m}, we use the assumptions given in <cit.>. The previous assumptions, i.e., Assumption <ref> and Assumption <ref>, remain the same for estimating err_f. However, we need to define a new assumption (Assumption <ref>) to calculate err_f.
Let {ϵ_1,ϵ_2,⋯,ϵ_F} denote a set of weighted contributions of the participants in set 𝒫_f of cluster C_f, where ∑_j=1^Fϵ_j=1 and C_f∈{C_1,⋯,C_m}. There exist two constants h_1 ≥ 1 and h_2 ≥ 0 such that ∑_j=1^Fϵ_j‖∇ℒ_j(𝐰_j)‖^2 ≤ h_1^2 ‖∑_j=1^Fϵ_j∇ℒ_j(𝐰_j)‖^2+h_2^2.
Using Assumptions <ref>, <ref>, and <ref>, we derive the expression for err_f of cluster C_f. In doing so, let 𝐨_j denote a non-negative vector that defines how the stochastic gradients are locally accumulated. For example, 𝐨_j=[1,⋯,1]∈ℝ^τ_j for FedAvg <cit.>. 𝐨_j_1 is the l1-norm of 𝐨_j, and [o_j,-1] is the last element of vector 𝐨_j.
τ_e=∑_j=1^Fτ_j/F, τ_j=⌊ E_f n_j /B_j⌋ and η is the learning rate, where 1≤ j ≤ F.
err_f =min_t∈ℛ_f𝔼[∇ℒ̅(𝐰̅^t)^2]
≤ 4b_1/ητ_e ℛ_f + 4 η L σ_f^2b_2/F + 6 η^2 L^2 σ_f^2 b_3 + 12 η^2 L^2 h^2_2 b_4,
where b_1=[ℒ̅(𝐰̅^0)-ℒ^*_f], b_2 = F τ_e ∑_j=1^Fϵ_j^2𝐨_j_2^2/𝐨_j_1^2,
b_3 = ∑_j=1^Fϵ_j (𝐨_j_2^2-[o_j,-1]^2), b_4 = max_j{𝐨_j_1(𝐨_j_1-[o_j,-1])}. A small err_f indicates lower intra-heterogeneity among the participants. We set error bound for each cluster, i.e., error err_f≤θ_f for C_f, where 1≤ f ≤ m and err_f≤θ_f.
§.§.§ Participants assignment
Fed-RAC assigns each participant to an optimal cluster per the available device and networking resources. Such assignment facilitates easier and faster (within MAR time) training and inference of the local model on each participant assigned to a specific cluster. In other words, each participant trains the local model in R_f communication rounds ((<ref>)) for cluster C_f, 1≤ f ≤ m. The assignment verifies two conditions: a) precision (<ref>) of cluster C_f must be less than the threshold (q_o^f≤δ_f) and b) optimization error (<ref>) err_f ≤θ_f. Further, we get two possible cases for assigning participants in each cluster:
∙ Case 1 (Cluster is empty): p_i is assigned to the empty cluster C_f if p_i can train the model M_f in the given epochs E_f and communication rounds ℛ_f. The local epoch count E_f can be high for a single participant, as only one communication round is required to train the model without multiple participants. In this case, only the condition q_o^f≤δ_f is verified, and the optimization error is zero because the heterogeneity constraint in (<ref>) vanishes with a single participant. If the participant is unable to train M_f within the MAR time and ℛ_f rounds, it uses the following two steps:
* p_i reduces τ_i and n_i, while satisfying q_o^f≤δ_f.
* If q_o^f≥δ_f for C_f then the participant switches to the lower cluster and repeats Step 1.
∙ Case 2 (Cluster is non-empty):
We initially estimate q_o^f using (<ref>). Upon adding p_i to C_f, q_o^f should remain below the threshold δ_f. Similar to Case 1, if p_i cannot train M_f within the MAR time, τ_i and n_i are adjusted until q_o^f ≤ δ_f; otherwise, the participant switches to the lower cluster. Next, the error (<ref>) is also estimated upon adding p_i to C_f. If the estimated err_f ≤θ_f, then p_i is added to C_f; otherwise, p_i switches to the lower cluster.
After successfully executing these two cases, N participants are assigned to the m clusters. The assigned participants achieve desired precision and optimization errors less than the corresponding thresholds. The server optimally allocates each participant to a specific cluster as per the resource, precision threshold, and error threshold. Procedure 2 summarizes the steps involved in assigning participants to the clusters.
Procedure 2: Participants assignment to the clusters
Input: Set of clusters {C_1,⋯, C_m} with generic models {M_1,⋯, M_m}; set of participants 𝒫={p_1,⋯, p_N}
Output: Optimal participants in each cluster C_f, ∀ 1≤ f ≤ m
Initialization: i← 1, f←1
for each participant p_i ∈{p_1, p_2,⋯ p_N} do
    for each cluster C_f ∈{C_1,C_2,⋯, C_m} do
        /*Case 1 for assigning participant*/
        if isEmpty(C_f)==True then
            if p_i can accommodate M_f then
                Check: Estimate precision q_o^f using (<ref>)
                if q_o^f ≤δ_f then Assign p_i to C_f
                else f← f+1 /*switch to the lower cluster*/
            else
                Reduce τ_i and n_i s.t. p_i can run M_f
                Goto Check
        /*Case 2 for assigning participant*/
        else
            if p_i can accommodate M_f then
                Check-I: Estimate precision q_o^f using (<ref>) and calculate error err_f using (<ref>)
                if q_o^f ≤δ_f and err_f ≤θ_f then Assign p_i to C_f
                else f← f+1 /*switch to the lower cluster*/
            else
                Reduce τ_i and n_i s.t. p_i can run M_f
                Goto Check-I
return Optimal participants in each cluster C_f, ∀ 1≤ f ≤ m
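The assignment logic of Procedure 2 can be sketched as follows; fits, precision, and error are placeholders for the MAR-time check and for the bounds in (<ref>) and (<ref>), and are assumptions of this sketch rather than functions defined in the paper.

def assign_participants(participants, num_clusters, fits, precision, error, delta, theta):
    # clusters[0] is C_1 (largest model); clusters are tried from largest to smallest
    clusters = [[] for _ in range(num_clusters)]
    for p in participants:
        for f, members in enumerate(clusters):
            if not fits(p, f):            # cannot train M_f within MAR time,
                continue                  # even after reducing tau_i and n_i
            candidate = members + [p]
            if precision(candidate, f) > delta[f]:
                continue                  # q_o^f exceeds its threshold
            if members and error(candidate, f) > theta[f]:
                continue                  # err_f exceeds its threshold (zero if empty)
            members.append(p)
            break
    return clusters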
§.§ Master-slave technique
This sub-section introduces the technique for improving the performance of the lightweight models M_2, ⋯, M_m in clusters {C_2, ⋯, C_m} using the generalization ability (or knowledge) of the large-size model M_1 in cluster C_1. We utilize the assumption that cluster C_1 is the fastest cluster and can accommodate the server's model without compression, i.e., M_1=M. We use the term master for M_1 and slave for models M_2-M_m, and thus name the technique master-slave. The technique involves KD <cit.> to improve the performance of a slave model using the trained master model. The MAR time (𝕋_max) for training the models on all N participants can be divided as: 𝕋_max=T_1+max{T_2,T_3,⋯,T_m}, where T_f is the MAR time for training M_f on the participants of C_f, 1≤ f ≤ m. Since C_m is the slowest cluster and C_1 is the fastest, we consider the following relation, analogous to that of the generic models: T_f-1=κ T_f, where 2≤ f ≤ m and κ<1. It implies T_1=κ^m-1T_m; then we obtain:
𝕋_max =κ^m-1T_m + max{κ^m-2T_m, κ^m-3T_m,⋯, T_m} =κ^m-1T_m+T_m = (κ^m-1+1)T_m.
In the special case where M_1 is the master of M_2, M_2 is the master of M_3, and so on, the FL-based training is performed sequentially for each cluster. In this case, 𝕋_max is given as:
𝕋_max =κ^m-1T_m + κ^m-2T_m + ⋯ + T_m ={κ^m-1 + κ^m-2 + ⋯ + 1}T_m =(1-κ^m)/(1-κ) T_m, where κ <1.
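Both MAR-time expressions above are straightforward to evaluate; a short sketch (the values passed in the usage line are illustrative, not measurements):

def mar_time_parallel(T_m, kappa, m):
    # Master trains first, then all slave clusters train in parallel
    return (kappa ** (m - 1) + 1) * T_m

def mar_time_sequential(T_m, kappa, m):
    # Clusters train one after another: geometric sum (1 - kappa^m) / (1 - kappa) * T_m
    return (1 - kappa ** m) / (1 - kappa) * T_m

print(mar_time_parallel(T_m=100.0, kappa=0.5, m=4))    # 112.5
print(mar_time_sequential(T_m=100.0, kappa=0.5, m=4))  # 187.5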
This work starts FL-based training from the fastest cluster C_1, which has adequate device and networking resources to train M_1. We train M_1 for E_1 local epochs on the participants of C_1 using ℛ_1 communication rounds. The logits of the trained M_1 are then supplied to all the remaining clusters to improve the performance of their generic models using KD. Algorithm <ref> summarizes the steps involved in Fed-RAC.
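A hedged sketch of the distillation objective used in the master-slave step, written with TensorFlow/Keras (the framework of the implementation); the temperature, the weighting alpha, and the exact loss form follow the common KD recipe of <cit.> and are assumptions of this sketch rather than values fixed by Fed-RAC.

import tensorflow as tf

def distillation_loss(student_logits, teacher_logits, labels, temperature=3.0, alpha=0.5):
    # Hard-label cross-entropy on the slave (student) model
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    # Soft-label term: cross-entropy between softened master and slave logits
    soft_teacher = tf.nn.softmax(teacher_logits / temperature)
    log_soft_student = tf.nn.log_softmax(student_logits / temperature)
    kd = -tf.reduce_sum(soft_teacher * log_soft_student, axis=-1) * temperature ** 2
    return tf.reduce_mean(alpha * ce + (1.0 - alpha) * kd)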
The Fed-RAC approach trains the lightweight models of the smaller clusters using the knowledge distillation (master-slave) technique. This may lead to biased learning because knowledge extracted from the data samples in the larger cluster is utilized more often than that from the smaller ones. To avoid such bias, we adopt a resampling and reweighting scheme. It resolves the trade-off between rarely and frequently chosen data instances for model training on the larger cluster. In other words, the participants of the largest cluster sample a nearly equal number of data instances for all the classes during training in each communication round.
§ PERFORMANCE EVALUATION
This section describes the tasks under study, datasets, and models, followed by the baselines. We consider different tasks, including Locomotion Mode Recognition (LMR), Human Activity Recognition (HAR), Handwritten Digit Recognition (HDR), and Image Classification (IC).
§.§ Datasets and models
This work uses four public datasets, including MNIST <cit.>, HAR <cit.>, CIFAR-10 <cit.>, and SHL <cit.>. These datasets were selected due to their free accessibility, real-life acquisition, and correct annotations. MNIST is a handwritten digit dataset containing 50000 training images of digits from 0-9. MNIST also has 10000 images for testing. HAR was collected using smartphone (Samsung Galaxy S II) sensors, including a tri-axial accelerometer and gyroscope. The dataset contains sensory instances of six different activities: walking, standing, lying, sitting, walking upstairs, and walking downstairs. CIFAR-10 comprises 60000 images of ten different classes. The dataset is balanced and correctly annotated with 6000 images for each class and contains 50000 images for training and 10000 for testing. The SHL <cit.> dataset was collected from the onboard sensors of HUAWEI Mate 9 smartphones to recognize the locomotion modes of the users.
We use a simplified arrangement of Convolutional Neural Networks (CNN) and Dense layers to obtain a model as: C(128)-C(64)-C(128)-C(256)-C(512)-D(classes_count), where C(X) indicates the convolutional layer with X filter units and D(Y) is the dense layer having Y neurons. We use ResNet-18 <cit.> and DeepZero <cit.> for training on CIFAR-10 and the SHL datasets, respectively.
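A sketch of this architecture with the Keras functional API, including a width factor alpha for deriving the compressed slave models; the kernel sizes, pooling, and activations are assumed details not specified above.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(input_shape, num_classes, alpha=1.0):
    # C(128)-C(64)-C(128)-C(256)-C(512)-D(num_classes); alpha < 1 shrinks conv widths
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (128, 64, 128, 256, 512):
        x = layers.Conv2D(max(1, int(alpha * filters)), kernel_size=3,
                          padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=2, padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

# e.g. a full-width master model for MNIST and a half-width slave (M_2 = 0.5 M_1)
master = build_model((28, 28, 1), num_classes=10, alpha=1.0)
slave = build_model((28, 28, 1), num_classes=10, alpha=0.5)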
§.§ Baselines
We considered the existing techniques <cit.> as baselines, namely HeteroFL <cit.>, FedProx <cit.>, FedAvg <cit.>, and Oort <cit.>, to evaluate and compare the performance. HeteroFL <cit.> partitioned the heterogeneous participants into various clusters depending on their different computational complexities. FedProx <cit.> handled the problem of heterogeneity by introducing a proximal term that minimized the impact of local updates and restricted such updates to stay close to the global model. FedAvg <cit.> is the benchmark and most classical FL technique. Oort <cit.> selected a set of participants that achieved adequate accuracy and quickly trained the model; it is a participant selection mechanism that cherry-picks participants.
§.§ System implementation
We implemented the Fed-RAC algorithm and procedures using the Python programming language. The considered models are implemented using the functional API of Keras to keep the implementation developer-friendly. Additionally, we reimplemented all the baselines to perform a fair comparison. During the experiment, we set the loss function to “categorical cross-entropy”, the batch size to 200, and the other settings as discussed in <cit.>. We set ℒ^* between 0.01-0.05 and the number of participants N=40. The local epochs vary over the datasets, i.e., E=1-5 for MNIST and HAR, while E=10-40 for CIFAR-10 and SHL. The communication rounds were set to 200 for all the clusters during the experiment. We varied the learning rate between 0.001 and 0.010 and set the proximal term in FedProx to 0.001. We only compress the convolutional layers to obtain the slave models. We use a compression (dropout) ratio of 0.5, i.e., M_2=0.5(M_1), M_3=0.5(M_2), etc., inspired by <cit.>.
§.§ Evaluation strategy
The primary motive of FL is to improve the local performance and generalization ability. We adopt these strategies:
(1) Local performance: It determines: how well the local model is trained on the dataset of the participants?
(2) Cluster performance: It estimates: how well the participants can improve the cluster-wise performance through the aggregation of WPM?
(3) Global performance: It is the simple average over cluster performance and helps to determine: how much deviation is observed in the cluster performance from the average value?
§.§ Evaluation metrics
We use standard metrics, including accuracy and F1-score, to evaluate the performance of Fed-RAC. We also introduce a new performance metric, namely “rounds-to-reach x%”. Let I(x%) denote the symbolic representation of this metric; I(x%) counts the number of iterations (or rounds) required to achieve a performance of x%. We finally use the leave-one-out-test metric, which trains the model on all class labels except for one randomly chosen class label.
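The rounds-to-reach metric can be computed directly from a per-round accuracy curve; a minimal sketch (the function name is illustrative):

def rounds_to_reach(accuracy_per_round, x):
    # I(x%): first communication round at which accuracy reaches x percent
    for r, acc in enumerate(accuracy_per_round, start=1):
        if acc >= x:
            return r
    return None  # target accuracy never reached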
§.§ Results
§.§.§ Impact of resource aware clustering
This experiment aims to assess the efficacy of resource-aware clustering. The resource vectors of the devices used in the experiment are shown in Table <ref>. The resource vector comprises processing capacity, transmission rate, and memory, and is obtained from a survey conducted on 128 smartphone users, with prior permission obtained from the relevant authorities. From this survey, we randomly select 40 users to create different clusters using the Fed-RAC approach, as discussed in Section <ref>. Communication rounds are set to 200, and other parameters are described in Section <ref>. The effectiveness of resource-aware clustering is evaluated using three types of resource vectors. The first type uses unnormalized resource vectors of the participants, whereas the second type uses normalized vectors with λ_1=λ_2=λ_3=1/3. The third type is similar to the second, but with λ_1=0.4, λ_2=0.4, and λ_3=0.2.
Table <ref> presents the results of evaluating the impact of normalizing resource vectors on estimating the optimal number of clusters. The findings show that un-normalized vectors yield a limited number of clusters, namely 4 (C_1-C_4), using Dunn Indices. This is due to the dominance of the transmission rate resource over other resources, resulting in non-optimal clusters. By applying unit-based normalization, all resource values are scaled into the range of [0,1]. The normalized values generate an optimal number of clusters using Dunn Indices, as normalization removes the resource bias. We obtained 6 clusters (C_1-C_6) by assigning equal contributions to all resources, i.e., λ_1 (processing capacity) = λ_2 (transmission rate) = λ_3 (memory) = 1/3. When we set the contributions based on the analysis given in <cit.>, λ_1=0.4, λ_2=0.4, and λ_3=0.2, we obtained 5 clusters (C_1-C_5).
Table <ref> presents the performance achieved by the Fed-RAC approach using different types of resource vectors on MNIST, HAR, CIFAR-10, and SHL datasets. The results show that normalizing the resource vector leads to improved performance compared to using unnormalized vectors. The normalization process is essential because when using unnormalized vectors, clustering relies on the dominating resource, leading to non-optimal clusters. These clusters may contain participants with non-identical resources that converge at irregular intervals, resulting in reduced cluster performance. Moreover, when the contributions of processing capacity (λ_1) and transmission rate (λ_2) are greater than memory (λ_3), i.e., λ_1=λ_2=0.4>λ_3=0.2, the cluster performance is high.
Observation: The first observation is that the normalization of the resource vector is essential to determine the optimal number of clusters. We next observed that processing capacity and transmission rate are more crucial than available memory while estimating the optimal number of clusters.
§.§.§ Impact of clusters compaction
Table <ref> illustrates the impact of cluster compaction on the performance of Fed-RAC using the MNIST, HAR, CIFAR-10, and SHL datasets. Table <ref>(a) demonstrates the cluster accuracy when all five clusters, estimated in Section <ref>, are available. The results show that the slave clusters, C_2-C_5, achieved performance comparable to C_1 (master cluster). Moreover, cluster C_3 achieved higher performance than C_1. This performance enhancement is due to the distillation of knowledge from the master to the slave clusters during training. A detailed experiment on the impact of knowledge distillation is presented in Section <ref>. Apart from Table <ref>(a), Table <ref>(b) illustrates the performance of different clusters on the considered datasets after compaction. The results show a clear margin of improvement in the global and cluster-wise performance when using cluster compaction in Fed-RAC. This is due to the increase in the number of participants in each cluster.
Observation: An interesting observation from this experiment is that the performance of models in each cluster improves with cluster compaction. Moreover, the improvement in performance is more significant for clusters with a larger number of participants compared to those with fewer participants.
§.§.§ Impact of communication rounds
This experiment investigates the impact of different datasets on the convergence of the Fed-RAC and considered baselines. All 40 participants were involved in the FL operation, and thus FedAvg and FedProx utilized the smallest slave model to ensure deployment and training on all participants. The communication rounds for Fed-RAC were determined as the rounds required for the convergence of the master model plus the maximum rounds required for the convergence of the slowest slave model.
Fig. <ref> shows the impact of the considered datasets, namely MNIST, HAR, CIFAR-10, and SHL, on the convergence of Fed-RAC, FedAvg, HeteroFL, FedProx, and Oort. The learning curve depicted in the figure displays a classic two-step shape: initially, the performance improves steeply until it reaches a plateau value after some communication rounds; thereafter, the accuracy increases only slowly with more communication rounds. Fed-RAC outperforms the existing approaches on all communication rounds during the experiment. The participants in the master cluster (C_1) converge quickly due to sufficient resources to train a large model. The Fed-RAC approach also incorporates KD to train the models at the participants, leading to well-behaved optimization steps compared to non-KD based training and reduced communication rounds. On the MNIST dataset, all approaches achieved convergence at lower rounds with marginal improvement afterwards, as shown in Fig. <ref>. This is due to the balanced and sufficient number of instances for all classes in MNIST. FedAvg achieved slower convergence with minimal accuracy due to its inability to handle heterogeneity among the participants and its use of a small model to accommodate all 40 participants during training. HeteroFL achieved performance comparable to Fed-RAC due to its strategy for addressing heterogeneity.
Observation: Firstly, the convergence curve of training exhibits a typical trend, with a steep increase in performance at the start, followed by a gradual plateau. Secondly, KD has a significant impact on the model's performance.
§.§.§ Impact of master-slave technique
In this experiment, we aim to evaluate the performance improvement of the slave models assigned to each cluster (other than the master cluster) using the master-slave technique discussed in Section <ref>. We consider the four clusters, C_1-C_4, obtained from the compaction in the previous result. The communication round is fixed at 200. However, to ensure brevity, we only present the results on HAR and CIFAR-10.
Fig. <ref> illustrates the impact of the master-slave technique on the performance of models in different slave clusters. Clusters C_2-C_4 gain significant improvement in performance due to the distillation of knowledge from the master model in C_1, as shown in Fig. <ref>(b) and Fig. <ref>(d). The results demonstrate that the improvement in the model's performance is largest for the low-resource cluster (C_4) and reduces gradually towards C_2. This is because when the cluster model is small, the logit difference between the master and slave models is large; conversely, when the cluster model is closer in size to the master model, the logit difference is limited and the performance gain is low. Cluster C_4 gains an accuracy of ≈ 8% for HAR and ≈ 11% for CIFAR-10, whereas the performance gain for C_2 is ≈ 2% for both datasets. Furthermore, in FL-based training we considered participants with heterogeneous resources; thus, participants with the highest and lowest resources achieved the highest and lowest performance, respectively. This also creates a significant difference between the performance of the models in the largest and smallest clusters, which in aggregate compromises performance despite clustering. Therefore, KD is incorporated to enhance the performance of models in the smaller clusters.
Observation: An interesting observation from our experiment is that the master-slave technique leads to a significant improvement in the performance in the slave clusters. Furthermore, the extent of improvement in the performance of the slave clusters depends on their respective model sizes.
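The master-to-slave distillation discussed above can be sketched as a standard soft-target KD loss. This is an assumption-laden illustration in PyTorch: the temperature T, the mixing weight alpha, and the tensor shapes are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(slave_logits, master_logits, labels, T=3.0, alpha=0.5):
    """Cross-entropy on the true labels plus KL distillation towards the
    soft targets given by the master-cluster logits."""
    soft = F.kl_div(F.log_softmax(slave_logits / T, dim=1),
                    F.softmax(master_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(slave_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Illustrative shapes only: batch of 8 samples, 10 classes.
slave_out = torch.randn(8, 10, requires_grad=True)
master_out = torch.randn(8, 10)          # logits received from the master cluster
labels = torch.randint(0, 10, (8,))
kd_loss(slave_out, master_out, labels).backward()
```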
§.§.§ Impact of rounds-to-reach x%
The objective of this experiment is to investigate the effectiveness of the proposed Fed-RAC in achieving a global accuracy of x% within a certain number of communication rounds. To achieve this, we have set the value of x to be 96, 92, 88, and 85 for MNIST, HAR, CIFAR-10, and SHL datasets, respectively, taking into account the convergence rates of these datasets. Fed-RAC involves training the model in the master cluster followed by parallel training of models in the slave clusters. As such, we define the Total Required Rounds (TRR) for complete training as the sum of rounds required to train the model in the master cluster (C_1) and the maximum rounds required to train the model in any of the slave clusters (max rounds {C_2, C_3, C_4}).
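The TRR definition above amounts to a one-line computation; the per-cluster round counts in this sketch are placeholders, not values from the paper.

```python
def total_required_rounds(master_rounds, slave_rounds):
    """TRR = rounds to train the master cluster (C_1)
           + max rounds among the slave clusters (C_2..C_4)."""
    return master_rounds + max(slave_rounds)

# Placeholder per-cluster convergence rounds.
print(total_required_rounds(60, [45, 70, 85]))   # -> 145
```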
Table <ref> presents the results of the rounds-to-reach x% performance metric on the considered datasets and illustrates the impact of this metric on the Fed-RAC approach. The results indicate that the Fed-RAC approach (cluster-wise with KD) outperforms the baseline approaches, including cluster-wise without KD. This can be attributed to two main reasons. Firstly, the participants in the master cluster (C_1) have sufficient resources to train large models, which leads to quicker convergence. Secondly, the Fed-RAC approach incorporates KD to train the models at the participants, resulting in well-behaved optimization steps compared to non-KD.
Regarding the convergence of cluster-wise without KD, the results are not reported for models in clusters C_3 and C_4 on HAR, CIFAR-10, and SHL datasets. This is because, in the absence of KD, the participants in clusters C_3 and C_4 are unable to achieve the desired x% accuracy within the cap of 200 communication rounds. Furthermore, we used small models in FedAvg, Oort, and FedProx to involve all 40 participants. Although the use of KD appears to incur higher computational costs compared to the baselines that do not incorporate KD, Fed-RAC achieves the desired performance in fewer communication rounds, thus reducing the overall computational cost. Moreover, the number of local epochs required for convergence decreases with the cluster size, which also reduces the computational cost in Fed-RAC.
Observation: We observe that KD from the large-size model to the lightweight model not only improves the performance but also reduces the communication rounds for convergence.
§.§.§ Leave-one-out
The objective of this experiment was to assess the overall performance of Fed-RAC and several baseline approaches in a scenario where instances of a randomly selected class label were not included in the training but appeared in the testing. The class label with the highest number of instances was selected as the leave-out class during the experiment. The communication rounds were set to 200, and the parameters and local epochs were determined according to the implementation details discussed in Section <ref>.
In Fig. <ref>, the impact of removing instances of one class label from the training of all participants in FL is demonstrated. The results show that Fed-RAC outperforms the existing approaches, which is consistent with the performance pattern observed in previous results. The approach that does not use KD clustering (referred to as the "without KD clustering approach") achieved the lowest performance, likely due to the small size models trained on slave clusters with a limited number of participants in each cluster. This negatively impacted the overall performance of the approach. The MNIST dataset achieved the highest performance due to the large number of instances for classes other than the excluded one. Conversely, the SHL dataset had the lowest performance due to the excluded class having the highest number of instances.
Observation: An interesting observation is that excluding training instances for certain class labels leads to a deterioration in performance. The decline in performance is more pronounced when a class with a large number of instances is excluded. Furthermore, the absence of KD results in a more rapid degradation in performance compared to training with KD.
§.§.§ Learning rate
This experiment aimed to investigate how the learning rate affects the performance of Fed-RAC. The MNIST, HAR, CIFAR-10, and SHL datasets were used, and the communication rounds were set to 5, 10, 20, and 20, respectively. The rounds were restricted because the approach converges at any learning rate when more communication rounds are allowed.
Table <ref> depicts the impact of distinct learning rates on the accuracy of the model in the master cluster of Fed-RAC using the MNIST, HAR, CIFAR-10, and SHL datasets. The results demonstrate the efficacy of Fed-RAC at a smaller learning rate (e.g., 0.002). We obtained the lowest accuracy for the learning rate of 0.010; at this higher learning rate the model converges faster but to a sub-optimal solution, and thus suffers a performance penalty. Fed-RAC converged fastest on MNIST; hence, accuracy beyond 90% was achieved at different learning rates with only 5 communication rounds. The achieved accuracy of Fed-RAC follows a linear pattern for all the datasets; however, we also observed plateaued behavior for learning rates between 0.006 and 0.008. Additionally, the difference between the cluster accuracy at the learning rates of 0.002 and 0.010 is more than 8%, which signifies the importance of selecting an optimal learning rate during training.
Observation: The results indicated that the learning rate is a critical factor in achieving higher performance. A smaller learning rate is beneficial for longer communication rounds, while a larger learning rate is better for shorter rounds.
§ DISCUSSION AND FUTURE WORK
In this section, various issues are discussed that need to be addressed in future work in conjunction with the proposed approach. The approach uses a master-slave technique where logits from the master cluster model are sent to the remaining clusters. However, this could potentially expose private training data or enable participants to reconstruct models. To address these privacy concerns, future work on incorporating security aspects is necessary, as motivated by previous work on differential privacy in FL <cit.>. Furthermore, while Fed-RAC considers participant heterogeneity, it does not account for noise in data instances and labels. Therefore, future work will involve incorporating such noise in the model training process. Additionally, the approach independently trains a local model for each cluster without leveraging information from models in other clusters, except for logit vectors from the master cluster model. To address this limitation, future work will focus on developing mechanisms for aggregating information on trained models from different clusters.
§ CONCLUSION
In this paper, a federated learning approach called Fed-RAC is proposed to address the negative impact of heterogeneous participants. Unlike previous studies, Fed-RAC trains local models on all participants despite differences in heterogeneity and training time. The approach first identifies the optimal number of clusters based on available devices and networking resources, then generates and randomly initializes a model that is used for compression to obtain models for all clusters. A participant assignment mechanism and a master-slave technique are introduced to improve the performance of lightweight models using knowledge distillation. Experimental evaluation is conducted to verify the approach's effectiveness on existing datasets, leading to several key findings: successful federated learning requires proper management of participant heterogeneity, resource-aware clustering helps identify the optimal number of clusters, the number of data instances significantly affects cluster performance, and the master-slave technique enhances performance based on model size.
IEEEtran
|
http://arxiv.org/abs/2306.02776v1
|
20230605110100
|
Cheap-fake Detection with LLM using Prompt Engineering
|
[
"Guangyang Wu",
"Weijie Wu",
"Xiaohong Liu",
"Kele Xu",
"Tianjiao Wan",
"Wenyi Wang"
] |
cs.CV
|
[
"cs.CV"
] |
Cheap-fake Detection with LLM using Prompt Engineering
This work is partially supported by the Sichuan Provincial Key Laboratory of Intelligent Terminals (Grant Number: SCITLAB-20016). Corresponding authors are Xiaohong Liu ([email protected]) and Wenyi Wang ([email protected]).
Guangyang Wu
University of Electronic Science
and Technology of China
Chengdu, China
Weijie Wu
University of Electronic Science
and Technology of China
Chengdu, China
Xiaohong Liu
Shanghai Jiao Tong University
Shanghai, China
Kele Xu
National University of Defense Technology
Changsha, China
Tianjiao Wan
National University of Defense Technology
Changsha, China
Wenyi Wang
University of Electronic Science
and Technology of China
Chengdu, China
July 31, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The misuse of real photographs with conflicting image captions in news items is an example of the out-of-context (OOC) misuse of media. In order to detect OOC media, individuals must determine the accuracy of the statement and evaluate whether the triplet (i.e., the image and two captions) relates to the same event. This paper presents a novel learnable approach for detecting OOC media in the ICME'23 Grand Challenge on Detecting Cheapfakes. The proposed method is based on the COSMOS structure, which assesses the coherence between an image and captions, as well as between two captions. We enhance the baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a feature extractor. Specifically, we propose an innovative approach to feature extraction utilizing prompt engineering to develop a robust and reliable feature extractor with the GPT3.5 model. The proposed method captures the correlation between two captions and effectively integrates this module into the COSMOS baseline model, which allows for a deeper understanding of the relationship between captions. By incorporating this module, we demonstrate the potential for significant improvements in cheap-fakes detection performance. The proposed methodology holds promising implications for various applications such as natural language processing, image captioning, and text-to-image synthesis. Docker for submission is available at https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.
Large Language Models, Prompt Engineering, Cheap-fakes
§ INTRODUCTION
The emergence of social media has dramatically altered the process by which people access information, resulting in a significant upswing in available information, hastening the spread of fake news and other types of misleading data. On social media, two principal forms of deceptive information regularly encountered are deepfakes and cheapfakes <cit.>. While "deepfakes" refer to videos that have been doctored using machine learning or other AI-based methods to create or combine human faces and bodies, "cheap fakes" are a far less costly variant of doctored videos generated through readily available software tools such as Photoshop, PremierePro, among others. Cheapfakes are often created through manipulating image captions, image editing or by adjusting video speed.
Due to their ease of creation, cheapfakes are perceived to be more ubiquitous and damaging than deepfakes. The out-of-context use of images, where unaltered photos are put into new and deceptive contexts, is one of the main reasons cheapfakes pose such a danger. This phenomenon occurs when an image is sourced from various locations with contradictory or conflicting captions. The detection of misinformation based on out-of-context images is particularly arduous since the visual content is unchanged, and the misleading or incorrect information is only conveyed through the image-text combination.
The COSMOS baseline framework <cit.> employs a two-step method to address the problem of detecting cheap-fake images. The first step involves an image-text matching module that evaluates the coherence between an image and its caption. The second step utilizes an out-of-context (OOC) detection module to predict the final outcome. The approach relies on semantic textual similarity (STS) scores to determine whether an image-caption pair is OOC or not (NOOC). S-BERT is used to calculate the semantic similarity between two captions, where the input is a pair of captions and the output is a similarity score in the range from 0 to 1. If the similarity score is less than the pre-defined threshold, the triplet is predicted as out-of-context.
However, there are certain scenarios where the STS scores may not perform well. For instance, if two captions are contradictory, the STS model may produce a high score if the ratio of similar words is high. Conversely, a pair of entailment captions may have a low STS score if one caption is much more detailed than the other. Therefore, a more comprehensive evaluation of the relationship between two captions is critical. Tran et al. <cit.> proposed using a Natural Language Inference (NLI) model to determine whether the given "hypothesis" and "premise" logically follow (entailment) or unfollow (contradiction) or are undetermined (neutral) with each other. However, the performance of this model is still limited and may produce unreliable results for challenging cases.
In recent years, Large Language Models (LLMs) have emerged, which possess enhanced semantic comprehension abilities compared to conventional BERT-based models. This development has motivated the use of an LLM as a powerful tool for evaluating the coherence between two captions. To enhance detection accuracy in cases where previous methods may not be effective, we leverage the GPT3.5 model. This model has shown remarkable results in various NLP tasks and can provide a more comprehensive evaluation of the relationship between two captions. By utilizing the GPT3.5 model, we can address the limitations of previous methods and enhance the overall performance of the COSMOS framework. Nevertheless, two key challenges must be addressed. Firstly, the full parameters of GPT3.5 are not currently openly accessible, and its usage is limited to the OpenAI API. Secondly, the adaptive nature of GPT3.5 and its frequent updates can result in dynamic and potentially unstable outcomes.
This study presents an innovative approach to feature extraction utilizing prompt engineering to develop a robust and reliable feature extractor. The proposed method captures the correlation between two captions and effectively integrates this module into the COSMOS baseline model. Our study emphasizes the significance of prompt engineering in feature extraction, which allows for a deeper understanding of the relationship between captions. By incorporating this method into the baseline model, we demonstrate the potential for significant improvements in cheap-fakes detection performance. The proposed methodology holds promising implications for various applications such as natural language processing, image captioning, and text-to-image synthesis.
§ RELATED WORK
§.§ Cheapfakes Detection
The baseline method, COSMOS <cit.>, utilizes image-text matching and a BERT-based module to detect cheap-fakes. Akgul et al. <cit.> have generated a dataset of 200,000 images with 450,000 textual captions to train the image-text matching model. COSMOS uses a heuristic pipeline to determine whether an image-caption triplet is out-of-context (OOC).
In the "ACMMM 2022 Grand Challenge on Detecting Cheapfakes", La et al. <cit.> proposed to use image captioning method to enhance accuracy <cit.>. They convert the image to caption using a image captioning model,
and extract the features with the RoBERTa model. Afterwards, they use an ANN or SVM for binary classifying according to features of three captions. Moreover, La et al. <cit.> proposed a Visual Semantic Reasoning (VSRN) method to enhance the image-text matching module. They use a pre-trained DeBERTa model to obtain additional semantic information between two captions. Tran et al. <cit.> proposed using a Natural Language Inference (NLI) model to determine whether the given "hypothesis" and "premise" logically follow (entailment) or unfollow (contradiction) or are undetermined (neutral) with each other. Furthermore, they use online-search to address the hard cases for image-caption matching.
§.§ Fact-Checking
Fact-checking is a vital task aimed at evaluating the veracity of statements made by prominent individuals, including politicians, pundits, and other public figures <cit.>. In recent years, researchers have proposed various techniques to address the problem of fact-checking. Shi et al. <cit.> proposed a discriminative path-based approach for fact-checking in knowledge graphs. The method incorporates connectivity, type information, and predicate interactions to identify true statements accurately. Jin et al. <cit.> introduced a novel Recurrent Neural Network (RNN) with an attention mechanism (att-RNN) to integrate multimodal features for rumor detection effectively. Ferreira et al. <cit.> introduced the Emergent dataset, which is a digital journalism project aimed at debunking rumors. This dataset has been used to develop effective stance detection techniques.
§ METHODS
This section presents a novel approach for detecting cheapfakes that incorporates both textual and visual information using a learnable multimodal method. We build upon the baseline algorithm of the challenge <cit.>, COSMOS, which is a two-step approach that evaluates the coherence between an image and two captions, followed by the coherence between the two captions themselves. Our proposed method enhances the baseline by introducing a feature extractor, built using a Large Language Model (LLM) called GPT3.5, to improve detection accuracy. The method is designed to solve the first task of the ICME'23 Grand Challenge <cit.> on Detecting Cheapfakes, which focuses on detecting whether an image-caption triplet is in or out of context. Figure <ref> depicts an overview of our proposed method.
§.§ Coherence between Image and Captions
Our method utilizes the image-text matching module introduced in prior work, referred to as COSMOS <cit.>. This module takes an image-caption pair as input and generates a bounding box together with an image-caption coherence score. A higher score value indicates a greater level of coherence between the input image and caption. The bounding box produced by this module frames the region that corresponds to the maximum coherence score.
The Image-text matching module operates by first taking the caption as input, which is then processed by the Universal Sentence Encoder (USE) to produce a 512-dimensional vector. This vector is then further processed by the Text Encoder to obtain a 300-dimensional vector. Concurrently, the image is inputted into a pre-trained Mask-RCNN to generate up to 10 object bounding boxes. For each object, the Object Encoder is used to produce a 300-dimensional embedding vector.
To evaluate the coherence between the image and caption, the intersection-over-union (IoU) score is calculated through a dot product of embedding vectors. If the value of the IoU score is lower than a pre-defined threshold, the triplet is predicted as NOOC. In contrast, if the IoU score value is greater than the threshold, further estimation is carried out for the captions in the triplet. In this work, we set the threshold value to 0.25, which is determined based on prior studies <cit.>.
§.§ Coherence between captions
Once the threshold is met, the proposed method evaluates the textual semantic relationship between the two captions in the triplet for further classification. In addition to the similarity of the two sentences estimated by BERT-based models, we introduce a feature vector that represents the coherence of the two sentences. Specifically, we first estimate a similarity vector [S_base, S_large] using SBERT and BERT-large models for each pair of captions following <cit.>, which represents the semantic similarity between them.
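A possible way to obtain the similarity vector [s_base, s_large] with the sentence-transformers library is sketched below; the two model names are illustrative stand-ins for the S-BERT and BERT-large encoders referenced in the text, not the exact checkpoints used by the authors.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative stand-ins for the S-BERT and BERT-large sentence encoders.
sbert_base = SentenceTransformer("all-MiniLM-L6-v2")
sbert_large = SentenceTransformer("all-mpnet-base-v2")

def similarity_vector(cap1, cap2):
    """Return [s_base, s_large]: cosine similarities under the two encoders."""
    scores = []
    for model in (sbert_base, sbert_large):
        e1, e2 = model.encode([cap1, cap2], convert_to_tensor=True)
        scores.append(float(util.cos_sim(e1, e2)))
    return scores

print(similarity_vector("A man rides a horse.", "A person is on a horse."))
```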
Next, we utilize the GPT3.5 model (also denoted as ChatGPT) to generate a discriminated vector [c_1, c_2, …, c_6] that represents the semantic relation between the captions regarding various features, including the probability of being out of context, the consistency of subject matters, the consistency of broader context, coherence, information completeness, and semantic similarity. In order to guide the GPT3.5 model to conclude these features, we carefully designed prompts as shown below:
“Given two sentences, I am going to ask you six questions. You should provide a final answer in a python list of length 6 where each component is a rate value (integer ranging from 0 to 9).
* The first question: Determine whether these two sentences are out of context. Rate your judgment by an integer number ranging from 0 to 9, where 9 refers to being completely out of context, and 0 refers to being completely in context.
* The second question: Determine whether the subject matters of these two sentences are the same. Rate your judgment by an integer number ranging from 0 to 9, where 9 indicates that the subject matters are completely different, and 0 indicates that the subject matters are completely the same
* The third question: Determine whether the broader context of these two sentences refer to are the same. Rate your judgment by an integer number ranging from 0 to 9, where 9 indicates that the broader context is completely different, and 0 indicates that the broader context is completely the same
* The fourth question: Determine whether these two sentences cohere together. Please rate your judgment by an integer number ranging from 0 to 9, where 9 indicates that the two sentences are not coherent at all, and 0 indicates that the two sentences are highly coherent
* The fifth question: Determine whether any information is missing that could help to explain the relationship between the two sentences. Please rate your judgment by an integer number ranging from 0 to 9, where 9 indicates that important information is missing, and 0 indicates that there is no information missing.
* The sixth question: Determine the semantic similarity between the two sentences. Semantic similarity should be rated by an integer number ranges from 0 to 9, where 0 refers to semantically identical, and 9 refers to completely semantic different.
The two sentences are [CAPTION1, CAPTION2]. You should output the python list only without explanations.”
To produce a stable output, we leverage the OpenAI API to implement the feature extractor using the pre-trained model `gpt-3.5-turbo-0301'. Specifically, this model is a snapshot of `gpt-3.5-turbo' from March 1st 2023. Unlike `gpt-3.5-turbo', this model will not receive updates, and will only be supported for a three-month period ending on June 1st 2023. Furthermore, the temperature value also affects the randomness of the model: a higher temperature makes the output more random, while lower values make it more focused. Therefore, we set the temperature to 0 to obtain deterministic results. Once the similarity vector [s_base, s_large] and the GPT vector [c_1, …, c_6] are computed for each input sample, they are passed as input to an ensemble classifier, AdaBoost <cit.>. The AdaBoost classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset, where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. This classifier utilizes the extracted features to generate predictions of either 0 for NOOC or 1 for OOC.
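The feature-extraction call and the downstream classifier could look roughly as follows. This sketch assumes the legacy (pre-1.0) openai Python client that was current at the time, a placeholder PROMPT string holding the six-question template quoted above, and hypothetical X_train/y_train arrays built from the annotated split; it is not the authors' released code.

```python
import ast
import openai                      # legacy (pre-1.0) client interface
from sklearn.ensemble import AdaBoostClassifier

def gpt_features(caption1, caption2, prompt_template):
    """Query gpt-3.5-turbo-0301 with the six-question prompt and parse the
    returned python list [c_1, ..., c_6]."""
    # Placeholder substitution is an assumption about how the template is filled.
    prompt = prompt_template.replace("[CAPTION1, CAPTION2]",
                                     f"[{caption1!r}, {caption2!r}]")
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                       # deterministic output
    )
    return ast.literal_eval(resp["choices"][0]["message"]["content"])

# Each sample is the 8-dimensional vector [s_base, s_large, c_1, ..., c_6];
# X_train / y_train (0 = NOOC, 1 = OOC) would come from the annotated split.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
# clf.fit(X_train, y_train)
# clf.predict([[s_base, s_large, *gpt_features(cap1, cap2, PROMPT)]])
```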
§ EXPERIMENTS
In this section, we describe the training details and the testing results compared with previous methods.
§.§ Training Details
To train the binary classifier for predicting the OOC or NOOC label based on the feature vector, we partitioned the public testing dataset of the ICME'23 Grand Challenge on Detecting Cheapfakes into a training set (50%) and a testing set (50%). The public testing dataset comprised 1000 samples, each of which consisted of an image and two captions as inputs, along with the corresponding OOC or NOOC labels. Due to the limited number of samples available for training, we adopted a set of simple classifiers to prevent overfitting while achieving superior performance. We trained the classifiers using the 5-fold cross-validation method.
§.§ Experimental Results
Table <ref> presents a comparison of the results obtained from various classifiers, including Support Vector Machine (SVM), Random Forest (RF), and AdaBoost, with those obtained from previous methods. The second column of the table indicates the accuracy on the testing dataset (i.e., half of the public test dataset), while the third column represents the accuracy on the entire public test dataset. The GPT+AdaBoost classifier was selected as the final solution since it achieved the highest score on the testing dataset and demonstrated better generalization ability than GPT+RF.
§ CONCLUSION
In conclusion, our proposed method uses the COSMOS structure to evaluate the coherence between an image and two captions, and between the two captions. The method first estimates the image-caption coherence, represented by an IoU value, and then uses S-BERT and BERT-large models to estimate a similarity vector. Additionally, we use the GPT-3.5 model to generate a discriminative vector that represents the semantic relation between the captions based on a set of carefully designed features. The use of these methods and models results in an accurate and stable representation of the coherence between image-caption triplets, and improves upon the baseline model by a large margin.
IEEEbib
|
http://arxiv.org/abs/2306.06431v1
|
20230610125455
|
Computational Complexity of Covering Disconnected Multigraphs
|
[
"Jan Bok",
"Jiří Fiala",
"Nikola Jedličková",
"Jan Kratochvíl",
"Michaela Seifrtová"
] |
cs.DM
|
[
"cs.DM",
"math.CO"
] |
The notion of graph covers is a discretization of covering spaces introduced
and deeply studied in topology. In discrete mathematics and theoretical
computer science, they have attained a lot of attention from both the
structural and complexity perspectives. Nonetheless, disconnected graphs were
usually omitted from the considerations with the explanation that it is
sufficient to understand coverings of the connected components of the target
graph by components of the source one. However, different (but equivalent)
versions of the definition of covers of connected graphs generalize to
non-equivalent definitions for disconnected graphs. The aim of this paper is to
summarize this issue and to compare three different approaches to covers of
disconnected graphs: 1) locally bijective homomorphisms, 2) globally
surjective locally bijective homomorphisms (which we call surjective covers), and
3) locally bijective homomorphisms which cover every vertex the same number
of times (which we call equitable covers). The standpoint of our comparison is
the complexity of deciding if an input graph covers a fixed target graph. We
show that both surjective and equitable covers satisfy what certainly is a natural and
welcome property: covering a disconnected graph is polynomial-time decidable
if it is so for every connected component of the graph, and it is
NP-complete if it is NP-complete for at least one of its components. We further argue that the third variant, equitable covers, is the most natural one,
namely when considering covers of colored graphs. Moreover, the complexity of
surjective and equitable covers differ from the fixed parameter complexity point of
view.
In line with the current trends in topological graph theory, as well as its applications in mathematical physics, we consider graphs in a very general sense: our graphs may contain loops, multiple edges and also semi-edges. Moreover, both vertices and edges may be colored, in which case the covering projection must respect the colors. We conclude the paper by a complete characterization of the complexity
of covering 2-vertex colored graphs, and show that the poly-time/NP-completeness dichotomy holds true for this case.
We actually aim for a stronger dichotomy. All our polynomial-time algorithms work for arbitrary input graphs, while the NP-completeness theorems hold true even in the case of simple input graphs.
§ INTRODUCTION
The notion of graph covering is motivated by the notion of covering of topological spaces. It has found numerous applications in graph theory, in construction of highly symmetric graphs of requested further properties (cf. <cit.>), but also in models of local computation (<cit.>). The application in computer science led Abello, Fellows, and Stilwell <cit.> to pose the problem of characterizing those (multi)graphs for which one can decide in polynomial time if they are covered by an input graph. They have pointed out that because of the motivation coming from topology, it is natural to consider graphs with multiple edges and loops allowed. Kratochvíl, Proskurowski, and Telle <cit.> further showed that in order to fully characterize the complexity of covering simple graphs, it is necessary but also sufficient to characterize the complexity of covering colored mixed multigraphs of minimum degree at least three. In modern topological graph theory it has now become standard to consider graphs with semi-edges since these occur naturally in algebraic graph reductions (informally, a semi-edge has, in contrast to normal edges and loops, only one endpoint.). Bok et al. initiated the study of the computational complexity of covering graphs with semi-edges in <cit.>.
In all the literature devoted to the computational aspects of graph covers, only covers of connected graphs have been considered so far. The authors of <cit.> justify this by claiming in Fact 2.b that “For a disconnected graph H, the H-cover problem is polynomially solvable
(NP-complete) if and only if the H_i-cover problem is polynomially solvable (NP-complete)
for every (for some) connected component H_i of H.” Though this
seems to be a plausible and desirable property, a closer look shows that the validity of this statement depends on the exact definition of covers for disconnected graphs.
The purpose of this paper is to give this closer look at covers of disconnected graphs in three points of view: the definition, complexity results, and the role of disconnected subgraphs in colored multigraphs. In Section <ref> we first discuss what are the possible definitions of covers of disconnected graphs – locally bijective homomorphisms are a natural generalization from the algebraic graph theory standpoint, globally surjective locally bijective homomorphisms (which we call surjective covers) seem to have been understood by the topological graph theory community as the generalization from the standpoint of topological motivation, and a novel and more restrictive definition of equitable covers, in which every vertex of the target graph is required to be covered by the same number of vertices of the source one. The goal of the paper is to convince the reader that the most appropriate definition is the last one. In Section <ref> we inspect the three possible definitions under the
magnifying glass
of computational complexity. The main result is that the above mentioned Fact 2.b is true for surjective covers, and remains true also for the newly proposed definition of equitable covers of disconnected graphs. The NP-hardness part of the statement is proven for instances when the input graphs are required to be simple. Lastly, in Section <ref> we review the concept of covers of colored graphs and show that in this context the notion of equitable covers is indeed the most natural one. We justify our approach by providing a characterization of polynomial/NP-complete instances of the H-Cover problem for colored graphs with two vertices. It is worth noting that in Section <ref> we
do not only summarize the definitions and results on covers of connected graphs, but
also introduce a new notion of being stronger, a relation between connected graphs that generalizes the covering relation and which we utilize in the NP-hardness reductions in Section <ref>. We believe that this notion is interesting on its own and that its further study would deepen the understanding of graph covers.
§ COVERS OF CONNECTED GRAPHS
In this section we formally define what we call graphs, we review the notion of a covering projection for connected graphs and we introduce a quasi-ordering of connected graphs defined by the existence of their simple covers.
§.§ Graphs with multiple edges, loops and semi-edges
Recall that we allow graphs to have multiple edges, loops and semi-edges.
A very elegant description of
this notion of a graph through the concept of darts is used in more algebraic-based papers on covers.
The following formal definition is inspired by
the one given in <cit.>.
A graph is a triple (D,V,Λ), where D is a set of darts,
and V and Λ are each a partition of D into disjoint sets.
Moreover, all sets in Λ have size one or two, while in V we allow any number of empty sets (they correspond to isolated vertices).
Vertices are the sets of darts forming the partition V.
The set of links Λ splits into three disjoint sets Λ=E∪ L ∪ S, where E represents the normal edges, i.e., those links of Λ
that intersect two distinct vertices from V,
L are the loops, i.e., those 2-element sets of Λ that are subsets of some set from V,
and S are the semi-edges, i.e., the 1-element sets from Λ.
The standard terminology that a vertex v∈ V is incident with a link (edge) e∈Λ, or that distinct vertices u and v are adjacent, can be expressed as v∩ e ≠∅ and as ∃ e ∈Λ: u∩ e ≠∅ ∧ v∩ e ≠∅, respectively.
In the standard model a graph is usually defined as an ordered triple (V,Λ,ι), for Λ=E∪ L∪ S, where ι is the incidence mapping ι:Λ⟶ V∪V2 (here V2 denotes the set of all 2-element subsets of V) such that ι(e)∈ V for all e∈ L∪ S and ι(e)∈V2 for all e∈ E.
We use both approaches in this paper and employ advantages of each of them in different situations. See an illustrative example in Figure <ref>.
To see that both approaches are equivalent we show how the dart representation of a graph can be converted into the incidence one, and vice versa. First, given G=(D,V,Λ), we define
ι(e)={v: e∩ v ≠∅}. For the reverse transformation, given G=(V,E∪ L∪ S,ι), we define the set of darts as D={(ι(e),e): e∈ S∪ L}∪{ (v,e): v∈ι(e), e∈ E} (with a slight abuse of notation, for every loop e∈ L, we actually add two copies of (ι(e),e) into D), and then the partition V is given by the equivalence relation ∼_V: (v_1,e_1)∼_V(v_2,e_2) if v_1=v_2, and the partition Λ by ∼_Λ: (v_1,e_1)∼_Λ(v_2,e_2) if e_1=e_2.
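The conversion just described can be sketched directly; the tuple encoding of darts (and the 0/1 tags used to keep the two copies of a loop dart distinct) is an illustrative choice, not notation from the paper.

```python
def to_darts(V, E, L, S, iota):
    """Standard model (V, E ∪ L ∪ S, iota) -> dart representation (D, V, Λ).
    iota maps a loop/semi-edge to its vertex and a normal edge to its pair
    of endpoints.  Loops get two (tagged) darts, following the construction
    in the text."""
    D = [(iota[e], e) for e in S]
    for e in L:
        D += [(iota[e], e, 0), (iota[e], e, 1)]    # two copies per loop
    for e in E:
        u, v = iota[e]
        D += [(u, e), (v, e)]
    vertex_classes = {v: [d for d in D if d[0] == v] for v in V}
    link_classes = {e: [d for d in D if d[1] == e] for e in E + L + S}
    return D, vertex_classes, link_classes

# F(1,1): one vertex "a" with a loop "l" and a semi-edge "s".
D, V_part, _ = to_darts(["a"], [], ["l"], ["s"], {"l": "a", "s": "a"})
print(len(V_part["a"]))   # degree of a: 3 (the loop contributes 2)
```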
The degree of a vertex v∈ V is deg(v)=|v|.
The fact that a loop contributes 2 to the degree of its vertex may seem not automatic at first sight, but becomes natural when graph embeddings on surfaces are considered.
The multiedge between distinct vertices u and v is the inclusion-wise maximal subset of links that connect u and v, i.e. {e∈ E: u∩ e ≠∅ ∧ v∩ e ≠∅}, and the size of this set is the multiplicity of the (normal) edge uv.
In a similar way we define the multiplicity of a loop or of a semi-edge.
A graph is simple if it has no loops or semi-edges and if every edge has multiplicity one. In this case we use also the standard notation for an edge as e=uv and write G=(V,E).
A graph H is a subgraph of a graph G if V(H)⊆ V(G), E(H)⊆ E(G), L(H)⊆ L(G), S(H)⊆ S(G) and ι_H(e)=ι_G(e) for every e∈Λ(H). The subgraph H is induced if it is inclusion-wise maximal with respect to the set of links on the set V(H) of vertices. The subgraph of G induced by a set W of vertices is denoted by G[W]. (Note that for the definition of these two notions we have moved to the standard model of graphs, where they are easier to define.)
A path in a graph G is a sequence …,v_i,e_i,v_i+1,e_i+1, … of distinct
vertices and links such that for each consecutive triple v_i,e_i,v_i+1, ι(e_i)={v_i,v_i+1}, and for each consecutive triple e_i,v_i+1,e_i+1, {v_i+1}=ι(e_i)∩ι(e_i+1). Moreover, if the path starts or ends with a link, then this link is a semi-edge; all inner links are normal edges. The path is closed if it starts and ends with vertices, it is open if it starts and ends with semi-edges, and it is half-way in the remaining cases.
By a component of a graph we mean an inclusion-wise maximal induced subgraph such that every two of its vertices are connected by a subgraph isomorphic to a path.
We say that a graph is connected if it has only a single component.
It shall be useful for our purposes to specifically denote one-vertex and two-vertex graphs. Let us denote by F(b,c) the one-vertex graph with b semi-edges and c loops and by W(k,m,ℓ,p,q) the two-vertex graph with k semi-edges and m loops at one vertex, p loops and q semi-edges at the other one, and ℓ>0 multiple edges connecting the two vertices (these edges are referred to as bars). In other words, W(k,m,ℓ,p,q) is obtained from the disjoint union of F(k,m) and F(q,p) by connecting their vertices by ℓ parallel edges. Note that the graph in Figure <ref> is in fact W(2,2,2,1,1). We denote by G+H the disjoint union of (isomorphic copies of) graphs G and H, e.g., W(k,m,0,p,q)=F(k,m)+F(q,p).
§.§ Covers of connected graphs
Though there is no ambiguity in the definition of graph covers of connected graphs, the standard definition used e.g. in <cit.> or <cit.> becomes rather technical especially when semi-edges are allowed. The following simple-to-state yet equivalent definition was introduced in <cit.>.
We say that a graph G=(D_G,V_G,Λ_G) covers a connected graph H=(D_H,V_H,Λ_H) if there exists a surjective mapping
f: D_G→ D_H such that:
* For every u∈ V_G, there is a u'∈ V_H such that the restriction of f onto u is a bijection between u and u'.
* For every e∈Λ_G, there is an e'∈Λ_H such that f(e)=e'.
We write G⟶ H to express that G covers H when H is a connected graph.
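To make the definition above concrete, the following sketch checks a candidate dart mapping against the two conditions (plus surjectivity); the tuple-based encoding of darts and the toy example of a 4-cycle covering F(0,1) are illustrative choices, not notation from the paper.

```python
def is_covering(f, G, H):
    """Check the dart-based definition of a covering projection.
    A graph is a triple (D, V, Lambda): darts, and two partitions of the
    darts given as collections of frozensets.  f maps darts of G to darts of H."""
    D_G, V_G, Lam_G = G
    D_H, V_H, Lam_H = H
    V_H, Lam_H = set(map(frozenset, V_H)), set(map(frozenset, Lam_H))
    surjective = {f[d] for d in D_G} == set(D_H)
    vertex_ok = all(
        len({f[d] for d in u}) == len(u)          # f is injective on u ...
        and frozenset(f[d] for d in u) in V_H     # ... and onto a vertex of H
        for u in V_G
    )
    link_ok = all(frozenset(f[d] for d in e) in Lam_H for e in Lam_G)
    return surjective and vertex_ok and link_ok

# Tiny example: a 4-cycle covering F(0,1), one vertex with a loop.
# Hypothetical dart names; the loop of F(0,1) consists of darts {"x", "y"}.
H = ({"x", "y"}, [{"x", "y"}], [{"x", "y"}])
D = [f"d{i}" for i in range(8)]                   # two darts per cycle vertex
G = (set(D),
     [{"d0", "d1"}, {"d2", "d3"}, {"d4", "d5"}, {"d6", "d7"}],
     [{"d1", "d2"}, {"d3", "d4"}, {"d5", "d6"}, {"d7", "d0"}])
f = {d: ("x" if i % 2 == 0 else "y") for i, d in enumerate(D)}
print(is_covering(f, G, H))                       # -> True
```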
This compact and succinct definition emphasizes the usefulness of the dart definition of graphs
in contrast with the lengthy and technical definition of covers in the standard way which is recalled in the following proposition. Note that it follows straightforwardly from the definition that the mapping of vertices induced by a covering projection is degree-preserving.
A graph G covers a graph H if and only if G allows
a pair of mappings f_V:V(G)⟶ V(H) and f_Λ:Λ(G)⟶Λ(H) such that
* f_Λ(e)∈ L(H) for every e∈ L(G)
and f_Λ(e)∈ S(H) for every e∈ S(G),
* ι(f_Λ(e))=f_V(ι(e)) for every e∈ L(G)∪ S(G),
* for every link e∈Λ(G) such that f_Λ(e)∈ S(H)∪ L(H) and ι(e)={u,v}, we have ι(f_Λ(e))=f_V(u)=f_V(v),
* for every link e∈Λ(G) such that f_Λ(e)∈ E(H) and ι(e)={u,v} (note that it must be f_V(u)≠ f_V(v)), we have ι(f_Λ(e))={f_V(u),f_V(v)},
* for every loop e∈ L(H), f^-1(e) is a disjoint union of loops and cycles spanning all vertices u∈ V(G) such that f_V(u)=ι(e),
* for every semi-edge e∈ S(H), f^-1(e) is a disjoint union of edges and semi-edges spanning all vertices u∈ V(G) such that f_V(u)=ι(e), and
* for every edge e∈ E(H), f^-1(e) is a disjoint union of edges (i.e., a matching) spanning all vertices u∈ V(G) such that f_V(u)∈ι(e).
For the convenience of the reader, we add another alternative view on graph covering projections, again formulated in the standard model.
A graph G covers a graph H if and only if G allows
a pair of mappings f_V:V(G)⟶ V(H) and f_Λ:Λ(G)⟶Λ(H) such that
* the mappings f_V and f_Λ are incidence preserving,
* the preimage f^-1_Λ(e) of a normal edge e∈ E(H) such that ι(e)={u,v} is a matching in G spanning f^-1_V(u)∪ f^-1_V(v), each edge of the matching being incident with one vertex in f^-1_V(u) and one in f^-1_V(v),
* the preimage f^-1_Λ(e) of a loop e∈ L(H) such that ι(e)={u} is a disjoint union of cycles in G spanning f^-1_V(u) (both a double edge and a loop are considered to be cycles as well),
* the preimage f^-1_Λ(e) of a semi-edge e∈ S(H) such that ι(e)={u} is a disjoint union of semi-edges and normal edges in G spanning f^-1_V(u).
It follows from Proposition <ref>.2. that the preimages of two adjacent vertices have the same size. More generally, if G covers a connected graph H via a covering projection f, then |f^-1(u)|=k for every u∈ V(H), where k=|V(G)|/|V(H)| is an integer. Here the connectedness of H is crucial.
As the first examples, we include the following observations whose proofs are based on König-Hall and Petersen theorems on factorization of regular graphs (cf. <cit.>).
* For every non-negative integer k, a simple graph G covers F(k,0) if and only if G is k-regular and k-edge-colorable.
* For every non-negative integer k, a simple graph G
covers F(1,k) if and only if G is (2k+1)-regular and contains a perfect matching.
* For every non-negative integer k, a simple graph G covers W(0,0,k,0,0) if and only if G is bipartite and k-regular.
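The second item of the observation above translates directly into a polynomial-time test; the sketch below uses networkx and checks (2k+1)-regularity plus the existence of a perfect matching (by Petersen's theorem the remaining 2k-regular subgraph splits into k 2-factors, one for each loop).

```python
import networkx as nx

def covers_F1k(G, k):
    """A simple graph covers F(1, k) iff it is (2k+1)-regular and has a
    perfect matching: the matching is the preimage of the semi-edge, and the
    remaining 2k-regular subgraph decomposes into k 2-factors for the loops."""
    if any(deg != 2 * k + 1 for _, deg in G.degree()):
        return False
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return 2 * len(matching) == G.number_of_nodes()

# The Petersen graph is cubic and has a perfect matching, so it covers F(1,1)
# (although, being a snark, it does not cover F(3,0)).
print(covers_F1k(nx.petersen_graph(), 1))   # -> True
```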
The computational problem H-Cover, in whose complexity we are mainly interested, is defined as follows:
H-Cover
Input: A graph G.
Question: Does G cover H?
§.§ A special relation regarding covers
Graph covering is a transitive relation among connected graphs. Thus when A⟶ B for connected graphs A and B, every graph G that covers A also covers B. Surprisingly, the conclusion may hold true also in cases when A does not cover B, if we only consider simple graphs G. To describe this phenomenon, we introduce the following definition, which will prove useful in several reductions later on.
Given connected graphs A,B, we say that A is stronger than B, and write A▹ B, if every simple graph that covers A also covers B.
The smallest nontrivial example of such a pair of graphs are two one-vertex graphs: F(2,0) with a pair of semi-edges and F(0,1), one vertex with a loop. While F(0,1) is covered by any cycle, only cycles of even length cover F(2,0). So F(2,0)▹ F(0,1). More generally, for every k,p≥ 0 and h>0, F(k+2h,p)▹ F(k,h+p).
It follows from the definition, that whenever A is simple, then (A▹ B) if and only if (A⟶ B).
One might also notice that ▹ is transitive and thus defines a quasi-order on connected graphs. Many pairs of graphs are left incomparable with respect to this relation, even those covering a common target graph. On the other hand, the equivalence classes of pair-wise comparable graphs may be nontrivial, and the graphs within one class might have different numbers of vertices. For example, W(0,0,2,0,0) and
F(2,0) form an equivalence class of ▹, as for both of these graphs, the class of simple graphs covering them is exactly the class of even cycles. We believe the relation of being stronger is a concept interesting on its own. In particular, the following question remains open and seems relevant.
Do there exist two ▹-equivalent graphs such that none of them covers the other one?
So far all examples of A▹ B we know are such that either A⟶ B or A contains semi-edges. In the open problem session of GROW 2022, we have formulated this as a conjecture:
(<cit.>)
If A has no semi-edges, then A ▹ B if and only if A⟶ B.
This conjecture has been justified for both one-vertex cubic graphs B=F(3,0) and B=F(1,1) (and arbitrary A) in <cit.>, as well as for A=W(0,0,k,0,0) and arbitrary k and B. Simple graphs which are witnesses for A▹̸B are called generalized snarks in there. This is explained by the fact that snarks are 2-connected cubic graphs which are not 3-edge-colorable. It is well known that every 2-connected cubic graph contains
a perfect matching, and thus snarks are witnesses of F(1,1)▹̸F(3,0) (cf. Observation <ref> and 1. and 2. therein).
§ WHAT IS A COVER OF A DISCONNECTED GRAPH?
Throughout this section and the rest of the paper
we assume that we are given two (possibly disconnected) graphs G and H and we are interested in determining whether G covers H. In particular, in this section we discuss what it means that G covers H. We assume that G has p components of connectivity, G_1, G_2, …, G_p, and H has q components, H_1, H_2,…, H_q. It is reasonable to request that a covering projection must map each component of G onto some component of H, and this restricted mapping must be a covering. The questions we are raising are:
* Should the covering projection be globally surjective, i.e., must the preimage of every vertex of H be nonempty?
* Should the preimages of the vertices of H be of the same size?
Both these questions are the first ones at hand when trying to generalize graph covers to disconnected graphs, since the answer is “yes” in the case of connected graphs (and it is customary to call a projection that covers every vertex k times a k-fold cover).
Let G and H be graphs and let us have a mapping f G⟶ H.
* We say that f is a locally bijective homomorphism of G to H if for each component G_i of G, the restricted mapping f|_G_i:G_i ⟶ H is a covering projection of G_i onto some component of H. We write G⟶_lbH if such a mapping exists.
* We say that f is a surjective covering projection of G to H if for each component G_i of G, the restricted mapping f|_G_i:G_i ⟶ H is a covering projection of G_i onto some component of H, and f is surjective. We write G⟶_surH if such a mapping exists.
* We say that f is an equitable covering projection of G to H if for each component G_i of G, the restricted mapping f|_G_i:G_i ⟶ H is a covering projection of G_i onto some component of H, and for every two vertices u,v∈ V(H), |f^-1(u)|=|f^-1(v)|. We write G⟶_equit H if such a mapping exists.
A useful tool both for describing and discussing the variants, as well as for algorithmic considerations, is introduced in the following definition.
Given graphs G and H with components of connectivity G_1, G_2, …, G_p, and H_1, H_2,…, H_q, respectively, the covering pattern of the pair G,H is the weighted bipartite graph
Cov(G,H)=({g_1,g_2,…,g_p,h_1,h_2,…,h_q},{g_ih_j:G_i⟶ H_j})
with edge weights
r_ij=r(g_ih_j)=|V(G_i)|/|V(H_j)|.
The following observation follows directly from the definitions, but will be useful in the computational complexity considerations.
Let G and H be graphs. Then the following statements are true.
* We have G⟶_lbH if and only if the degree of every vertex g_i, i=1,2,…,p in Cov(G,H) is greater than zero.
* We have G⟶_surH if and only if the degree of every vertex g_i, i=1,2,…,p in Cov(G,H) is greater than zero and Cov(G,H) has a matching of size q.
* We have G⟶_equitH if and only if Cov(G,H) has a spanning subgraph Map(G,H) such that every vertex g_i,i=1,2,…,p has degree 1 in Map(G,H) and for every vertex h_j of Cov(G,H),
∑_i:g_ih_j∈ E(Map(G,H))r_ij=k,
where k=|V(G)|/|V(H)|.
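The observation above is essentially an algorithm; the following sketch builds Cov(G,H) and checks the three conditions. It assumes the components are given as networkx graphs together with an oracle covers(G_i, H_j) for connected components, uses Hopcroft-Karp for the matching test, and checks the equitable condition by brute force over assignments (exponential in p, for illustration only).

```python
from itertools import product
import networkx as nx

def covering_pattern(G_comps, H_comps, covers):
    """Cov(G,H): bipartite graph on component indices with an edge (i, j) of
    weight r_ij = |V(G_i)| / |V(H_j)| whenever G_i covers H_j."""
    B = nx.Graph()
    B.add_nodes_from((("g", i) for i in range(len(G_comps))), bipartite=0)
    B.add_nodes_from((("h", j) for j in range(len(H_comps))), bipartite=1)
    for i, Gi in enumerate(G_comps):
        for j, Hj in enumerate(H_comps):
            if covers(Gi, Hj):               # oracle for connected components
                B.add_edge(("g", i), ("h", j), r=len(Gi) / len(Hj))
    return B

def lb_hom(B, p):
    # every component of G covers some component of H
    return all(B.degree(("g", i)) > 0 for i in range(p))

def surjective_cover(B, p, q):
    # additionally, Cov(G, H) must contain a matching of size q
    match = nx.bipartite.maximum_matching(B, top_nodes=[("g", i) for i in range(p)])
    return lb_hom(B, p) and len(match) // 2 >= q

def equitable_cover(B, p, q, k):
    # brute-force search for Map(G, H): each g_i picks one neighbor h_j and
    # the weights r_ij assigned to every h_j must sum to k
    choices = [[j for j in range(q) if B.has_edge(("g", i), ("h", j))]
               for i in range(p)]
    for assign in product(*choices):
        load = [0.0] * q
        for i, j in enumerate(assign):
            load[j] += B[("g", i)][("h", j)]["r"]
        if all(abs(x - k) < 1e-9 for x in load):
            return True
    return False
```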
§ COMPLEXITY RESULTS
We feel the world will be on the right track if H-Cover is polynomial-time solvable whenever H_i-Cover is polynomial-time solvable for every component H_i of H, while H-Cover is NP-complete whenever H_i-Cover is NP-complete for some component H_i of H.
To strengthen the results, we allow arbitrary input graphs (i.e., with multiple edges, loops and/or semi-edges) when considering polynomial time algorithms, while we restrict the inputs to simple graphs when we aim at NP-hardness results. This is in line with the Strong Dichotomy Conjecture, stated in <cit.>. In some cases we are able to prove results also from the Fixed Parameter Tractability standpoint.
In those cases we consider both the source and the target graphs to be a part of the input, and the parameter is typically the maximum size of a component of the target one.
The following lemma is simple, but useful. Note that though we are mostly interested in the time complexity of deciding G⟶ H for a fixed graph H and input graph G, this lemma assumes both the source and the target graphs to be part of the input. The size of the input is measured by the number of edges plus the number of vertices of the input graphs.
Let φ(A,B) be the best running time of an algorithm deciding if A⟶ B for connected graphs A and B, and let φ(n,B) be the worst case of φ(A,B) over all connected graphs A of size n. Then for given input graphs G and H with components of connectivity G_1, G_2, …, G_p, and H_1, H_2,…, H_q, respectively, the covering pattern Cov(G,H) can be constructed in time
O(pq·max_j=1^qφ(n,H_j))=O(n^2·max_j=1^qφ(n,H_j)),
where n is the input size, i.e., the sum of the numbers of edges and vertices of G and H.
Constructing the covering pattern of input graphs G and H is in the complexity class XP when parameterized by the maximum size of a component of the target graph H, provided the H_j-Cover problem is polynomial-time solvable for every component H_j of H.
Suppose the size of every component of H is bounded by M. Let P_M be the class of all connected graphs B of size at most M such that B-Cover is decidable in polynomial time. By the assumption, every component H_j of H belongs to P_M. The class P_M is finite (its size depends on M), and so there are well defined positive integers K_M, t_M such that φ(n,B)≤ K_M· n^t_M for every B∈ P_M.
Hence φ(n,H_j)≤ K_M· n^t_M for every j=1,2,…,q, and Cov(G,H) can be constructed in time O(n^2· n^t_M) by the preceding lemma.
The covering pattern of input graphs G and H can be constructed in polynomial time provided all components of H have bounded size and the H_j-Cover problem is solvable in polynomial time for every component H_j of H.
In the following subsections, we discuss and compare the computational complexity of deciding the existence of locally bijective homomorphisms, surjective covers, and equitable covers. The corresponding decision problems are denoted by LBHom, SurjectiveCover, and EquitableCover. If the target graph is fixed to be H, we write H-LBHom, H-SurjectiveCover, and H-EquitableCover, respectively.
§.§ Locally bijective homomorphisms
The notion of locally bijective homomorphisms is seemingly the most
straightforward
generalization of the fact that in a graph covering projection to a connected graph “the closed neighborhood of every vertex of the source graph is mapped bijectively to the closed neighborhood of its image”. However, we show in this subsection that it does not behave as we would like to see it from the computational complexity perspective.
Proposition <ref> shows
that there are infinitely many graphs H with only two components each such that H-LBHom is polynomial-time solvable, while H_i-LBHom is NP-complete for one component H_i of H. The polynomial part of the desired properties is, however, fulfilled, even in some cases when both graphs are part of the input:
If H_i-Cover is polynomial-time solvable for every component H_i of H, then
* the H-LBHom problem is polynomial-time solvable,
* the LBHom problem is in XP when parameterized by the maximum size of a component of the target graph H,
* the LBHom problem is solvable in polynomial time, provided the components of H have bounded sizes.
i) If H is fixed, it has by itself bounded size, and thus the covering pattern Cov(G,H) can be constructed in polynomial time by Corollary <ref>. As noted in Observation <ref>, G⟶_lbH if and only if deg_Cov(G,H)(g_i)≥ 1 for all i=1,2,…,p, which certainly can be checked in polynomial time, once Cov(G,H) has been constructed.
ii) Deciding if G allows a locally bijective homomorphism into H is not harder than constructing the covering pattern Cov(G,H), and this task is in XP when parameterized by the maximum size of a component of the target graph H, as shown in Corollary <ref>.
iii) Follows straightforwardly from ii).
However, it is not true that H-LBHom is NP-complete whenever H_j-Cover is NP-complete for some component H_j of H. Infinitely many examples can be constructed by means of the following proposition. These examples provide another argument for our opinion that the notion of locally bijective homomorphism is not the right generalization of graph covering to covers of disconnected graphs.
Let H_1⟶ H_2 for connected components H_1, H_2 of H=H_1+H_2, and suppose that H_2-Cover is polynomial-time solvable. Then H-LBHom is polynomial-time solvable regardless the complexity of H_1-Cover.
Under the assumption H_1⟶ H_2, any input graph G allows a locally bijective homomorphism to H if and only if each of its components covers H_2. On one hand, if each component of G allows a locally bijective homomorphism to H_2, the union of these mappings is a locally bijective homomorphism of G into H. On the other hand, if G allows a locally bijective homomorphism into H, each component of G covers H_1 or H_2. However, every component that covers H_1 also covers H_2.
There are many examples of pairs of connected graphs H_1, H_2 such that H_1 covers H_2, H_1-Cover is NP-complete and H_2-Cover is solvable in polynomial time. In a certain sense it is more interesting that a similar phenomenon as in Proposition <ref> may occur even when H_1 and H_2 are incomparable by covering.
For graphs H_1=F(3,0) and H_2=F(1,1), neither H_1 covers H_2 nor H_2 covers H_1, yet for their disjoint union H=H_1+H_2, H-LBHom is polynomial-time solvable for simple input graphs, while H_1-Cover is NP-complete for simple input graphs. (The graph H is depicted in Figure <ref>.)
A connected simple graph covers F(3,0) if and only if it is cubic and is 3-edge-colorable, in which case it also covers F(1,1) (edges of any two colors form a disjoint union of cycles, which itself covers the loop of F(1,1)). Hence a simple graph allows a locally bijective homomorphism to F(3,0)+F(1,1) if and only if each of its components covers F(1,1), which can be decided in polynomial time (a connected graph covers F(1,1) if and only if it is cubic and contains a perfect matching).
This example is a concrete instance of a more general pattern, which in fact has been the reason for introducing the relation ▹ in Definition <ref>.
Let H=H_1+H_2 for connected graphs H_1 and H_2 such that H_1▹ H_2. Then H-LBHom for simple input graphs
is polynomially reducible to
H_2-LBHom for simple input graphs. In particular, if H_2-LBHom is polynomial-time decidable, then so is H-LBHom as well.
Every component of the input graph, which is assumed to be simple, allows a locally bijective homomorphism into H if and only if it covers H_2, by the assumption H_1▹ H_2. Hence for a simple graph G, we have G⟶_lb H if and only if G⟶_lb H_2.
§.§ Surjective covers
The notion of surjective covers is favored by topologists since it captures the fact that every vertex (point) of the target graph (space) is covered [Nedela, private communication 2020]. We are happy to report that this notion behaves as we
would like to see from the point of view of computational complexity.
If H_i-Cover is polynomial-time solvable for every connected component H_i of H, then
(i) the H-SurjectiveCover problem is polynomial-time solvable,
(ii) the SurjectiveCover problem is in XP when parameterized by the maximum size of a component of the target graph H, and
(iii) the SurjectiveCover problem is solvable in polynomial time if the components of H have bounded sizes.
(i) If H is fixed, the covering pattern Cov(G,H) can be constructed in polynomial time by Corollary <ref>. As noted in Observation <ref>, G⟶_surH if and only if deg_Cov(G,H)(g_i)≥ 1 for all i=1,2,…,p and Cov(G,H) has a matching of size q, which can be checked in polynomial time, once Cov(G,H) has been constructed (e.g., by network flow algorithms).
(ii) Again we construct the covering pattern Cov(G,H), which task is in XP when parameterized by the maximum size of a component of the target graph H, as shown in Corollary <ref>. Checking the degrees of Cov(G,H) as well as checking if Cov(G,H) has a matching of size q can be done in time polynomial in p+q and hence also in the size of the input.
(iii) Follows straightforwardly from (ii).
For surjective covers, the NP-hardness of the problem of deciding if there is a covering of one component of H propagates to NP-hardness of deciding if there is a surjective covering of entire H, even when our attention is restricted to simple input graphs.
The H-SurjectiveCover problem is NP-complete for simple input graphs if H_i-Cover is NP-complete for simple input graphs for at least one connected component H_i of H.
Without loss of generality suppose that H_1-Cover is NP-complete for simple input graphs. Let G_1 be a simple connected graph for which G_1⟶ H_1 is to be tested. We show that there exists a polynomial-time reduction from H_1-Cover to H-SurjectiveCover.
For every j=2,…,q, fix a simple connected graph G_j that covers H_j such that G_j⟶ H_1 if and only if H_j▹ H_1 (in other words, G_j is a witness which does not cover H_1 when H_j is not stronger than H_1). Note that the size of each G_j, j=2,…,q, is a constant which does not depend on the size of the input graph G_1.
Note also, that since H is a fixed graph, we do not check algorithmically whether H_j ▹ H_1 when picking G_j. We are only proving the existence of a reduction, and for this we may assume the relation H_j ▹ H_1 to be given by a table.
Let G be the disjoint union of G_j, j=1,…,q.
We claim that G⟶_sur H if and only if G_1⟶ H_1. The “if” part is clear. We map G_j onto H_j for every j=1,2,…,q by the covering projections that are assumed to exist. Their union is a surjective covering projection of G to H.
For the “only if” direction, suppose that f V(G)⟶ V(H) is a surjective covering projection. Since f must be globally surjective and G and H have the same number of components, namely q, different components of G are mapped onto different components of H by f. Define f by setting f(i)=j if and only if f maps G_i onto H_j. Then f is a permutation of {1,2,…,q}. Consider the cycle containing 1. Let it be (i_1=1, i_2, i_3, …, i_t), which means that G_i_j⟶ H_i_j+1 for j=1,2,…,t-1, and G_i_t⟶ H_i_1. By reverse induction on j, from j=t down to j=2, we prove that H_i_j▹ H_1. Indeed, for j=t, G_i_t⟶ H_1 means that H_i_t is stronger than H_1, since we would have set G_i_t as a witness that does not cover H_1 if it were not. For the inductive step, assume that H_i_j+1▹ H_1 and consider G_i_j. Now G_i_j covers H_i_j+1 since f(i_j)=i_j+1. Because G_i_j is a simple graph and H_i_j+1 is stronger than H_1, this implies that G_i_j⟶ H_1. But then H_i_j must itself be stronger than H_1, otherwise we would have set G_i_j as a witness that does not cover H_1. The inductive proof concludes; we proved that H_i_2▹ H_1, and hence G_1⟶ H_1 follows from the fact that the simple graph G_1 covers H_i_2.
§.§ Equitable covers
As already announced, we wish to argue that equitable covers form the right generalization of covers of connected graphs to covers of disconnected ones. Not only they capture the crucial properties of covers of connected graphs, but they also behave nicely from the computational complexity point of view.
The H-EquitableCover problem is polynomial-time solvable if H_i-Cover is polynomial-time solvable for every component H_i of H.
First construct the covering pattern Cov(G,H). Since H is a fixed graph, this can be done in time polynomial in the size of the input, i.e., G, as it follows from Corollary <ref>.
Using dynamic programming, fill in a table M(s,k_1,k_2,…,k_q), s=0,1,…,p, k_j=0,1,…,k=|V(G)|/|V(H)| for j=1,2,…,q, with values true and false. Its meaning is that M(s,k_1,k_2,…,k_q)= true if and only if G_1∪ G_2∪…∪ G_s allows a locally bijective homomorphism f to H such that for every j and every u∈ V(H_j), |f^-1(u)|=k_j. The table is initialized by setting
M(0,k_1,…,k_q) = true if k_1=k_2=…=k_q=0, and M(0,k_1,…,k_q) = false otherwise.
In the inductive step assume that all values for some s are filled in correctly, and move on to s+1. For every edge g_{s+1}h_j of Cov(G,H) and every q-tuple k_1,k_2,…,k_q such that M(s,k_1,k_2,…,k_q)= true, set M(s+1,k_1,k_2,…,k_j+r_{s+1,j},…,k_q)= true, provided k_j+r_{s+1,j}≤ k. Clearly, the loop invariant is fulfilled, and hence G is a k-fold (equitable) cover of H if and only if M(p,k,k,…,k) is evaluated true.
The table M has (p+1)· (k+1)^q=O(n^q+1) entries and the inductive step changes O((k+1)^q· q) values. So processing the table can be performed in O((k+1)^q(1+pq))=O(n^q+1) steps.
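The dynamic programme above admits a direct implementation. In the following sketch (our own illustration, with the weighted covering pattern given as a dictionary {(i, j): r_ij}), the set of reachable tuples (k_1, …, k_q) plays the role of the Boolean table M.

def is_equitable_cover(p, q, k, edges):
    reachable = {tuple([0] * q)}                      # M(0, 0, ..., 0) = true
    for s in range(p):                                # process component G_s
        nxt = set()
        for state in reachable:
            for (i, j), r in edges.items():
                if i == s and state[j] + r <= k:
                    new_state = list(state)
                    new_state[j] += r
                    nxt.add(tuple(new_state))
        reachable = nxt
    return tuple([k] * q) in reachable                # M(p, k, ..., k)

# toy instance: H = two loops (q = 2), G = cycles of lengths 3, 3, 4, 2, so k = 6
cycles = [3, 3, 4, 2]
edges = {(i, j): cycles[i] for i in range(4) for j in range(2)}
print(is_equitable_cover(4, 2, 6, edges))             # True: {3, 3} and {4, 2} both sum to 6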
We will show in Theorem <ref> that q in the exponent cannot be avoided if both G and H are part of the input. For this situation, we provide a simpler result.
The EquitableCover problem is in XP when parameterized by the number q of connected components of H plus the maximum size of a component of the target graph H, provided H_i-Cover is polynomial-time solvable for every component H_i of H.
The algorithm as described in Theorem <ref> is in XP when parameterized by the number q of components of H (needed for processing the table M) plus the maximum size of a component of H (needed for computing the covering pattern).
The EquitableCover problem is W[1]-hard when parameterized by the number of connected components of H, even if the sizes of components of the target graph H are bounded and H_i-Cover is polynomial-time solvable for every component H_i of H.
We reduce from Bin Packing in Unary parameterized by the number of bins. Given p non-negative integers x_1,x_2,…,x_p, the task is to partition this set into q disjoint subsets S_1,S_2,…,S_q⊂{1,2,…,p} so that for each j=1,2,…,q, the sum ∑_i∈ S_jx_i of the numbers in each set equals ∑_i=1^px_i/q. Deciding if this is possible is a W[1]-hard problem when parameterized by the number q of the bins, even if the numbers x_i are encoded in unary <cit.>.
Given p,q and the numbers x_i,i=1,2,…,p, we set the target graph H to be the disjoint union of q one-vertex graphs, each having one loop incident with its vertex (and no other links). For each i=1,2,…,p, G_i will be a cycle of length x_i, and G will be the disjoint union of G_i, i=1,2,…,p. The components H_j of H are of bounded size (one vertex plus one edge), and for each j, H_j-Cover is solvable in polynomial time, since exactly cycles (of arbitrary lengths) cover H_j.
The covering pattern Cov(G,H) is thus the complete bipartite graph K_p,q with edge weights r_ij=x_i. Hence G⟶_equit H if and only if the input of the Bin Packing problem is feasible.
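The reduction itself is easy to set up; the following sketch (our own illustration) turns a Bin Packing instance into the corresponding covering-pattern data, which can be fed to the dynamic programming sketch shown earlier.

def bin_packing_to_equitable_cover(xs, q):
    # cycles of lengths x_1, ..., x_p over q disjoint loops; Cov(G, H) = K_{p,q} with r_ij = x_i
    p = len(xs)
    k, remainder = divmod(sum(xs), q)
    if remainder != 0:
        return None                                    # trivially infeasible
    edges = {(i, j): xs[i] for i in range(p) for j in range(q)}
    return p, q, k, edges

print(bin_packing_to_equitable_cover([2, 3, 5, 4, 6, 10], 3))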
The NP-hardness theorem holds true as well:
The H-EquitableCover problem is NP-complete for simple input graphs if H_i-Cover is NP-complete for simple input graphs for at least one connected component H_i of H.
We proceed in a similar way as in the proof of Theorem <ref>. Suppose without loss of generality that H_1-Cover is NP-complete. We show that there exists a polynomial-time reduction from H_1-Cover to H-EquitableCover. For every j=2,…,q, fix a simple connected graph G_j that covers H_j such that G_j⟶ H_1 if and only if H_j▹ H_1 (in other words, G_j is the witness which does not cover H_1 when H_j is not stronger than H_1). For every j=2,…,q, we have integers k_j=|V(G_j)|/|V(H_j)| which are constants independent of G_1.
Now suppose we are given a simple graph G_1 whose covering of H_1 is to be tested. Compute k=|V(G_1)|/|V(H_1)|, which can be done in time polynomial in the size of G_1. This k should be an integer, since otherwise we conclude right away that G_1 does not cover H_1. Set K to be the least common multiple of k,k_2,…,k_q, then K=Θ(k)=Θ(|V(G_1|). Define G to be the disjoint union of K/k copies of G_1 with K/k_j copies of G_j for all j=2,…,q. (Note that the number of connected components of G is
p=K/k+∑_j=2^qK/k_j and the size of G is Θ(|V(G_1)|+|Λ(G_1)|).)
We claim that G_1 covers H_1 if and only if G equitably covers H, and in that case G is a K-fold cover of H.
The “only if” part is clear. We map each copy of G_j onto H_j for every j=1,2,…,q by the covering projections that are assumed to exist. Their union is a surjective covering projection of G to H. To show that this is an equitable covering projection, we do just a little bit of counting. Since G_j is a k_j-fold cover of H_j (here and in the sequel, we write k_1=k) and we have K/k_j copies of G_j in G, the preimage of each vertex of H in this mapping has size K.
For the “if” part, assume that f G⟶ H is a K-fold covering projection. Every connected component of G must map onto one connected component of H, but it may happen that different copies of the same G_j map onto different components of H. Still, as we argue below, we can again find a sequence of indices i_1=1, i_2, …, i_t such that for every j=1,2,…,t-1, some copy of G_i_j is mapped to H_i_j+1 by f, and some copy of G_i_t is mapped onto H_1. Then the proof of G_1⟶ H_1 proceeds exactly as in the proof of Theorem <ref>.
If some copy of G_1 is mapped onto H_1, then t=1 and G_1 covers H_1. Suppose this is not the case. Let S⊆{1,2,…,q} and let 𝒮 be an inclusion-wise minimal set of components of G such that:
* a) all copies of G_1 are in 𝒮,
* b) if a component from 𝒮 is mapped onto H_j by f, then j∈ S, and
* c) if j∈ S, then all copies of G_j are in 𝒮.
The sets S and 𝒮 are uniquely defined by application of the rules a), b), and c). It follows that if 1∈ S, then a sequence i_1=1,i_2,…, i_t exists. If 1∉S, then f restricted to 𝒮 is a surjective cover of the disjoint union of H_j, j∈ S. But it cannot be a K-fold cover, because the union of the components in 𝒮 has K/k·|V(G_1)|+∑_j∈ S(K/k_j)·|V(G_j)|=K· |V(H_1)|+K∑_j∈ S|V(H_j)| vertices, while ⋃_j∈ S H_j has ∑_j∈ S|V(H_j)| vertices. This concludes the proof.
§ COVERING COLORED TWO-VERTEX GRAPHS
In this section we introduce
the last generalization and consider coverings of graphs which come with links and vertices equipped with additional information, which we simply refer to as a color. The requirement is that the covering projection respects the colors, both on the vertices and on the links. This generalization is not as purposeless as it may seem. It is shown in <cit.> that to fully characterize the complexity of H-Cover for simple graphs H, it is necessary and sufficient to understand the complexity of H-Cover for colored mixed multigraphs of valency greater than 2. The requirement on the minimum degree of H gives hope that the borderline between the easy and hard instances can be more easily described. We will first describe the concept of covers of colored graphs with semi-edges in detail in Subsection <ref>, where we also give our final argument in favor of equitable covers. Then we extend the characterization of the computational complexity of covering colored 2-vertex graphs without semi-edges presented in <cit.> to general graphs in Subsection <ref>.
§.§ Covers of colored graphs
In this section we return to the dart model of graphs, as it is more convenient for describing colored mixed graphs (graphs that allow both undirected and directed links).
We say that a graph G is colored, if it is equipped with a function
c:D ∪ V →ℕ. Furthermore, a colored graph covers a colored graph H if G covers H via a mapping f which respects the colors, i.e., c_G=c_H∘ f on D and every u∈ V_G satisfies
c_G(u)=c_H(f(u)).
Note that one may assume without loss of generality that all vertices are of the same color, since we can add the color of a vertex as a shade to the colors of its darts. However, for the reductions described below, it is convenient to keep the intermediate step of coloring vertices as well.
The final argument that equitable covers are the most proper generalization to disconnected graphs is given by the following observation. (Note that color-induced subgraphs of a connected graph may be disconnected.)
Let a colored graph H be connected and let f:G⟶ H be a covering projection. Then f|_G_i,j:G_i,j⟶ H_i,j is an equitable covering projection for every two (not necessarily distinct) colors i,j, where G_i,j,H_i,j denote the subgraphs of G and H induced by the links e such that c(e)={i,j} (note that c(e) is the set of colors of darts that belong to e, i.e., c(e)={c(d_1),c(d_2)} if the link e contains two darts d_1 and d_2, and c(e)={c(d)} if e is a semi-edge containing the dart d).
Kratochvíl et al. <cit.> proved that the existence of a covering between two (simple) graphs can be reduced to the existence of a covering between two colored graphs of minimum degree three. Their concept of colored directed multigraph is equivalent to our concept of colored graphs (without semi-edges), namely:
* The vertex color encoding the collection of trees (without semi-edges) stemming from a vertex is encoded as the vertex color in exactly the same way.
* The link color encoding a subgraph isomorphic to a colored induced path between two vertices of degree at least three is encoded as the pair of colors of the edge or a loop
that is used for the replacement of the path.
* When the path coloring is symmetric, we use the same color twice for the darts of the replaced arc which could be viewed as an undirected edge of the construction of <cit.>.
* On the other hand, when the coloring is not symmetric and the
replaced arc hence needed to be directed in <cit.>, we use a pair of distinct colors on the two darts, which naturally represents the direction.
When semi-edges are allowed we must take into account one more possibility.
The color used on the two darts representing a symmetrically colored path with an even number of vertices may be used also to represent a half-way path with the identical color pattern ended by a semi-edge. A formal description follows:
By a pattern P we mean a finite sequence of positive integers (p_1,…,p_k).
A pattern is symmetric if p_i=p_{k+1-i} for all i, and the reverse of a pattern P is the pattern (p_k,…,p_1).
The pattern of a closed path u_0,{d_1,d_2},u_1,…,{d_{2k-1},d_{2k}},u_k in a colored graph G is the sequence of colors
c(u_0),c(d_1),c(d_2),c(u_1),c(d_3),c(d_4), c(u_2), …, c(d_{2k}),c(u_k).
Analogously we define patterns of open and half-way paths.
Now, a half-way path of pattern P that starts in a vertex of degree 3 and ends by a semi-edge will be replaced by a semi-edge whose color is identical to that used for the two darts forming a normal edge used for the replacement of closed paths whose pattern
is the concatenation PP, see Figure <ref>.
§.§ Two-vertex graphs
Kratochvíl et al. <cit.> completely characterized the computational complexity of the H-Cover problem for colored graphs H with at most two vertices without semi-edges. Their result implies the following:
Let H be a connected colored graph on at most two vertices without semi-edges.
The H-Cover problem is polynomial-time solvable if:
* the graph H contains only one vertex, or
* H is not regular, or
* H is regular on two vertices and
* for every color i∈ℕ, the H_i-EquitableCover problem is solvable in polynomial time, where H_i is the colored subgraph of H induced by the links colored by i, and
* for every pair of colors i,j∈ℕ, the H_i,j-EquitableCover problem is solvable in polynomial time, where H_i,j is the colored subgraph of H induced by the links l∈Λ such that c(l)={i,j}.
Otherwise, the H-Cover problem is NP-complete.
Informally, the NP-completeness persists if and only if H has two vertices which have the same degree in every color, and the NP-completeness appears on a monochromatic subgraph (either undirected or directed). Such a subgraph must contain both vertices and be connected. Note explicitly that H_i is a monochromatic undirected subgraph of H, while H_i,j is often referred to as a monochromatic directed subgraph of H. (If i<j, we interpret the links of H_i,j as colored by the color (i,j), and the links are directed from the dart colored i to the dart colored j. In general, H_i,j may contain normal edges and loops, but it contains no semi-edges.)
We extend the characterization from Proposition <ref> to include semi-edges as well. Because of the results of Section <ref>, we restrict our attention to connected target graphs.
Let H be a connected colored graph on at most two vertices.
The H-Cover problem is polynomially solvable if:
* The graph H contains only one vertex and
for every i, H_i-Cover is solvable in polynomial time, where H_i is the subgraph of H induced by the loops and semi-edges colored by i,
or
* H is not regular and
for every i and each vertex u∈ V_H, the H_i^u-Cover problem is solvable in polynomial time, where H_i^u is the colored subgraph of H induced by the loops and semi-edges incident with u colored by i,
or
* H is regular on two vertices and
* for every color i∈ℕ, the H_i-EquitableCover problem is solvable in polynomial time, where H_i is the colored subgraph of H induced by the links colored by i, and
* for every pair of colors i,j∈ℕ, the H_i,j-EquitableCover problem is solvable in polynomial time, where H_i,j is the subgraph of H induced by the links l∈Λ such that c(l)={i,j}.
Otherwise, the H-EquitableCover problem is NP-complete.
This theorem shows that colored graphs with two vertices exemplify a similar phenomenon as surjective or equitable covers of disconnected graphs – a polytime/NP-completeness dichotomy applies, and the H-Cover problem is NP-hard if and only if some monochromatic subgraph induces an NP-hard covering problem. This is in sharp contrast with larger graphs. It has been shown in <cit.> that the 3-vertex graph consisting of 2 undirected triangles, each colored by a different color, defines an NP-hard covering problem, while for one color, K_3-Cover is polynomial-time solvable.
Proof of Theorem <ref>.
We first discuss the polynomial cases:
* The graph H has only one vertex and H_i-Cover is solvable in polynomial time for each H_i. We accept the input if and only if all H_i,j-Cover problems accept the corresponding restricted inputs G_i,j. Note that for i≠ j, H_i,j consists of some number of directed loops incident with the vertex of H, and such H_i,j always defines a polynomial-time solvable covering problem. For undirected monochromatic graphs with one vertex, i.e., i=j, F(b,c)-Cover is polynomial-time solvable if and only if b≤ 1 or b=2 and c=0.
In such an admissible case, the overall covering projection f G→ H is the union of all partial covering projections G_i,j→ H_i,j.
* The graph H has two vertices, it is not regular and H_i^u-Cover is solvable in polynomial for every i and u. Let the two vertices of H be v and w. They can be distinguished:
* by their vertex color, and/or
* by the number of incident darts of some color i.
We perform the same separation on the vertices of G into sets V_v and V_w. Namely, V_v contains those vertices of G that have the same color as v and the same number of incident darts of every color as v, and analogously for V_w. In particular, we reject the input if V_v∪ V_w ≠ V(G).
We define the vertex mapping V_G→ V_H by mapping the entire V_v to v, and V_w to w.
Then, as in the previous case, we check if G[V_v] covers H^v, and if G[V_w] covers H^w. This can be done in polynomial time according to the assumption.
Lastly we check the covering of edges e incident with both v and w.
They can be covered only by edges incident with one vertex in V_v and one in V_w.
When v and w are connected by an undirected multiedge
of color i (or a directed one of bi-color (i,j)) and multiplicity k, then a covering may exist if and only if edges of color
i (or bi-color (i,j), respectively) between V_v and V_w induce a k-regular subgraph.
This necessary condition is also sufficient as every k-regular bipartite graph
can be split into k perfect matchings and these yield the dart mapping of a cover.
* The graph H has two vertices, is regular, and H_i,j-EquitableCover is solvable in polynomial time for every i,j. Let V(H)={v,w}. For every vertex u∈ V(G), we introduce a Boolean variable
x_u. Based on the structure of G and H we compose a formula φ in CNF with clauses of size two whose satisfying assignments are in one-to-one correspondence with covering projections from G to H. For a covering projection f G → H,
x_u is evaluated to true if f(u)=v, and x_u is evaluated to false if f(u) = w.
The construction of φ relies on the characterization of polynomially solvable cases
of the H-Cover problem for symmetric graphs with two vertices given by Kratochvíl et al. <cit.> and Bok et al. <cit.>. Luckily, all the polynomially solvable cases can be solved via 2-Sat, and hence the formula φ is simply obtained as the conjunction of subformulas for pairs of (not necessarily distinct) colors i,j.
Now we proceed to the NP-complete cases. We will only prove the dichotomy result as stated, i.e., for general inputs, not to overwhelm the reader with technical details of proving NP-hardness for simple input graphs. This will be subject of a strong dichotomy result for a larger class of target graphs in a forthcoming paper <cit.>. The proof below is based on the description of monochromatic graphs that define NP-hard instances of the covering problem:
The problem F(a,b)-Cover is NP-complete even for simple input graphs when a≥ 2 and a+b≥ 3 <cit.>,
The problem W(k,m,ℓ,p,q)-Cover is NP-complete even for simple bipartite graphs when ℓ≥ 1, k+2m=q+2p>0 and k+2m+ℓ≥ 3 <cit.>,
The problem WD(m,ℓ,m)-Cover is NP-complete for ℓ≥ 1, m>0 and m+ℓ≥ 3 <cit.> (here WD(m,ℓ,m) is the directed graph with two vertices, m directed loops incident with each of the vertices, and ℓ directed edges in each direction between the two vertices).
* The graph H has only one vertex. Let H_i be the subgraph of H for which the H_i-Cover problem is NP-complete.
By H' we denote the complement of H_i in H. Let G_i be the graph for which
the covering to H_i is questioned. We create a graph G from G_i and |V(G_i)| copies of H'
by identifying the vertex of each copy of H' with a vertex of G_i. Clearly the size of G is linear in the size of G_i. We claim that G covers H if and only if G_i covers H_i.
When G_i covers H_i, then we extend the covering to each copy of H' by the identity mapping on H'. On the other hand the restriction of a covering projection G ⟶ H to the subgraph G_i is a covering projection to H_i.
* The graph H has two vertices and it is not regular. Let H^u_i be a monochromatic subgraph induced by one of the vertices of H that defines an NP-complete covering problem. We apply the same approach as in the previous case, we just use H_i^u instead of H_i.
* The graph H has two vertices and it is regular.
In this case, we will exploit the concept of a (“categorical”) graph product: For a colored graph G, the product G× 2 has as the dart set the Cartesian product D(G) ×{1,2}. To simplify our expressions we use d_1 for (d,1) and u_1 for u×{1} when the use of indices cannot be misinterpreted.
The two darts d_1,d_2 have the same color as d.
Every vertex
u∈ V(G) gives rise to two vertices u_1 and u_2 of the same color as u.
Every semi-edge s={d}∈ S gives rise to a normal edge {d_1,d_2}, while every loop or normal edge
{d,d'}∈ L ∪ E gives rise to two normal edges {d_1,d'_2} and {d'_1,d_2}. Note that G× 2 has no semi-edges nor loops, but it may have multiple normal edges.
Observe that mapping both d_1,d_2 onto d for every d∈ D, we get a covering projection G× 2 ⟶ G. The graph G× 2 is also referred to as the canonical double cover of G, since any bipartite graph covers G if and only if it covers G× 2.
* Let H_i be a spanning monochromatic subgraph of H for which H_i-EquitableCover is NP-complete. Let H' be the complement of H_i in H and let G_i be the graph for which
the covering to H_i is questioned.
* For connected H_i, we may assume that G_i is connected as well. We take two copies of G_i, one copy of G_i × 2
and |V(G_i)| copies of H'× 2. If u is a vertex of G_i, let u' and u” be the two vertices corresponding to u in the two copies of G_i, while u”'_1 and u”'_2
be the vertices that arise from u in G_i × 2. Analogously let vertices v_1,v_2,w_1,w_2 be the vertices obtained from v and w of H' in H'× 2. We form G by choosing for each u∈ V(G_i)
a unique copy of H'× 2 and identifying
the pairs
(u',v_1), (u”,v_2), (u”'_1,w_1) and (u”'_2,w_2),
where v_1,v_2,w_1,w_2 are taken from the chosen copy of H'× 2,
see Figure <ref>
Observe that if a bipartite graph Γ covers the regular 2-vertex graph H via a mapping f: Γ⟶ H, then the companion mapping (which we call the swap) f defined on vertices by f(u)=w iff f(u)=v, also determines a covering projection f:Γ⟶ H (because f is a degree-obedient vertex mapping, cf. <cit.>).
We claim that G covers H if and only if G_i covers H_i.
If G_i covers H_i via a covering projection f, then use this f on the 2 copies of G_i and use its swap on G_i × 2, and denote this vertex mapping by g. For every u∈ V(G_i), we get either g(u')=g(u”)=v and g(u_1”')=g(u_2”')=w, or vice versa. Thus on each copy of H'× 2 we have obtained a mapping that can be extended to a covering projection. Their union, together with g, is a covering projection from G to H.
On the other hand, the restriction of a covering projection G⟶ H to G_i yields a covering projection G_i⟶ H_i.
Since H_i is connected, this is an equitable covering.
(Disconnected H_i would yield only a locally
bijective homomorphism G_i⟶_lb H_i.)
* For disconnected H_i, we first recall that the H_i-EquitableCover problem is polynomial-time solvable if each component of H_i is incident with at most one semi-edge or the degree of H_i is two.
Let H_i^+ be the component of H_i with the maximum number of semi-edges, i.e., at least two, let v be the vertex of H_i^+ and let H_i^- be its complement in H_i. We reduce from the NP-complete problem H_i^+-Cover. (Note that H_i^--Cover could be polynomially solvable.)
Let us have a
connected graph G_i as an instance of H_i^+-Cover.
We use G_i together with |V(G_i)| copies of H_i^- and
|V(G_i)| copies of H'.
For each vertex u∈ V(G_i), we take a copy of H' and identify its vertex v with u, while the other vertex w of this copy of H' is identified with the vertex of H_i^-, see Figure <ref>. This concludes the construction of G.
Again we claim that G⟶ H if and only if G_i⟶ H_i^+. For the “only if" direction,
observe that a copy of H_i^- may cover H_i^+ only if these graphs are isomorphic. Hence we may assume that every copy of H_i^- is mapped on the subgraph H_i^-, and then
the copy of G_i is mapped onto H_i^+.
On the other hand, if G_i⟶ H_i^+, the desired covering projection G⟶ H is constructed from this mapping on G_i, combined with mapping every copy of H' and every copy H_i^- by identity mappings onto H' and H_i^-, respectively.
* Let H_i,j-EquitableCover be NP-complete for some bi-colored subgraph H_i,j, i≠ j. Note that H_i,j is a directed graph, and hence it is connected (since one-vertex directed graphs determine polynomial-time solvable instances of graph covers).
Then we perform the same construction of G
from two copies of the instance G_i,j, a copy of G_i,j× 2 and
|V(G_i,j)| copies of H'× 2 that are merged in the same way as in the connected subcase of 3.a). The arguments are then identical.
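The product G × 2 used throughout case 3 is straightforward to compute in the dart model. The following sketch (our own illustration, using a dictionary-based representation of darts, links and colors) builds the canonical double cover together with its coloring.

def double_cover(vertex_darts, links, colour):
    # vertex_darts: {vertex: [darts]}, links: list of frozensets of darts
    # (1-element set = semi-edge, 2-element set = loop or normal edge),
    # colour: {dart or vertex: colour}.  Returns the same data for G x 2.
    vd2 = {(u, a): [(d, a) for d in ds] for u, ds in vertex_darts.items() for a in (1, 2)}
    col2 = {(x, a): colour[x] for x in colour for a in (1, 2)}
    links2 = []
    for link in links:
        if len(link) == 1:                      # semi-edge {d} -> normal edge {d1, d2}
            (d,) = tuple(link)
            links2.append(frozenset({(d, 1), (d, 2)}))
        else:                                   # loop or edge {d, d'} -> {d1, d'2} and {d'1, d2}
            d, e = tuple(link)
            links2.append(frozenset({(d, 1), (e, 2)}))
            links2.append(frozenset({(e, 1), (d, 2)}))
    return vd2, links2, col2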
§ CONCLUSION
The main goal of this paper was to point out that the generalization of the notion of graph covers of connected graphs to covers of disconnected ones is not obvious. We have presented three variants, depending on whether the projection should be or does not need to be globally surjective, and if all vertices should be or do not need to be covered the same number of times. We argue that the most restrictive variant, which we call equitable covers, is the most appropriate one, namely from the point of view of covers of colored graphs.
We have compared the computational complexity aspects of these variants and show that two of them, surjective and equitable covers, possess the naturally desired property that H-Cover is polynomially solvable if covering each component of H is polynomially solvable, and NP-complete if covering at least one component of H is NP-complete.
In the last section we review the extension of graph covers to covers of colored graphs, recall that colors can be encoded by non-coverable patterns in simple graphs, and discuss this issue in detail for the case when semi-edges are allowed. With this new feature we conclude the complete characterization of the computational complexity of covering 2-vertex colored graphs, initiated (and proved for graphs without semi-edges) 24 years ago in <cit.>.
Last but not least, some of the hardness reductions are based on a newly introduced notion of ▹ order of connected graphs, which expresses inclusions among classes of simple covers of the graphs. We believe that a better understanding of this relation would shed more insight into the concept of graph covers as a whole, and state two open problems about this relation.
§ ACKNOWLEDGMENTS
* Jan Bok: Supported by the ANR project GRALMECO (ANR-21-CE48-0004).
* Nikola Jedličková: Supported by research grant GAČR 20-15576S of the Czech Science Foundation, by SVV–2020–260578, and GAUK 1580119.
* Jiří Fiala and Jan Kratochvíl: Supported by research grant GAČR 20-15576S of the Czech Science Foundation.
* Michaela Seifrtová: Supported by research grant GAČR 19-17314J of the Czech Science Foundation.
The authors thank Ondra Suchý for valuable comments, namely for pointing out the reference <cit.>.
|
http://arxiv.org/abs/2306.09693v1
|
20230616085217
|
Matching Fields in Macaulay2
|
[
"Oliver Clarke"
] |
math.CO
|
[
"math.CO",
"math.AC",
"68W30, 14M25, 52B20, 52B40"
] |
This article introduces the package MatchingFields for Macaulay2 and highlights some open problems.
A matching field is a combinatorial object whose data encodes a candidate toric degeneration of a Grassmannian or partial flag variety of type A. Each coherent matching field is associated to a certain maximal cone of the respective tropical variety. The MatchingFields package provides methods to construct matching fields along with their rings, ideals, polyhedra and matroids. The package also supplies methods to test whether a matching field is coherent, linkage and gives rise to a toric degeneration.
§ INTRODUCTION
A matching field comes in two flavours: a Grassmannian (k,n) matching field is an ordering of the elements of each k-subset of [n] := {1, …, n } and a flag (j_1, …, j_k; n) matching field is a set {L_1, …, L_k} where L_i is a Grassmannian matching field for (j_i, n) for each i ∈ [k].
Grassmannian matching fields were introduced by Sturmfels and Zelevinsky <cit.> to study the Newton polytope of a product of maximal minors.
In recent work, matching fields are used to parametrise a family of projective toric varieties, which can be thought of as candidates for the special fiber of a toric degeneration of a Grassmannian or flag variety. See <cit.>. A matching field is said to be coherent if it is induced by a weight matrix w.
In this case, the matching field is said to give rise to a toric degeneration if the Plücker forms are a SAGBI basis for the Plücker algebra with respect to the weight order w. Whenever this happens, the image of w under the tropical Stiefel map <cit.> lies in the relative interior of a top-dimensional prime cone of the tropical Grassmannian <cit.> or flag variety with respect to the trivial valuation.
The toric variety associated to a matching field is defined by its ideal,
see Definition <ref>, or in terms of the normal fan of the matching field polytope,
see Section <ref>. For some families of matching fields, it is known that the matching field polytopes are related by sequences of combinatorial mutations <cit.>. The property of a matching field giving rise to a toric degeneration has formulations in terms of the matching field ideal, and in terms of properties of its polytope that are invariant under mutation; see Propositions <ref>, <ref>, <ref>, and <ref>.
The question of determining which matching fields give rise to toric degenerations is an open problem. For Grassmannians (2,n) and (3,m) with m ∈{6,7,8}, it is possible to compute the tropical Grassmannian explicitly. More generally, the use of combinatorial mutations has led to the construction of families of matching fields that give rise to toric degenerations. Examples of toric degenerations also arise from representation theory. For example, the Gelfand-Tsetlin degeneration and Fang-Fourier-Littleman-Vinberg degeneration both have a description in term of matching fields <cit.>.
In this article, we introduce the package MatchingFields for Macaulay2 <cit.>. The package facilitates working with matching fields, their ideals and polytopes, and provides methods for testing whether they are coherent and give rise to toric degenerations. Additionally, the package allows the user to construct: matroid subdivisions; algebraic matroids; and tope fields. We give examples that show how to use the package and provide exposition about techniques used to perform computations. We highlight some open problems about matching fields; for example, the matching field description of the algebraic matroid of the Grassmannian and a tope description of the free resolution of the matching field ideal. See Conjecture <ref> and Remark <ref>, respectively.
Overview.
In Section <ref>, we fix our setup for Plücker algebras, matching field ideals, and polytopes. In Section <ref> we recall the Plücker embedding of type-A partial flag varieties into a product of projective spaces and fix our notation for the Plücker algebra and Plücker ideal. In Section <ref>, we recall the definition of matching field ideals and algebras. In particular, we recall what it means for a matching field to give rise to a toric degeneration of the partial flag variety. If this happens, then we say L is toric, see Definition <ref>. In Section <ref>, we recall the definition of the matching field polytope and Newton-Okounkov body. We prove Proposition <ref>, which shows that a coherent matching field is toric if and only if the matching field polytope has maximal volume.
In Section <ref>, we introduce the package MatchingFields.
In Section <ref>, we show how to construct matching fields and view their basic properties. In particular, we define the weight matrix cone of a matching field, which admits a test for whether a matching field is coherent, see Definition <ref> and Proposition <ref>. In Section <ref>, we construct the ideals and rings associated to matching fields. In particular, we explain how to check directly whether the Plücker forms are a SAGBI basis using the package SubalgebraBases <cit.>. In Section <ref>, we showcase the other functionality of the package. We explain the construction of: matching field polytopes and Newton-Okounkov bodies; matroid subdivisions of the hypersimplex induced by points in the Dressian; matching field matroids that decompose the algebraic matroid of the Grassmannian; and tope fields and their amalgamations.
§ BACKGROUND
In this section, we recall the basic definitions and results about toric degenerations arising from matching fields. Further details can be found in <cit.>. Our conventions for weighted polynomial rings are as follows. Let K be a field and Y = K[y_1, …, y_n] a polynomial ring. A weight for Y is a vector w ∈^n. The weight of a monomial cy^u ∈ Y with coefficient c ∈ K \{0} and exponent u ∈^n is the dot product w(c y^u) = u · w of u and w. The weight w(f) of a polynomial f ∈ Y is the minimum weight of a term of f. The initial form (or leading terms) of a polynomial f = ∑_u c_u y^u is the sum of minimum-weight terms of f:
_w(f) = ∑_{u · w = w(f)} c_u y^u.
Note that the initial form of a polynomial need not be a monomial. A monomial order ≺ is said to refine a weight order w if for any polynomial f, we have _≺(f) = _≺(_w(f)).
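As a small illustration of these conventions, the following sketch (our own; a polynomial is represented by a dictionary from exponent vectors to coefficients) computes the initial form with respect to a weight.

def weight(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

def initial_form(f, w):
    # f: {exponent tuple: coefficient}; keep the terms of minimum w-weight
    m = min(weight(u, w) for u in f)
    return {u: c for u, c in f.items() if weight(u, w) == m}

# example: f = y1^2 - y2*y3 with w = (1, 0, 3); the weights are 2 and 3, so in_w(f) = y1^2
print(initial_form({(2, 0, 0): 1, (0, 1, 1): -1}, (1, 0, 3)))  # {(2, 0, 0): 1}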
§.§ Plücker algebras
Throughout, we fix the following setup and define the Plücker algebra for partial flag varieties of type A. Let R = K[x_{i,j} : i ∈ [n-1], j ∈ [n]] be a polynomial ring whose variables are arranged into an (n-1) × n matrix X = (x_{i,j}).
Fix an indexing set J = {j_1 < … < j_k }⊆ [n-1].
The partial flag variety F = Fl(j_1, …, j_k; n), as a set, is the collection of chains of vector subspaces of K^n:
F =
{V_1 ⊂ V_2 ⊂…⊂ V_k : V_i ⊆ K^n and dim(V_i) = j_i for all 1 ≤ i ≤ k}.
If k = 1, then F = Fl(j_1; n) = Gr(j_1, n) is the Grassmannian of j_1-dimensional subspaces of K^n.
We embed F into a product of projective spaces via the Plücker embedding.
Explicitly, for each chain of vector subspaces V = (V_1 ⊂…⊂ V_k) ∈ F we fix an (n-1) × n matrix M_V such that V_i is the row-span of rows 1,2, …, j_i of M_V. We map M_V into the product of projective spaces
ℙ := ℙ^{\binom{n}{j_1} - 1}×ℙ^{\binom{n}{j_2} - 1}×…×ℙ^{\binom{n}{j_k} - 1}
as follows. For each j ∈ J and each j-subset I of [n], the I-th coordinate of the image of M_V in the factor ℙ^{\binom{n}{j} - 1} of ℙ is the minor of M_V on the columns indexed by I and rows indexed by 1, 2, …, j. A little linear algebra shows that the map F → ℙ taking V to the point in ℙ described above is injective and well-defined, i.e., the map does not depend on the choice of matrices M_V. Therefore, the map defines the multi-projective Plücker embedding of F. Sometimes, it is convenient for us to consider F as a projective variety. Concretely, we compose the Plücker embedding with the Segre embedding of ℙ into projective space. The coordinates of the embedding are the products of the transversals of coordinates of ℙ.
The Plücker algebra A is the coordinate ring of F under the Plücker embedding. Explicitly, we take A to be the subalgebra of R = K[x_{i,j}] given by
A = K[det(X_I) : I ⊆ [n], |I| ∈ J] ⊆ R
where X_I is the submatrix of X with columns indexed by I and rows indexed by 1,2, …, |I|.
We will also consider the presentation of A as the quotient of the polynomial ring S := K[P_I : I ⊆ [n], |I| ∈ J] by the Plücker ideal, namely the kernel of the ring map S → R sending P_I ↦ det(X_I).
The Plücker ideal is the vanishing ideal of F ⊆ ℙ under the Plücker embedding.
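As a quick illustration, independent of the package, the following sketch computes the Plücker coordinates of a 2 × 4 matrix directly from its 2 × 2 minors and checks the single quadratic relation generating the Plücker ideal of the Grassmannian of 2-planes in 4-space.

from itertools import combinations

M = [[1, 2, 3, 4],
     [5, 7, 11, 13]]

# Plücker coordinate on columns I = (i, j) is the 2 x 2 minor of M on those columns
p = {I: M[0][I[0]] * M[1][I[1]] - M[0][I[1]] * M[1][I[0]]
     for I in combinations(range(4), 2)}

lhs = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(p, lhs)  # lhs == 0 for every 2 x 4 matrix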
§.§ Matching field ideals
A matching field for the Grassmannian (k,n) is an ordering of the elements of each k-subset of [n].
The ordering of a subset {i_1, i_2, …, i_k}⊆ [n] is a tuple (i_1, i_2, …, i_k) of the matching field. A flag matching field for (j_1, …, j_k; n) is a collection of matching fields L = {L_1, …, L_k} where L_i is a matching field for (j_i, n). The set of tuples of L is the union of the set of tuples of each L_i.
Fix a matching field L = {L_1, …, L_k} for the partial flag variety Fl(j_1 < … < j_k; n) and write J = {j_1, …, j_k}. For each tuple (i_1, …, i_ℓ) of L, with underlying set I = {i_1, …, i_ℓ},
we define the monomial m_I = (-1)^c x_{1, i_1} x_{2, i_2}… x_{ℓ, i_ℓ}∈ R where c = |{(a, b) ∈ [ℓ] × [ℓ] : a < b, i_a > i_b }| is the number of inversions of the tuple. Equivalently, the coefficient (-1)^c of m_I is such that m_I is a term of det(X_I).
Recall the rings R = K[x_{i,j}] and S = K[P_I]. With the above setup, we define the monomial algebra of the matching field
K[L] := K[m_I : I ⊆ [n], |I| ∈ J] ⊆ R.
The matching field ideal of L is the presentation ideal of K[L] given by I_L := ker(S → R, P_I ↦ m_I).
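For illustration, the sign and support of m_I can be computed directly from a tuple (a sketch of our own; tuples are given as sequences of column indices, and the output lists the (row, column) indices of the variables).

def matching_field_monomial(tup):
    # sign is (-1)^(number of inversions); row a of X pairs with the a-th entry of the tuple
    inversions = sum(1 for a in range(len(tup)) for b in range(a + 1, len(tup)) if tup[a] > tup[b])
    sign = (-1) ** inversions
    return sign, tuple((a + 1, i) for a, i in enumerate(tup))

print(matching_field_monomial((2, 1)))     # (-1, ((1, 2), (2, 1))), i.e. -x_{1,2} x_{2,1}
print(matching_field_monomial((1, 3, 4)))  # (1, ((1, 1), (2, 3), (3, 4)))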
It is helpful to imagine K[L] as a `candidate initial algebra' of the Plücker algebra A. We say that a matching field L is coherent if there is a weight matrix w ∈ ℝ^{(n-1) × n} for the polynomial ring R such that _w(det(X_I)) = m_I for each subset I ⊆ [n] with |I| ∈ J. Note, if a weight matrix w exists then it uniquely identifies all tuples of the matching field L. In this case, we say that L is the matching field induced by w.
Let L be a coherent matching field induced by a weight matrix w. We say that L gives rise to a toric degeneration of F if the initial algebra _w(A) := K[_w(f) : f ∈ A] of the Plücker algebra is equal to K[L], the algebra of the matching field. For ease of notation, we say L is toric whenever L gives rise to a toric degeneration of F. Equivalently, with the language of Remark <ref>, L is toric if the generators of A form a SAGBI basis with respect to the weight order w.
Note that the choice of weight matrix w does not affect whether L is toric. That is, if w' is another weight matrix that induces L, then we have _w(A) = [L] = _w'(A). So, the property of being toric is a well-defined property of L.
The property of being toric has an equivalent formulation in terms of the Plücker ideal and the matching field ideal I_L. Given a weight matrix w that induces a coherent matching field L, observe that w is a weight for R = K[x_{i,j}]. We define the induced weight vector on the polynomial ring S by assigning to each variable P_I the weight
w(m_I) of the corresponding monomial. The following is an application of <cit.>.
Let L be a coherent matching field induced by a weight matrix w. Then L is toric if and only if the initial ideal of the Plücker ideal with respect to the induced weight vector is equal to I_L.
The diagonal matching field is defined so that the entries of each tuple are increasing. For instance, the diagonal matching field L for (3,6) has tuples:
(1,2,3),
(1,2,4),
(1,3,4),
(2,3,4),
(1,2,5), …, (4,5,6).
For each 3-subset I ⊆ [6], the monomial m_I is the leading diagonal term of the maximal minor det(X_I):
m_123 = x_1,1x_2,2x_3,3,
m_124 = x_1,1x_2,2x_3,4, …,
m_456 = x_1,4x_2,5x_3,6.
In general, diagonal matching fields are coherent as they are induced by the weight matrix
w = [ 0 0 0 … 0; n n-1 n-2 … 1; 2n 2(n-1) 2(n-2) … 2; ⋮ ⋮ ⋮ ⋱ ⋮; (n-2)n (n-2)(n-1) (n-2)(n-2) … n-2 ].
The induced weight vector for the diagonal matching field of (3,6) is given by
w_123 = 13, w_124 = 11, w_134 = 10, w_234 = 10, w_125 = 9, …, w_456 = 4.
The diagonal matching field is toric for any Grassmannian and partial flag variety <cit.>. This toric degeneration is well-studied and naturally arises from the representation theory of algebraic groups <cit.>. It is commonly known as the Gelfand-Tsetlin degeneration.
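The tuples and induced weight vector in the example above can be reproduced with a short script (an illustration of the minimum-weight convention, independent of the package; for (3,6) only the first three rows of the weight matrix are relevant).

from itertools import combinations, permutations

def induced_matching_field(w, k, n):
    # for each k-subset, the tuple is the ordering of minimum weight and the
    # induced weight vector records that minimum
    tuples, induced = {}, {}
    for I in combinations(range(1, n + 1), k):
        best = min(permutations(I), key=lambda t: sum(w[a][t[a] - 1] for a in range(k)))
        tuples[I] = best
        induced[I] = sum(w[a][best[a] - 1] for a in range(k))
    return tuples, induced

w = [[0, 0, 0, 0, 0, 0],
     [6, 5, 4, 3, 2, 1],
     [12, 10, 8, 6, 4, 2]]
tuples, induced = induced_matching_field(w, 3, 6)
print(tuples[(1, 2, 3)], induced[(1, 2, 3)])  # (1, 2, 3) 13
print(tuples[(4, 5, 6)], induced[(4, 5, 6)])  # (4, 5, 6) 4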
In the MatchingFields package, we use the characterisation in Proposition <ref> to test whether a matching field is toric. This is because Macaulay2 is specialised in computing Gröbner bases. In particular, our implementation computes a partial Gröbner basis for the initial ideal of the Plücker ideal with respect to the induced weight vector. The matching field ideal I_L is a toric ideal so we efficiently compute it using the software package 4ti2 <cit.>.
Given a finite set of polynomials f_1, …, f_s of a polynomial ring equipped with a fixed term order. If the initial forms generate the initial algebra K[(f_1), …, (f_s)] = (K[f_1, …, f_s]), then f_1, …, f_s is called a SAGBI (Subalgebra Analogue of Gröbner Bases for Ideals) basis for K[f_1, …, f_s]. More generally, SAGBI bases are defined for quotients of polynomial rings <cit.> and finitely generated algebras equipped with discrete valuations <cit.>. The name SAGBI basis is typically used for subrings of polynomial rings or quotients of polynomial rings and Khovanskii Basis for algebras with valuations. However, the literature is varied in its naming conventions and also includes canonical bases and subalgebra bases.
§.§ Matching field polytopes and Newton-Okounkov bodies
Fix a matching field L = {L_1, …, L_k} for the partial flag variety F = (j_1 < … < j_k; n) and let J = {j_1, …, j_k}. For each i ∈ [k], define the polytope P_i ⊆^(n-1) × n as the convex hull of the exponent vectors of the monomials m_I for each subset I ⊆ [n] with |I| = j_i. The matching field polytope of L is the Minkowski sum P_L = P_1 + P_2 + … + P_k. Observe that P_L is a lattice polytope, i.e., all its vertices lie in ^(n-1) × n.
We recall the definition of the Ehrhart polynomial. Let Q ⊆ ℝ^d be a lattice polytope. The Ehrhart polynomial E_Q(n) ∈ ℚ[n] is the polynomial such that for each n ∈ ℕ, the value E_Q(n) = |nQ ∩ ℤ^d| is the number of lattice points of the nth dilate of Q.
The matching field polytope gives a characterisation of toric matching fields. The result below follows directly from <cit.> and <cit.>.
Let L and L' be coherent matching fields for the same flag variety and assume L is toric. Then L' is toric if and only if the Ehrhart polynomials E_P_L and E_P_L' coincide.
Typically the toric matching field is taken to be the diagonal matching field. We note that the following stronger version of this result holds.
Let L be a coherent matching field for F. Then L is toric if and only if the volume of P_L is equal to the volume of the diagonal matching field polytope (Gelfand-Tsetlin polytope) for F, which is maximal among all coherent matching fields for F.
The proof of this proposition is most easily seen from the perspective of Newton-Okounkov bodies, so we postpone its proof.
In <cit.>, the proof of Proposition <ref> has two parts. First, the Hilbert function of S/I_L is equal to the Ehrhart polynomial of P_L. Second, the Hilbert function of the quotient of S by the initial ideal of the Plücker ideal agrees with that of the quotient of S by the Plücker ideal itself and, by <cit.>, the initial ideal is contained in I_L. So L is toric if and only if E_P_L is equal to the Hilbert function of the coordinate ring of F. By Proposition <ref>, it suffices to check only the volume of P_L, i.e., the leading coefficient of the Ehrhart polynomial. Moreover, only the matching fields whose polytopes have maximal volume, such as the Gelfand-Tsetlin polytope <cit.>, have the toric property.
Newton-Okounkov bodies.
Fix positive integers k and n and write 0 = (0, …, 0) ∈^k for the all-zeros vector and 1 = (1, …, 1) ∈^k for the all-ones vector.
Let E ⊆ ℤ^k ×ℤ^n ⊂ ℝ^k ×ℝ^n be an affine semigroup, i.e., for all u, v ∈ E we have u+v ∈ E, and assume that E ∩ ({0}×ℤ^n) = ∅. For each u = (u_1, …, u_{k+n}) ∈ E, we call (u_1, …, u_k) the degree of u. The Newton-Okounkov body of E is
Δ(E) := cone(E) ∩ ({1}×ℝ^n),
where cone(E) denotes the Euclidean closure of the convex cone spanned by E. The Newton-Okounkov body encodes information about the limiting behaviour of E <cit.>.
Consider the Plücker algebra A ⊆ R for the partial flag variety Fl(j_1, …, j_k; n) and fix a term order ≺ on R. For each f ∈ A, we define its degree d(f) ∈^k by first defining d(det(X_I)) = e_i ∈^k, the ith standard basis vector, for each I ⊆ [n] with |I| = j_i. The initial term _≺(f) is a monomial that appears in the expansion of some product of determinants ∏_i det(X_{I_i}). We define d(f) := ∑_i d(det(X_{I_i})). It is straightforward to show that d(f) is well-defined, i.e., it does not depend on the choice of the sets I_i.
The affine semigroup associated to A and ≺ is the set of exponent vectors of initial terms of elements of A:
E(A, ≺) := {(d(f), e) ∈^k ×^{(n-1) × n} : f ∈ A and _≺(f) = x^e}.
Suppose that L is a coherent matching field induced by a weight matrix w. Let R be the ambient polynomial ring containing the Plücker algebra A = K[det(X_I)] ⊆ R and ≺ be any monomial order on R that refines the weight order w. Let E = E(A, ≺) be the affine semigroup above. Observe that the initial forms _w(det(X_I)) = _≺(det(X_I)) generate the initial algebra _≺(A) if and only if the exponents of _w(det(X_I)) are the rays that generate the cone over E. In other words, we have the following.
The matching field L is toric if and only if the matching field polytope P_L coincides with the Newton-Okounkov body Δ(E).
We now give a proof of Proposition <ref>.
Let w be a weight vector that induces the matching field L and ≺ be any monomial order that refines the weight order w. Let E = E(A, ≺) be the semigroup defined above. The matching field polytope P_L ⊆Δ(E) is a subset of the Newton-Okounkov body. The normalised volume of Δ(E) coincides with the degree of F under the Plücker embedding, hence it does not depend on the choice of w. So, by Proposition <ref>, the matching field L is toric if and only if P_L and Δ(E) have the same volume. Since the diagonal matching field is toric, the volume of diagonal matching field polytope is equal to Δ(E). In particular, it is maximal among all polytopes of coherent matching fields.
§ MATCHING FIELDS IN MACAULAY2
We introduce the package MatchingFields for Macaulay2. There are two main types of objects introduced by the package: GrMatchingField and FlMatchingField, which represent Grassmannian and flag matching fields respectively. The code throughout is collected in a file that accompanies this article.
§.§ Constructing matching fields
The diagonal matching field is defined with the function diagonalMatchingField. The tuples of a matching field are listed with the function getTuples and appear in reverse lexicographic order on the underlying set. For flag matching fields, the subsets are first ordered by size.
Let D be the diagonal matching field for (3,6)
and D' be the diagonal matching field for (1,2,3; 6). The tuples of D' are: (i) for i ∈ [6]; (i,j) with 1 ≤ i < j ≤ 6; and (i,j,k) with 1 ≤ i < j < k ≤ 6. The matching fields D and D' are defined and their tuples listed as follows.
[caption = Diagonal matching field]
i1 : needsPackage "MatchingFields"
o1 = MatchingFields
o1 : Package
i2 : D = diagonalMatchingField(3, 6)
o2 = Grassmannian Matching Field for Gr(3, 6)
o2 : GrMatchingField
i3 : getTuples D
o3 = 1, 2, 3, 1, 2, 4, 1, 3, 4, 2, 3, 4, 1, 2, 5, 1, 3, 5, 2, 3, 5, 1, 4, 5, 2, 4, 5, 3, 4, 5, 1, 2, 6, 1, 3, 6, 2, 3, 6, 1, 4, 6, 2, 4, 6, 3, 4, 6, 1, 5, 6, 2, 5, 6, 3, 5, 6, 4, 5, 6
o3 : List
i4 : D' = diagonalMatchingField(1,2,3, 6)
o4 = Flag Matching Field for Fl(1, 2, 3; 6)
o4 : FlMatchingField
i5 : getTuples D'
o5 = 1, 2, 3, 4, 5, 6,
1, 2, 1, 3, 2, 3, 1, 4, 2, 4, 3, 4, 1, 5, 2, 5, 3, 5, 4, 5, 1, 6, 2, 6, 3, 6, 4, 6, 5, 6,
1, 2, 3, 1, 2, 4, 1, 3, 4, 2, 3, 4, 1, 2, 5, 1, 3, 5, 2, 3, 5, 1, 4, 5, 2, 4, 5, 3, 4, 5, 1, 2, 6, 1, 3, 6, 2, 3, 6, 1, 4, 6, 2, 4, 6, 3, 4, 6, 1, 5, 6, 2, 5, 6, 3, 5, 6, 4, 5, 6
o5 : List
The function matchingFieldFromPermutation constructs a matching field B_σ, described in <cit.>, for some permutation σ∈ S_n. These matching fields are induced by a weight matrix that is based on the diagonal weight matrix, as in Example <ref>, with the entries in the second row permuted by σ. The function getWeightMatrix shows the weight matrix used to induce the matching field.
Let σ = (1,2,3,6,5,4) be a permutation. Consider the matching field B_σ for (3,6) from <cit.>. The matching field is induced by the weight matrix
M_σ = [ 0 0 0 0 0 0; 1 2 3 6 5 4; 30 24 18 12 6 0 ].
We construct B_σ using the package as follows.
[caption = Matching field from a permutation]
i6 : L = matchingFieldFromPermutation(3, 6, 1,2,3,6,5,4)
o6 = Grassmannian Matching Field for Gr(3, 6)
o6 : GrMatchingField
i7 : getWeightMatrix L
o7 = | 0 0 0 0 0 0 |
| 1 2 3 6 5 4 |
| 30 24 18 12 6 0 |
3 6
o7 : Matrix ZZ <— ZZ
The matching fields B_σ parametrised by permutations generalise the family of block diagonal matching fields, which were originally defined in <cit.>. The two-block diagonal matching field B_i for some i ∈ [n] is the matching field associated to the permutation (i, i-1, …, 2,1,n, n-1, …, i+2, i+1).
Block diagonal matching fields are known to give rise to toric degenerations of: Grassmannians and their Schubert and Richardson varieties <cit.> and flag varieties <cit.>.
Moreover, the polytopes of these matching fields are related by combinatorial mutations <cit.>, which are certain piecewise linear maps that preserve the Ehrhart polynomial.
The functions grMatchingField and flMatchingField construct matching fields induced by a weight matrix for the Grassmannian and flag variety respectively. For the Grassmannian (k,n), the parameters k and n are determined by the number of rows and columns of the matrix respectively. For the flag variety (j_1, …, j_k; n), the list j_1, …, j_k must be supplied as the first argument.
Let L_1 be the matching field for (2, 6) induced by the weight matrix w_1 and let L_2 be the matching field for (1,2; 3) induced by the weight matrix w_2 where
w_1 =
[ 0 0 0 0 0 0; 2 4 1 3 6 5 ] and
w_2 =
[ 0 0 0; 3 1 2 ].
These matching fields are constructed and their tuples computed as follows.
[caption = Matching fields from weight matrices]
i8 : L1 = grMatchingField matrix {{0,0,0,0,0,0}, {2,4,1,3,6,5}}
o8 = Grassmannian Matching Field for Gr(2, 6)
o8 : GrMatchingField
i9 : getTuples L1
o9 = {{2, 1}, {1, 3}, {2, 3}, {4, 1}, {2, 4}, {4, 3}, {5, 1}, {5, 2}, {5, 3}, {5, 4}, {6, 1}, {6, 2}, {6, 3}, {6, 4}, {5, 6}}
o9 : List
i10 : getWeightMatrix L1
o10 = | 0 0 0 0 0 0 |
| 2 4 1 3 6 5 |
2 6
o10 : Matrix ZZ <— ZZ
i11 : L2 = flMatchingField({1,2}, matrix {{0,0,0}, {3,1,2}})
o11 = Flag Matching Field for Fl(1, 2; 3)
o11 : FlMatchingField
i12 : getWeightMatrix L2
o12 = | 0 0 0 |
| 3 1 2 |
2 3
o12 : Matrix ZZ <— ZZ
i13 : getTuples L2
o13 = {{{1}, {2}, {3}}, {{1, 2}, {1, 3}, {3, 2}}}
o13 : List
Matching fields are directly constructed from their tuples using the functions grMatchingField and flMatchingField. The tuples may be supplied in any order. If a matching field is constructed from its tuples, then the resulting matching field may not be coherent and any subsequent functions that require a coherent matching field will produce an error. If a matching field is coherent, then a weight matrix is automatically constructed for it when required. The function isCoherent is used to check whether a matching field is coherent.
Let L_3 be the matching field for (2,4) with tuples T_3 and L_4 be the matching field for (1,2;3) with tuples T_4 where
T_3 = {
(1,2), (1,3), (4,1), (2,3), (4,2), (3,4)} and
T_4 = {
(1), (2), (3),
(1,2), (1,3), (3,2)
}.
The matching field L_3 is not coherent. To see this, assume that a weight w induces L_3. By adding constant vectors to each column of w, we do not change the induced matching field. So, we may assume that
w =
[ 0 0 0 0; a b c d ]
for some a,b,c,d ∈. Since (1,2) is a tuple, it follows that a < b. Similarly, the tuples (2,3), (3,4) and (4,1) allow us to deduce that a < b < c < d < a, a contradiction. On the other hand, the matching field L_4 is coherent. We perform these computations and find a weight that induces L_4 as follows.
[caption = Matching fields from tuples]
i14 : L3 = grMatchingField(2, 4, {{1,2}, {1,3}, {4,1}, {2,3}, {4,2}, {3,4}})
o14 = Grassmannian Matching Field for Gr(2, 4)
o14 : GrMatchingField
i15 : isCoherent L3
o15 = false
i16 : getWeightMatrix L3
stdio:20:1:(3): error: expected a coherent matching field
i16 : L4 = flMatchingField({1,2}, 3, {{{1}, {2}, {3}}, {{1,2}, {1,3}, {3,2}}})
o16 = Flag Matching Field for Fl(1, 2; 3)
o16 : FlMatchingField
i17 : isCoherent L4
o17 = true
i18 : getWeightMatrix L4
o18 = | 0 0 0 |
| 0 -2 -1 |
2 3
o18 : Matrix ZZ <— ZZ
The method used for checking whether a matching field is coherent is as follows.
Fix a matching field L for the partial flag variety (j_1, …, j_k; n). For each tuple T = (i_1, …, i_s) of L and permutation σ = (σ_1, …, σ_s) ∈ Sym(T) of the entries of T, we define the half-space
H(T, σ) := { ∑_{a=1}^{s} x_{a, i_a} ≤ ∑_{a=1}^{s} x_{a, σ_a} } ⊆ ℝ^((n-1) × n).
The weight matrix cone of L is the intersection ⋂_{(T, σ)} H(T, σ) of all such half-spaces.
The weight matrix cone can be constructed with the package and used to test whether a matching field is coherent.
Let L be a matching field. The weight matrices that induce L are exactly the interior points of the weight matrix cone of L. In particular, L is coherent if and only if the weight matrix cone is full-dimensional.
The proof of this proposition follows immediately from the definitions of the weight matrix cone and of coherent matching field.
§.§ Ideals and algebras of matching fields
Let L be a coherent matching field. We use the function matchingFieldIdeal to construct the matching field ideal of L. We require that L is coherent because the ambient polynomial rings R and S are equipped with the weight orders given by the weight matrix w inducing L and by its induced Plücker weight vector, respectively. The Plücker ideal is constructed with the function plueckerIdeal. To test whether a matching field is toric, we use the function isToricDegeneration, which checks whether the initial ideal in_w of the Plücker ideal is equal to the matching field ideal. See Proposition <ref>.
Let D be the diagonal matching field for (2,4). The Plücker ideal is a principal ideal generated by f = P_14 P_23 - P_13 P_24 + P_12 P_34. Since D is toric, the matching field ideal of D is generated by the initial form in_w(f) = P_14 P_23 - P_13 P_24. These ideals are constructed as follows.
[caption = Matching field ideals]
i1 : needsPackage "MatchingFields";
i2 : D = diagonalMatchingField(2, 4);
i3 : matchingFieldIdeal D
o3 = ideal(p p - p p )
2,3 1,4 1,3 2,4
o3 : Ideal of QQ[p ..p , p , p , p , p ]
1,2 1,3 2,3 1,4 2,4 3,4
i4 : J = plueckerIdeal D
o4 = ideal(p p - p p + p p )
2,3 1,4 1,3 2,4 1,2 3,4
o4 : Ideal of QQ[p ..p , p , p , p , p ]
1,2 1,3 2,3 1,4 2,4 3,4
i5 : ideal leadTerm(1, J) == matchingFieldIdeal D
o5 = true
i6 : isToricDegeneration D
o6 = true
It is possible to test directly whether a matching field is toric with the SubalgebraBases package <cit.>, which allows us to compute the initial algebra of the Plücker algebra. The function plueckerAlgebra produces the Plücker algebra A ⊆ R, the subalgebra generated by the Plücker forms det(X_I). We recall the function sagbi, from the package SubalgebraBases, which produces an object whose generators are a (partial) SAGBI basis for the subalgebra.
We continue with Example <ref>.
Since D is toric, the six Plücker forms det(X_I) form a SAGBI basis for the Plücker algebra A.
[caption = Initial algebra of the Plücker algebra]
i7 : S = plueckerAlgebra D
o7 = QQ[p_0..p_5], subring of QQ[x_(1,1)..x_(2,4)]
o7 : Subring
i8 : transpose gens S
o8 = -2 | x_(1,1)x_(2,2)-x_(1,2)x_(2,1) |
-2 | x_(1,1)x_(2,3)-x_(1,3)x_(2,1) |
-2 | x_(1,2)x_(2,3)-x_(1,3)x_(2,2) |
-2 | x_(1,1)x_(2,4)-x_(1,4)x_(2,1) |
-2 | x_(1,2)x_(2,4)-x_(1,4)x_(2,2) |
-2 | x_(1,3)x_(2,4)-x_(1,4)x_(2,3) |
6 1
o8 : Matrix (QQ[x ..x ]) <— (QQ[x ..x ])
1,1 2,4 1,1 2,4
i9 : transpose gens sagbi S
o9 = -2 | x_(1,1)x_(2,2)-x_(1,2)x_(2,1) |
-2 | x_(1,2)x_(2,3)-x_(1,3)x_(2,2) |
-2 | x_(1,1)x_(2,3)-x_(1,3)x_(2,1) |
-2 | x_(1,3)x_(2,4)-x_(1,4)x_(2,3) |
-2 | x_(1,2)x_(2,4)-x_(1,4)x_(2,2) |
-2 | x_(1,1)x_(2,4)-x_(1,4)x_(2,1) |
6 1
o9 : Matrix (QQ[x ..x ]) <— (QQ[x ..x ])
1,1 2,4 1,1 2,4
Let L be the matching field for (3,6) induced by the weight matrix
M = [ 0 0 0 0 0 0; 18 3 15 6 9 12; 35 28 21 14 7 0 ].
The matching field L is not toric since it is an example of a hexagonal matching field for (3,6) <cit.>. So, for any monomial order on R = QQ[x_i,j] that refines the weight order given by M, any SAGBI basis for the Plücker algebra has more than 20 generators. We perform these computations as follows.
[caption = Initial algebra for a hexagonal matching field]
i10 : M = matrix {{0,0,0,0,0,0}, {18,3,15,6,9,12}, {35,28,21,14,7,0}};
3 6
o10 : Matrix ZZ <— ZZ
i11 : L = grMatchingField M;
i12 : T = plueckerAlgebra L;
i13 : numgens T
o13 = 20
i14 : numgens sagbi T
o14 = 21
§.§ Polyhedra and other functions
In this section we explain how to use the MatchingFields package to compute: matching field polytopes and Newton-Okounkov bodies; matroidal subdivisions of hypersimplices arising from the Dressian; algebraic matroids of matching fields, which decompose the algebraic matroid of the Grassmannian; and tope fields and their amalgamations. The matching field polytopes and Newton-Okounkov bodies can be computed for both Grassmannians and flag matching fields. However, the other constructions are for Grassmannian matching fields only. In each of the following parts, we provide the necessary background and explain how to perform the computations using the package.
Polyhedra.
We construct matching field polytopes and Newton-Okounkov bodies, described in Section <ref>, with the functions matchingFieldPolytope and NOBody respectively. The function NOBody uses the SubalgebraBases package to compute a SAGBI basis for the Plücker algebra.
Consider the hexagonal matching field L induced by the weight matrix M from Example <ref>. Let ≺ be the monomial order obtained by refining the weight order M by the graded reverse lexicographic order with respect to
x_1,1 > x_1,2 > … > x_1,6 > x_2,1 > x_2,2 > … > x_3,6.
Let E = E(A, ≺) be the semigroup of the initial algebra in_≺(A). See Section <ref>.
Since L is not toric, the matching field polytope P = P_L is a strict subset of a Newton-Okounkov body Q = Δ(E). We compute P and Q, their normalised volumes, and show that P ⊆ Q using the package as follows.
[caption=Polyhedra associated to L. The outputs of vertices P and vertices Q have been trimmed for brevity. The output shows that the vertices of the Newton-Okounkov body Q are the vertices of the matching field polytope P together with one more vertex, which comes from a degree-2 generator of in_≺(A).]
i1 : needsPackage "MatchingFields";
i2 : L = grMatchingField matrix {{0,0,0,0,0,0}, {18,3,15,6,9,12}, {35,28,21,14,7,0}};
i3 : P = matchingFieldPolytope L
o3 = P
o3 : Polyhedron
i4 : vertices P
o4 = | 1 1 1 0 1 0 0 1 1 0 1 0 0 0 1 1 0 0 1 0 |
...
| 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 |
18 20
o4 : Matrix QQ <— QQ
i5 : (volume P) * (dim P)!
o5 = 38
o5 : QQ
i6 : Q = NOBody L
o6 = Q
o6 : Polyhedron
i7 : vertices Q
o7 = | 1 1 1 0 1 0 0 1 1 0 1 0 0 0 1 1 0 0 1 0 1/2 |
...
| 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1/2 |
18 21
o7 : Matrix QQ <— QQ
i8 : (volume Q) * (dim Q)!
o8 = 42
o8 : QQ
i9 : (vertices Q)_{0 .. 19} == vertices P
o9 = true
Dressians.
The Dressian Dr(k, n) <cit.> is the intersection of the tropical hypersurfaces defined by the 3-term Plücker relations. A tropical polynomial trop(f) is a piecewise-linear convex function defined over ℝ ∪ {∞}, obtained from a polynomial f in x_1, …, x_n by replacing addition with minimum and multiplication with addition. So trop(f) is evaluated as the minimum of a set of linear forms. We call each such linear form a tropical monomial of trop(f). Given a tropical polynomial trop(f) : ℝ^n → ℝ, its tropical hypersurface T(trop(f)) ⊆ ℝ^n is the set of points where the minimum in trop(f) is attained by at least two tropical monomials. The Dressian is given explicitly by

Dr(k, n) := ⋂_(I, a, b, c, d) T( min( P_{I ∪ {a,b}} + P_{I ∪ {c,d}}, P_{I ∪ {a,c}} + P_{I ∪ {b,d}}, P_{I ∪ {a,d}} + P_{I ∪ {b,c}} ) ) ⊆ ℝ^(n choose k)

where the intersection is taken over all (k-2)-subsets I ⊆ [n] and all a < b < c < d in [n] \ I. On the other hand, the tropical Grassmannian Trop(Gr(k,n)) = ⋂_f T(trop(f)) is the intersection of all tropical hypersurfaces, where f runs over every element of the Plücker ideal.
When k = 2, the Dressian coincides with the tropical Grassmannian. For k ≥ 3 and n ≥ 6, the Dressian strictly contains the tropical Grassmannian.
The Dressian admits several combinatorial descriptions. We focus on the description in terms of matroidal subdivisions of hypersimplices. The hypersimplex Δ_k,n ⊆ ℝ^n is the convex hull of the characteristic vectors of the k-subsets of [n]. Suppose that w ∈ ℝ^(n choose k) is any weight vector. We say that the regular subdivision of Δ_k,n given by w is matroidal if, for each maximal cell of the subdivision, the sets indexing the vertices of the cell are the bases of a matroid. The set of weights that give matroidal subdivisions of the hypersimplex Δ_k,n is exactly the set of points of the Dressian Dr(k,n) <cit.>.
In the package MatchingFields, given a coherent matching field L induced by a weight matrix w, its induced weight vector is displayed with the function getWeightPluecker. The subsets associated to the coordinates are listed in reverse-lexicographic order, which coincides with the order of the tuples displayed with getTuples. See Section <ref> and Example <ref>. The matroidal subdivision obtained from the induced weight vector is computed with the function matroidSubdivision. The output is a list whose ith entry is the list of bases of the matroid corresponding to the ith maximal cell of the subdivision.
Let L be the matching field for (3,5) induced by the weight matrix
w =
[ 0 0 0 0 0; 1 3 2 5 4; 10 0 20 40 30 ].
The matroidal subdivision of Δ_3,5 with respect to the induced weight vector w has 3 maximal cells, which are computed as follows.
[caption = Matroidal subdivision of the hypersimplex Δ_3,5]
i10 : L = grMatchingField matrix {{0,0,0,0,0}, {1,3,2,5,4}, {10,0,20,40,30}};
i11 : getWeightPluecker L
o11 = {1, 1, 12, 2, 1, 12, 2, 14, 4, 24}
o11 : List
i12 : netList matroidSubdivision L
+———+———+———+———+———+———+———+———+
o12 = |1, 2, 3|1, 2, 4|1, 2, 5|2, 3, 4|2, 3, 5|1, 3, 4|1, 3, 5| |
+———+———+———+———+———+———+———+———+
|1, 2, 4|1, 2, 5|2, 3, 4|2, 3, 5|2, 4, 5|1, 3, 4|1, 3, 5|1, 4, 5|
+———+———+———+———+———+———+———+———+
|2, 3, 4|2, 3, 5|2, 4, 5|1, 3, 4|1, 3, 5|1, 4, 5|3, 4, 5| |
+———+———+———+———+———+———+———+———+
Algebraic matroids. Let V ⊆ K^n be an irreducible affine algebraic variety. The algebraic matroid M_V of V is the matroid whose independent sets are the subsets S ⊆ [n] such that the image of V under the coordinate projection π : K^n → K^S is full-dimensional. If V is not contained in any coordinate hyperplane, i.e., its ideal is monomial free, then, by <cit.>, the algebraic matroid is preserved under tropicalisation.
Consider the case of the cone over the Grassmannian (2, n), which is an affine variety in K^(n choose 2). By <cit.>, the algebraic matroid of (2,n) is fully determined by the maximal cones of the tropical Grassmannian Trop(Gr(2,n)) whose associated metric trees <cit.> are caterpillar graphs. It is a straightforward observation that the caterpillar graph cones are exactly the cones associated to coherent matching fields.
More generally, consider the cone over the Grassmannian (k,n). Fix a coherent matching field L induced by a weight matrix w that is toric for the Grassmannian. Let V ⊆ ℝ^(n choose k) be the linear span of the cone of the Gröbner fan of the Plücker ideal containing w within its relative interior. The algebraic matroid M_L of L is the matroid on the ground set of k-subsets of [n] realised by V. By <cit.>, it follows that each basis of M_L is a basis of M_Gr(k,n). For the reverse direction, note that not all maximal cones of the tropical Grassmannian arise from matching fields. For example, in the case of (2,n) the non-caterpillar graphs index precisely these cones. However, for all small examples that can currently be computed, we can verify that the algebraic matroids of matching fields are enough to construct the algebraic matroid of the Grassmannian.
Every basis of the algebraic matroid of the Grassmannian is a basis of M_L for some coherent matching field L, i.e.,
Bases(M_Gr(k,n)) = ⋃_{L coherent} Bases(M_L).
The algebraic matroid of L is computed using the function algebraicMatroid. The object returned by this function uses the ground set {0, 1, …, (n choose k) - 1}. To view the circuits and bases of the algebraic matroid in terms of their k-subsets, we use the functions algebraicMatroidCircuits and algebraicMatroidBases respectively.
The ground set of the algebraic matroid M_Gr(2,6) is the edge set E(K_6) of the complete graph K_6. We say that a cycle with labelled vertices v_1, v_2, …, v_m is alternating if v_1 < v_2, v_2 > v_3, v_3 < v_4, …, v_{m-1} < v_m, and v_m > v_1. The independent sets of M_Gr(2,n) are the subgraphs H ⊆ K_n for which there exists a labelling of the vertices such that H does not contain an alternating cycle <cit.>.
Let L be the diagonal matching field for (2,6). The algebraic matroid M_L is realised by the vertices of the matching field polytope P_L. It is straightforward to show that M_L is the graphic matroid of the bipartite graph in Figure <ref>. The graph has 3 connected components and 12 vertices, so M_L has rank 9. Its circuits are the cycles of the graph. We construct the matroid in Macaulay2, show that it has 576 bases, and display seven of its circuits as follows.
[caption = Algebraic matroids of matching fields]
i13 : L = diagonalMatchingField(2, 6);
i14 : algebraicMatroid L
o14 = a "matroid" of rank 9 on 15 elements
o14 : Matroid
i15 : #algebraicMatroidBases L
o15 = 576
i16 : netList (algebraicMatroidCircuits L)_{0 .. 6}
+—————————————————-+
o16 = |set 1, 3, 1, 4, 2, 3, 2, 4 |
+—————————————————-+
|set 1, 3, 1, 5, 2, 3, 2, 5 |
+—————————————————-+
|set 1, 4, 1, 5, 2, 4, 2, 5 |
+—————————————————-+
|set 1, 4, 1, 5, 3, 4, 3, 5 |
+—————————————————-+
|set 1, 3, 1, 5, 2, 3, 2, 4, 3, 4, 3, 5|
+—————————————————-+
|set 1, 3, 1, 4, 2, 3, 2, 5, 3, 4, 3, 5|
+—————————————————-+
|set 2, 4, 2, 5, 3, 4, 3, 5 |
+—————————————————-+
Tope fields.
A tope field is a generalisation of a matching field. Below we give a concise introduction to tope fields; a thorough exposition can be found in <cit.>. Our setup is modified so that it aligns with the implementation in the MatchingFields package. A tope field for (k,n) of type t = (t_1, …, t_s), where k = ∑_i t_i, is a collection of bipartite graphs, called topes, on the vertices (L := [n]) ⊔ (R := [s]) such that the following hold: the collection has one bipartite graph G for each k-subset I ⊆ [n]; the degree vector of the vertices in L, called the left-degree vector, is equal to the characteristic vector of I; and the degree vector of the vertices in R, called the right-degree vector, is equal to the type t. The tope fields associated to matching fields are the tope fields of type (1,1,…, 1). In such a case, the tuple (i_1, …, i_k) of a matching field corresponds to the bipartite graph with edges (i_j, j) for each j ∈ [k] =: R.
We say that a tope field is linkage if, for each (k+1)-subset S ⊆ [n], the bipartite graph on L ⊔ R whose edges are the union of all bipartite graphs of the tope field whose left-degree vector is supported on S is a forest. See <cit.>.
We encode a tope field as a pair (L, t) where L is a Grassmannian matching field and t = (t_1, …, t_s) is the type. Let T = (i_{1,1}, i_{1,2}, …, i_{1,t_1}, i_{2,1}, i_{2,2}, …, i_{s,t_s}) be a tuple of L. The bipartite graph corresponding to T has edges (i_{a,b}, a) for each a ∈ [s] and b ∈ [t_a]. A tope field is defined from a matching field with the function topeField. To check that the tope field is linkage, we use the function isLinkage. Given a linkage tope field T = (L, t), for each i ∈ [s], the ith amalgamation of T, computed with the function amalgamation, is a certain linkage tope field (L', t + e_i) where e_i is the ith standard basis vector and L' is a Grassmannian matching field for (k+1, n).
Let L be the matching field for (3,5) with tuples
132, 142, 152, 341, 135, 145, 342, 235, 245 and 345.
The bipartite graphs associated to the tuples 132, 142, 341 and 342 are shown in Figure <ref>. This matching field is linkage so the union of these graphs is a forest. We verify this using the MatchingFields package and compute the amalgamations as follows.
[caption = Tope fields and amalgamations]
i17 : L = grMatchingField(3, 5, {{1,3,2}, {1,4,2}, {1,5,2}, {3,4,1}, {1,3,5}, {1,4,5}, {3,4,2}, {2,3,5}, {2,4,5}, {3,4,5}});
i18 : T = topeField L
o18 = Tope field: n = 5 and type = 1, 1, 1
o18 : TopeField
i19 : isLinkage T
o19 = true
i20 : T2 = amalgamation(2, T)
o20 = Tope field: n = 5 and type = 1, 2, 1
o20 : TopeField
i21 : getTuples T2
o21 = {{1, 3, 4, 2}, {1, 3, 5, 2}, {1, 4, 5, 2}, {1, 3, 4, 5}, {2, 3, 4, 5}}
o21 : List
i22 : T23 = amalgamation(3, T2)
o22 = Tope field: n = 5 and type = 1, 2, 2
o22 : TopeField
i23 : getTuples T23
o23 = {{1, 3, 4, 2, 5}}
o23 : List
Given a matching field L, it is conjectured that the collection of all sequences of amalgamations of L contains the data necessary to write down the minimal free resolution of the matching field ideal of L. In particular, the set of amalgamations is conjectured to give a combinatorial characterisation of the toric property.
Oliver Clarke Email: [email protected] Address: School of Mathematics, University of Edinburgh, United Kingdom
|
http://arxiv.org/abs/2306.11026v1
|
20230619154213
|
On the "Hysteresis Effects" observed by AMS02 in Cosmic Ray Solar Modulations
|
[
"Paolo Lipari",
"Silvia Vernetto"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR",
"physics.space-ph"
] |
[email protected]
INFN, Sezione Roma “Sapienza”,
piazzale A.Moro 2, 00185 Roma, Italy
[email protected]
INAF, Osservatorio Astrofisico di Torino,
via P.Giuria 1, 10125 Torino, Italy
INFN, Sezione Torino,
via P.Giuria 1, 10125 Torino, Italy
The AMS02 collaboration has recently published high precision daily measurements of the spectra of cosmic ray protons, helium nuclei and electrons taken during a time interval of approximately 10 years from 2011 to 2020. Positron spectra averaged over distinct 27 days intervals have also been made public. The AMS02 collaboration has shown some intriguing "hysteresis" effects observed comparing the fluxes of protons and helium nuclei or protons and electrons. In this work we address the question of the origin of these effects. We find that the spectral distortions generated by propagation in the heliosphere are significantly different for particles with electric charge of opposite sign (an effect already well established), with different behaviour before and after the solar magnetic field polarity reversal at solar maximum. This results in hysteresis effects for the p/e^- comparison that follow the 22–year solar cycle. On the other hand particles with electric charge of the same sign suffer modulations that are approximately equal. The hysteresis effects observed for a helium/proton comparison can then be understood as the consequence of the fact that the two particles have interstellar spectra of different shape, and the approximately equal spectral distortions generated by propagation in the heliosphere have a rigidity dependence that is a function of time. These hysteresis effects can in fact be observed studying the time dependence of the shape of the spectra of a single particle type, and also generate short time loop–like structures in the hysteresis curves correlated with large solar activity events such as coronal mass ejections (CME's). A description of solar modulations that includes these effects must go beyond the simple Force Field Approximation (FFA) model. A minimal, two–parameter generalization of the FFA model that gives a good description of the observations is presented.
On the “Hysteresis Effects” observed by AMS02 in Cosmic Ray Solar Modulations
Silvia Vernetto
July 31, 2023
=============================================================================
§ INTRODUCTION
The time dependence of the fluxes of Galactic cosmic rays generated by
solar modulations <cit.> has been studied for several decades.
During most of this time, the fundamental instrument to study these effects
has been the neutron monitor <cit.>, but in recent years
the
PAMELA <cit.>
and AMS02 <cit.>
detectors, located on satellites,
have obtained precise direct measurements of the CR spectra
that allow much more detailed analysis.
The AMS02 collaboration has published
measurements of the spectra for four different particle types
(protons, helium nuclei, electrons and positrons)
averaged in time during 79 Bartels rotations of the Sun
(each lasting 27 days) <cit.>,
and more recently daily spectra
for protons
<cit.>,
helium nuclei
<cit.>
and electrons
<cit.>
that extend
for several years: 2824 spectra taken during a time period
of 8.44 yr for p and He, and 3193 spectra taken
during a period of 10.45 yr for e^-,
with both data sets starting on 2011-05-20.
These data contain an enormous amount of information about the
dynamics of the heliosphere and the properties of propagation
of relativistic charged
particles in it, and are the object of multiple studies.
In their most recent papers the AMS02 collaboration has discussed
some intriguing “hysteresis effects” observed comparing the time dependence
of the fluxes for different particle types.
In <cit.> the ratio of helium and proton
fluxes in some fixed rigidity ranges
is studied as a function of the helium flux.
Comparing moving averages of the two quantities
with an integration time interval of 378 days (14 Bartels rotations)
and one day step, the authors find that one
value of the helium flux does not correspond to a unique value of the He/p ratio
(and therefore to a unique value of the proton flux).
The time averaged He/p ratio is found to be higher after solar maximum,
and the authors conclude that at low rigidity the modulation
of the helium to proton flux ratio
is different before and after the solar maximum of 2014.
In <cit.> a similar study is performed
for electron and proton spectra, comparing the time
dependence of the two fluxes in the same rigidity intervals.
Also in this case it is observed that one value
of the p flux does not correspond to a unique
value of the e^- flux.
For long averaging time intervals
(such as T = 378 days or 14 Bartels rotations) one observes that,
for the same proton flux, the electron flux is significantly
smaller after solar maximum. The effect is similar
to the one observed comparing the helium and proton spectra,
but it is one order of magnitude larger.
The study of moving averages of the fluxes with shorter integration times
reveals additional structures in the time dependence
for the e^-/p ratio that
appears to be associated to the presence of
transients of solar activity, that are also the cause of rapid
time variations of the fluxes of both particles.
Studies of moving averages of the He/p ratio with shorter time intervals
have not been discussed in the AMS02 publications, but also this
ratio exhibits time structures similar to those observed for
the e^-/p case.
In the following we want to address the problem of the origin
of the “hysteresis” effects observed by AMS02. We will
show that two essentially different mechanisms are operating.
One mechanism is relevant for the long time scale dependence of the
e^-/p ratio, and has its origin in the well established fact
that CR particles with electric charge of opposite sign
travel along different trajectories that are confined in different
regions of the heliosphere, and this results in different modulation effects.
The heliospheric trajectories depend on the polarity of the solar
magnetic field, and the reversal of the polarity at solar maximum
is the origin of the large differences in the e^-/p ratio before and after
the solar maximum of 2014.
A second more subtle physical mechanism is at the origin of the hysteresis effects
observed for the He/p ratio.
In this case the solar modulations for the two particle types
are (in a sense that will be made more precise below) in good approximation equal,
and the hysteresis effects are the result of the fact that
the spectral distortions generated by the modulations
can have different rigidity dependences at different times.
This second mechanism can be observed, and is in fact more easily understood,
studying the time dependence of the fluxes of one single particle type in
two distinct rigidity intervals. The point is that
observations of the same value of the flux at rigidity R_1 can correspond
(for different observation times) to different values of the flux at
the rigidity R_2. Therefore the plot of one flux versus the other
[J(R_2, t) versus J(R_1, t)] can exhibit non trivial “hysteresis”
structures.
This mechanism generates similar effects also in the comparison
of the fluxes J_p(R, t) and J_He(R, t)
of protons and helium at the same value of rigidity,
even if the spectra of the two particles suffer the same modulations.
This is because the effects of modulations must
be understood not as an energy (or rigidity) dependent absorption effect,
but instead as a distortion that acts on the local interstellar (LIS) spectra,
and depends not only on state of the heliosphere, but also on the shape
of the LIS spectra, that are different for protons and helium nuclei.
A simplified way to understand and model the solar modulations
is to describe them as the effect of an average energy loss Δ E
suffered by particles during propagation in the heliosphere.
The hysteresis effects observed comparing the spectra of protons
and electrons are due
to the fact that the heliospheric energy losses for
p and e^- are different and change in different ways during the solar cycle.
On the contrary, the hysteresis effects observed comparing the spectra
of protons and helium nuclei are due to the fact the their
heliospheric energy losses
(that have in good approximation the same time
and rigitity dependences being related by the simple equation
Δ E_ He = 2 Δ E_p)
have a non trivial rigidity dependence that takes different shapes at different times.
This paper is organized as follows.
In the next section we show how “hysteresis effects” are present
in the AMS02 daily spectra measurements of all three particles
(protons, helium nuclei and electrons) and can be observed studying
the fluxes of each single particle type, with no need
to compare different particle types.
In the following section
we introduce a very simple parametrization for the rigidity (or energy) spectra,
that can describe surprisingly well the data for p, He, and e^∓.
This parametrization has two time independent parameters:
a normalization and a spectral index that together define
a simple power law in rigidity,
and two time dependent parameters that determine a
rigidity dependent potential that controls the modulation effects.
Section <ref> presents the time dependence of the potentials
for the different particles during the extended time interval
of the PAMELA and AMS02 observations.
Section <ref> discusses the physical meaning of the modulation
potential we have introduced, and the shape of the local interstellar (LIS)
spectra of the CR particles.
Section <ref> discusses the “loops” of different periods
that emerge from different hysteresis studies.
The final section summarizes the results.
§ FLUX CORRELATIONS FOR A SINGLE PARTICLE TYPE
The effect we want to investigate here is the shape of the
distortions generated by solar modulations on the
rigidity (or energy) spectrum of one particle type at different times.
It is well known that at low rigidity the CR fluxes are time
dependent, and for example the proton flux at
R ≃ 1 GV changes, being highest
(lowest) at the minimum (maximum) of solar activity.
The question we want to address is
if the value of the p flux at 1 GV
determines the entire spectrum at all rigidities or not.
This is in fact the case in models
that describe solar modulations
in terms of only one time dependent parameter, such as the commonly used
Force Field Approximation (FFA) <cit.>,
where a measurement of the flux at one rigidity
(if it is in the range where the effects of the modulations are not negligible)
is sufficient to determine the entire spectrum.
The AMS02 data however show that the
assumption that solar modulation can be described
by a single time dependent parameter is not correct.
This conclusion emerges directly from the data, without any analysis.
An illustration of this is presented in Fig. <ref>
that shows the proton spectra
measured by AMS02 <cit.> during two different
days (2014-07-31 and 2016-06-25).
The average fluxes measured during these two days
are approximately equal for a rigidity of order 1 GV,
but differ by (20± 1) % in the rigidity bin
[4.88–5.37] GV.
Fig. <ref> also shows the spectra of
helium nuclei measured by AMS02 <cit.>
during the same two days.
One can note that the
effects of solar modulations for protons and helium nuclei
have the same qualitative features, as
also the helium spectra are approximately equal at R ≃ 1 GV,
and differ by ∼ 20% at R ≃ 5 GV.
A more quantitative study, presented below,
will show that the distortions to the proton and helium spectra
(and in fact also to the positron spectrum)
generated by solar modulations are in fact in very good approximation equal.
The observation that the flux at one rigidity R_1 can correspond,
at different times, to different fluxes at a second rigidity R_2,
suggests to explore the possibility to observe “hysteresis” effects
such as those discussed by AMS02 (for the fluxes of two different particles
measured in the same rigidity interval) <cit.>,
also for the fluxes of one single particle type in two distinct rigidity intervals.
Some results of this type of study are illustrated in
Fig. <ref>. The three panels in the top row of
the figure show the time dependence of the flux of
protons <cit.>,
helium nuclei <cit.>
and electrons <cit.>
measured during different days in one fixed interval of rigidity
(R = [1, 1.16] GV for p,
[1.71,1.92] GV for He and
[1,1.71] GV for e^-).
The measurements for protons and helium are taken
for 2717 different days [The AMS02 collaboration has released
2824 proton and helium spectra, but in 107 of them the
measurements are given only for R > 2.97 GV.]
from 2011-05-20 to 2019-10-29,
while the measurements for electrons are taken for 3193 days
from the same initial day and extending to 2021-11-02.
The time interval of the daily spectra
measurements covers a large part of
the 24th solar cycle that extends from the minimum in December 2008 to
the next minimum in December 2019 passing through a maximum
around April 2014 (the e^- measurements cover also the beginning
of the 25th cycle).
The fluxes for all three particles
exhibit significant time variations on a variety of time scales,
with the most prominent effect associated to
the 11 year solar cycle.
An important point is to note that, superimposed on
the general trend of decreasing fluxes
before solar maximum and increasing fluxes after maximum,
other significant time variation structures associated to
phases of enhanced or suppressed solar activity are present.
For example, during the first part of the cycle
(increasing solar activity and decreasing CR fluxes) one can
identify three main local maxima of the flux, and during the
second part of the cycle
(decreasing solar activity and increasing CR fluxes)
a prominent minimum of the fluxes is present in September 2017.
In the three plots the colors and the vertical lines identify some
time intervals associated with prominent time structures.
These time intervals are labeled with a letter,
with intervals (a), (b) and (c) roughly centered on
the three local maxima in the first part of the cycle;
intervals (d) and (e) covering the solar maximum part of the cycle;
while the time interval around the prominent local minimum of
September 2017 is labeled (h). The same colors are used in subsequent plots
to identify the same time intervals.
It is interesting to note that while the time dependences
of the flux for the three particles (p, He and e^-) are
qualitatively similar, the similarity is remarkably accurate
for protons and helium nuclei, while there is an evident
large difference between electrons and the two positively charged
particles. To illustrate this point, the time dependences
for helium nuclei and electrons are shown in the form ϕ(t)/⟨ϕ⟩ -1
(where the average is taken during the same time interval for
all particles), and compared with the time evolution for protons.
The nine panels in the lower part of Fig. <ref>
show the time evolution of the fluxes
of protons, helium nuclei and electrons measured
simultaneously in two distinct rigidity intervals.
This is achieved studying the “trajectory” in time
of the pair {_1(t), _2 (t)}, where
_1(t) and _2 (t) are the fluxes measured at time t
in the two different rigidity intervals
[_1, _2] and
[_1^', _2^' ].
The three columns in Fig. <ref>
show the trajectories of pairs of fluxes for
protons (left), helium nuclei (middle) and electrons (right).
In all three cases the lower rigidity interval is the same
used to show the time evolution of the fluxes in the top row
(R = [1, 1.16] GV for p,
[1.71,1.92] GV for He and
[1,1.71] GV for e^-),
while the second rigidity intervals are:
R = [2.97, 3.29] GV for p,
[3.64,4.02] GV for He and
[2.97,4.02] GV for e^-.
In the second row of panels, the time evolution of the
pair of fluxes {J_1(t), J_2 (t)}
is shown as a broken line that connects the daily measurements.
There is of course a strong correlation between J_1(t) and
J_2(t). This is expected because both fluxes are large (small)
in periods of weak (strong) solar activity, however
the trajectory is not limited to a narrow band, as expected
if one assumes that the value of the flux J_2(t) is determined by the
value of J_1(t). The spread of values of J_2(t) for a
fixed value of J_1(t) is larger than the
errors on the measurement
(that are of order 1%, 1.5% and 2% for p, He and e^- and are not
shown to avoid cluttering), and therefore is physically significant.
The trajectories that describe the daily flux measurements
have a rich and complex structure that
encodes very valuable information about CR propagation in the heliosphere,
but because of their complexity are also difficult to interpret,
and for this reason it is interesting to perform
moving averages of the measurements, even if this
procedure erases significant information
about the time evolution of the spectra.
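Such a moving-average trajectory is simple to compute. The following sketch (Python, with synthetic daily flux series used purely for illustration; it is not the analysis code of this work) builds the smoothed pair of fluxes with an 81-day boxcar window, the choice used below.

import numpy as np

def moving_average(x, window):
    # simple boxcar running mean with a one-day step
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(0)
days = np.arange(3000)
# synthetic daily fluxes in two rigidity bins (placeholders, arbitrary units)
J1 = 1.0 + 0.3 * np.cos(2 * np.pi * days / 4000) + 0.02 * rng.standard_normal(days.size)
J2 = 0.5 + 0.1 * np.cos(2 * np.pi * days / 4000 - 0.2) + 0.01 * rng.standard_normal(days.size)

traj = np.column_stack([moving_average(J1, 81), moving_average(J2, 81)])
# each row of traj is one point {<J1>, <J2>} of the smoothed trajectory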
The third row of panels in Fig. <ref> shows
moving averages of the trajectories {J_1(t), J_2(t)}
for an averaging time interval of 81 days (3 Bartels rotations)
and one day step.
The panels in the bottom row
show the same moving averages of the flux pairs
but in a slightly different form, replacing
the value of J_2(t) with its deviation from an
average value (shown as a dashed line in the two panels above).
The resulting trajectories are much simpler, and
reveal interesting structures in the time evolution of the
spectra that are analogous (and in fact encode the same effects)
of what has been observed by the AMS02 collaboration in the study
of the He/p and e^-/p ratios.
The qualitative feature that is most evident in the figure
is the presence of “hysteresis loops” in the trajectories
that trace the evolution of the flux pairs {J_1(t), J_2(t)}.
Inspecting Fig. <ref>
one can identify three such loops during the first part
of the solar cycle (when solar activity is going toward maximum)
that corresponds to the time intervals (a), (b) and (c)
(following the notation indicated in the top row of the figure),
and one loop during the second part of the solar cycle
(when solar activity is decreasing after solar maximum),
and corresponds to the time interval (h).
The “loops” are related to strong perturbations of the
interplanetary magnetic field, superimposed
to the more gradual 11 year solar cycle.
The loops in the first part of the cycle are formed
when the general decreasing trend of the two fluxes
J_1(t) and J_2(t) is inverted
and both fluxes increase during a short time interval
before returning to their normal behavior of gradual decrease.
The effect is faster and relatively larger
for the flux in the high rigidity bin,
generating a clockwise loop in the trajectory.
In the second part of the cycle (after solar maximum)
a prominent loop is present around September 2017,
when some large coronal mass ejections (CME)
generate a large suppression of the CR fluxes during a time interval
of several months. Also in this case the response of the flux
in the high rigidity bin is larger and faster, resulting
again in a clockwise loop in the trajectory of the flux pair.
It is straightforward to see how these loop structures
in the time evolution of the CR spectra are also visible
comparing the fluxes of two different particle types,
as done in the AMS02 papers <cit.>.
The rigidity dependence of solar modulations has been studied
for decades, in particular in association with
the so called Forbush decreases, sudden drops of the CR spectra
(associated to CME's or high-speed streams from coronal holes)
first observed in 1937 <cit.>.
Most of these studies have been performed with ground–based neutron monitor (NM) detectors.
These instruments are located in regions with different geomagnetic cutoffs,
and therefore can observe CR flux variations integrating over different rigidity ranges.
Comparisons of the counting rates of different NM detectors made it possible to observe
already in the 1970's the presence of “hysteresis loops” associated to the
rigidity dependent modulations <cit.>.
In more recent times spaceborne detectors placed in near Earth orbit
have been able to measure directly the CR spectra of different particles
(protons, helium nuclei and electrons by PAMELA
<cit.>
and electrons and positrons by DAMPE <cit.>)
during major Forbush decreases, obtaining evidence that the CR fluxes
recovery times are rigidity dependent and shorter at higher rigidity.
The AMS02 data, thanks to their large statistics, high precision and
extended data taking, are of great value to develop a more complete
understanding of the effects of perturbations in the
interplanetary environment on the CR spectra.
§ A TWO–PARAMETER PHENOMENOLOGICAL DESCRIPTION OF SOLAR MODULATIONS
The discussion in the previous section, as in the
papers that present the AMS02 measurements,
has been developed studying the time dependence of directly
measured fluxes.
This approach has the merit of avoiding the introduction of model
dependent quantities and concepts, however it has also significant limitations.
This is in part because it is not “economic”, since there
are infinite ways to choose the rigidity or energy intervals used to
study the evolution of the spectra; moreover, such a discussion cannot completely capture
the properties of the modulation mechanism that generates distortions to
the shape of the CR spectra.
In the following we will attempt to develop a simple parametrization of the
CR spectra with the goal of extracting from the data few quantities that
can capture the main effects of solar modulations.
A convenient starting point is the widely used
and very successful model of the Force Field Approximation (FFA)
introduced by Gleeson and Axford <cit.>.
The fundamental assumption in the model is that
CR particles traversing the heliosphere suffer a time dependent energy loss
Δ E = |q| V(t) proportional to
the absolute value of their electric charge.
In the original version of the FFA, the same potential V is valid for
all particle types, but it is now well established that particles with
electric charge of opposite sign propagate in different regions of the
heliosphere and therefore “see” different potentials.
The question if the same potential can describe the modulations of
all particles that have electric charge of the same sign, should of course
be tested experimentally.
If the LIS spectra at the boundary of the heliosphere are, as expected,
isotropic and constant in time, it is then
straightforward to derive an expression
for the energy spectrum observable at the Earth at time t:
ϕ(E, t) = p^2/p_0^2 ϕ_0 [E + |q| V(t)]
In this expression ϕ_0 (E) is the LIS spectrum,
and p and p_0 are the 3–momenta
that correspond to the energies E and E + |q| V(t),
that are the energies of a CR particle when detected at the
Earth and entering the heliosphere.
In the FFA model the solar modulations are calculated
in terms of the LIS spectrum ϕ_0 (E), but the validity of the model
can be tested without any knowledge of this spectrum,
simply comparing spectra that are directly measurable at the Earth.
In fact Eq. (<ref>) implies that
the spectra
ϕ_1(E) = ϕ[E, V(t_1)]
and ϕ_2(E) = ϕ[E, V(t_2)]
observed at times t_1 and t_2 are related to each other by:
ϕ_1(E) = p_1^2/p_2^2 ϕ_2[E + |q| Δ V(t_1,t_2)]
where Δ V(t_,t_2) = V(t_1)-V(t_2) is the difference between the
modulation potentials at times t_1 and t_2,
and p_1 and p_2 are the momenta
that correspond to the energies E and E + Δ V.
The important point of Eq. (<ref>) is that
the two functions that enter the equality are
directly measurable, and this allows to test the validity
of the model without a knowledge of the LIS spectrum.
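For concreteness, the relation of Eq. (<ref>) is easy to implement numerically. The sketch below (Python; the toy LIS spectrum, parameter values and function names are illustrative assumptions, not the code used for this analysis) computes the modulated kinetic-energy spectrum for protons.

import numpy as np

M_P = 0.938  # proton mass [GeV]

def momentum(E, m=M_P):
    # 3-momentum [GeV] corresponding to kinetic energy E [GeV] and mass m [GeV]
    return np.sqrt(E * (E + 2.0 * m))

def ffa_modulated(E, phi0, V, q=1.0, m=M_P):
    # phi(E) = (p^2 / p0^2) * phi0(E + |q| V)
    E0 = E + abs(q) * V              # kinetic energy at the boundary of the heliosphere
    return (momentum(E, m) ** 2 / momentum(E0, m) ** 2) * phi0(E0)

# toy LIS kinetic-energy spectrum (illustrative only)
phi0 = lambda E: 2.0 * E ** (-2.7)
E = np.logspace(-1, 2, 50)           # kinetic energies [GeV]
print(ffa_modulated(E, phi0, V=0.5)[:3])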
It is instructive to consider the ideal case of a LIS spectrum that
is a simple power law in rigidity: J_0(R) = K R^-α.
The modulated spectrum of a massless particle takes then the form:
J(R, V) = K R^2 (R + |Z| V)^-(2+α)
(with Z = q/e).
This flux grows quadratically in R for low
rigidities, reaches a maximum at R^* = 2 |Z| V / α, and
for large rigidities becomes asymptotically
a simple power law with constant spectral index α.
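The position of the maximum follows from setting the logarithmic derivative of the flux to zero: d/dR ln J(R, V) = 2/R - (2+α)/(R + |Z| V) = 0, which gives R^* = 2 |Z| V / α.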
For a particle with mass m the modulated flux takes the form:
J(R, V) = K |q|^(α+3) [R^3 / (E + m)] (E + m + |q| V) (E + |q| V)^-(α+3)/2 (E + 2m + |q| V)^-(α+3)/2
(where E = √((q R)^2 + m^2) - m is the kinetic energy that corresponds
to rigidity R). This form has a shape similar to the massless case,
with a flux that grows rapidly for small R,
reaches a maximum (at a rigidity that grows with V)
and then, for large R, becomes a power law of spectral index α.
Expressing the spectrum in terms of kinetic energy, it takes the form:
ϕ(E, V) = K |q|^(α-1) E (E + 2m) (E + m + |q| V) (E + |q| V)^-(α+3)/2 (E + 2m + |q| V)^-(α+3)/2 .
It should be stressed that the expressions (<ref>) and (<ref>)
for a rigidity or kinetic energy spectrum might appear as rather complicated, but
they describe a very simple model: an exact power law in rigidity (with normalization K and
spectral index α) modulated by a constant energy loss |q| V.
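As an illustration, the modulated rigidity spectrum of a massive particle can be evaluated with a few lines of code (a sketch only; the values of K and α are the average proton and helium values quoted below, while the masses, the value of V and the function names are our own illustrative choices).

import numpy as np

def modulated_rigidity_spectrum(R, V, K, alpha, Z=1, m=0.938):
    # power law in rigidity J0(R) = K R^-alpha modulated by a constant energy loss |q| V;
    # R and V in GV, m in GeV, Z = charge number
    q = abs(Z)
    E = np.sqrt((q * R) ** 2 + m ** 2) - m          # kinetic energy at the Earth
    return (K * q ** (alpha + 3) * R ** 3 / (E + m)
            * (E + m + q * V)
            * (E + q * V) ** (-(alpha + 3) / 2)
            * (E + 2 * m + q * V) ** (-(alpha + 3) / 2))

R = np.logspace(0, 2, 200)                           # rigidity grid [GV]
J_p  = modulated_rigidity_spectrum(R, V=0.5, K=2.94, alpha=2.90, Z=1)            # proton-like
J_He = modulated_rigidity_spectrum(R, V=0.5, K=0.426, alpha=2.80, Z=2, m=3.73)   # helium-like (4He mass)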
Adopting these expressions to fit the time dependent rigidity spectra
measured by AMS02 and PAMELA is surprisingly successful.
Fitting the 2717 daily proton and helium spectra
with data in the rigidity ranges [1–100] GV for p (30 bins), and
[1.71–100] GV for helium (26 bins) with the form (<ref>)
and allowing all three parameters (K, α and V)
to be time dependent, one obtains reasonably good fits
with global χ^2_ min/ d.o.f. = 0.86 for protons
and 0.79 for helium nuclei.
In the case of helium one has the problem that the flux is
formed by a mixture of the two isotopes ^4He
and ^3He <cit.>.
In this paper we have neglected the rigidity dependence of
the isotopic composition, and assumed a constant ratio
^3He/^4 He≃ 0.2.
For the electron daily spectra data, the AMS02 collaboration has
released 3193 spectra in the rigidity range [1–42] GV. Selecting the smaller
rigidity range R < 10 GV,
the data can be successfully fitted with the expression
(<ref>) obtaining a global χ^2_ min/ d.o.f. = 0.68.
In this case the range of the fit must be reduced because the e^- spectrum has a hardening
that begins at R ≃ 10 GV <cit.>.
Fitting the CR spectra with the form (<ref>) and three time
dependent parameters can be useful, but it is not entirely satisfactory, because
it is not obvious how to interpret the time dependence of the three parameters
K, α and V. If one tries to test a “minimal model” based on
the FFA model, with K and α constant and a time dependent
(but constant in rigidity) potential one obtains fits that describe
the data reasonably well, with deviations of order 10%, however, because
of the remarkable accuracy of the AMS02 and PAMELA measurements (with errors
of order 1–3%), the quality of the fits is poor.
This suggests to introduce a simple generalization of the FFA model,
that is always based on expression Eq. (<ref>) to
fit the rigidity spectra, but keeping K and α as time independent
(because they are considered as parameters associated to the LIS spectra)
and introducing a rigidity dependence for the potential V(t).
For this purpose we introduce the form:
V(R, t) = V_0(t) + [V_∞(t) - V_0(t)] (1 - e^(-R/R^*))
that contains two time dependent parameters V_0(t) and V_∞ (t)
that can be interpreted as the average energy losses (divided by |q|)
during propagation in the heliosphere for particles that arrive at the Earth with
very small and very large rigidities.
It is also possible to express the potential in
terms of V_1 = V(R_1) and V_2 = V(R_2), that are the values of V
for two (arbitrary, but conveniently chosen) rigidities:
V(R, t) = [e^(-R/R^*) / (e^(R_1/R^*) - e^(R_2/R^*))] [ V_1(t) (e^((R + R_1)/R^*) - e^((R_1 + R_2)/R^*)) - V_2(t) (e^((R + R_2)/R^*) - e^((R_1 + R_2)/R^*)) ]
The AMS02 data are published for rigidities larger than 1 GV, and
the effects of modulations are small and difficult to measure for R ≫ 10 GV,
and therefore in the present paper, we have chosen to parametrize the energy dependence of
the potential with V_1 = V(1 GV) and V_2 = V(10 GV).
The potential in Eq. (<ref>) also contains the additional parameter
R^*, that is kept constant with value R^* = 6 GV [Considering
R^* as a free parameter improves significantly the quality
of the fits only for a small number of the spectra in the AMS02 data set.].
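The two parametrizations of the potential, Eqs. (<ref>) and (<ref>), are algebraically equivalent; the following short sketch (Python, with placeholder values, and with R_1 = 1 GV, R_2 = 10 GV and R^* = 6 GV as above) makes the equivalence explicit.

import numpy as np

def potential_V0_Vinf(R, V0, Vinf, Rstar=6.0):
    # V(R) = V0 + (Vinf - V0) * (1 - exp(-R/R*))
    return V0 + (Vinf - V0) * (1.0 - np.exp(-R / Rstar))

def potential_V1_V2(R, V1, V2, R1=1.0, R2=10.0, Rstar=6.0):
    # equivalent form parametrised by V1 = V(R1) and V2 = V(R2)
    e = np.exp
    pref = e(-R / Rstar) / (e(R1 / Rstar) - e(R2 / Rstar))
    return pref * (V1 * (e((R + R1) / Rstar) - e((R1 + R2) / Rstar))
                   - V2 * (e((R + R2) / Rstar) - e((R1 + R2) / Rstar)))

# consistency check: the two parametrisations agree
R = np.linspace(1.0, 30.0, 5)
V0, Vinf = 0.4, 0.6                                   # placeholder values [GV]
V1 = potential_V0_Vinf(1.0, V0, Vinf)
V2 = potential_V0_Vinf(10.0, V0, Vinf)
assert np.allclose(potential_V0_Vinf(R, V0, Vinf), potential_V1_V2(R, V1, V2))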
In the remainder of this paper we will fit the lower rigidity
part of the CR spectra for protons, helium nuclei, electrons and positrons
with the scheme we have outlined, that is using
Eq. (<ref>) with
a time dependent potential of form (<ref>).
For the two parameters K and α that describe the
power law spectra, we have used the average values ⟨ K⟩
and ⟨α⟩ obtained from fits
to all AMS02 spectra
based on Eq. (<ref>)
with all three parameters K, α and V free (and V constant in rigidity).
The results are:
K = 2.94, 0.426, 0.743 and 5.01× 10^-3 [in units (cm^2 s sr GV)^-1]
and
α = 2.90, 2.80, 4.10 and 3.42
for p, He, e^- and e^+ respectively.
With this scheme one obtains reasonably good fits to all AMS02 and PAMELA observations.
For example, the global chi squared of fits to the AMS02 daily spectra are
χ^2_ min/ d.o.f. = 0.82, 0.76 and 0.71 for p, He and e^-.
These values are approximately equal to those obtained using
Eq. (<ref>) with three time dependent parameters:
K, α and (constant in rigidity) V, but the
interpretation of the parameters is now more natural.
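Schematically, the procedure reduces to a two-parameter least-squares fit per spectrum. The sketch below (Python, reusing the functions modulated_rigidity_spectrum and potential_V1_V2 from the previous sketches, with hypothetical data arrays; it is not the fitting code actually used) illustrates the idea for a single daily proton spectrum.

import numpy as np
from scipy.optimize import curve_fit

def model_flux(R, V1, V2, K=2.94, alpha=2.90, Z=1, m=0.938):
    # K and alpha are held fixed; only the potential parameters (V1, V2) are free
    V = potential_V1_V2(R, V1, V2)                    # rigidity dependent potential
    return modulated_rigidity_spectrum(R, V, K, alpha, Z, m)

# hypothetical daily spectrum: rigidity bin centres [GV] and measured fluxes
R_data = np.array([1.08, 1.5, 2.1, 3.1, 5.1, 10.2, 30.0])
J_data = model_flux(R_data, 0.45, 0.40) * (1 + 0.01 * np.random.default_rng(1).standard_normal(R_data.size))

popt, pcov = curve_fit(model_flux, R_data, J_data, p0=[0.5, 0.5])
print("best fit V1, V2 =", popt)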
Four examples of fits to the AMS02 measurements (two for p spectra, and
two for He spectra)
are shown in Fig. <ref>,
where one can see that they give a good description of the data.
The rigidity dependent potentials with form
(<ref>) that enter the expression for the rigidity spectrum
of Eq. (<ref>) are shown in Fig. <ref>,
where the points show the best fit values of the parameters V_1 and V_2.
One can see that the potentials have a modest but significant
rigidity dependence with a form that is different for
spectra observed at different times.
It is remarkable that the potentials
obtained fitting the p and He spectra measured the same day are (within errors)
equal to each other. This is in fact a result that is in general valid for all the p
and He daily spectra measured by AMS02, indicating that the solar modulations
for protons and helium nuclei are in good approximation equal.
It should be noted that the potential that enters the expression
of Eq. (<ref>) for the modulations is multiplied by the
absolute value of the electric charge of the particles,
therefore this result can also be stated saying that
(in an appropriate sense) the effects of solar modulations are two times larger
for helium (that has charge number Z = 2).
Some other examples of fits to AMS02 and PAMELA spectra
for protons, helium nuclei, electrons and positrons
calculated in the scheme we are discussing here
are shown in Fig. <ref>.
In the figure the spectra and their fits are shown, as function of kinetic energy, in four
separated panels, where the power law rigidity spectra that enters
the expression of Eq. (<ref>) are also shown as dashed lines.
In three panels (for p, He and e^-)
we also show the measurements obtained by Voyager 1
after crossing the heliopause at a distance of approximately 120 AU from the Sun
<cit.> that are considered as representative of the CR spectra in the
local interstellar medium.
Each one of the panels include three spectra from AMS02.
For p, He and e^- the three spectra are
the highest, the lowest and an intermediate one, chosen among the daily measurements
<cit.>.
For positrons, the three spectra are again the highest, lowest and an intermediate one, but
chosen among the measurements obtained averaging over
one Bartels rotation <cit.>.
In the panels for p and e^- we include two spectra
(the highest and lowest) obtained by PAMELA <cit.>
with longer averaging times.
The PAMELA results are of great interest because they cover
a different time interval (June–2006 to January–2018)
and because they are available in a kinematic range
that extends to lower rigidities.
Our model gives a good description also of the lower rigidity
observations of PAMELA, with significant deviations
only for the electron spectra at E ≲ 200 MeV.
§ TIME DEPENDENCE OF THE POTENTIALS
The time dependence of the potentials obtained fitting the daily spectra measured by
AMS02 for protons, helium nuclei and electrons are shown in Fig. <ref>.
The potentials in the figure include the subtraction of a constant shift
that depends on the particle type:
(Δ V_ LIS =0.29, 0.30 and 1.14 GV for p, He and e^- respectively)
that will be discussed in the next section.
The top–left panel in Fig. <ref> shows the
potential V_1 = V_[1 GV] for protons and electrons.
The two potentials have significantly different time dependences,
and with the shifts that we have introduced
are approximately equal during the time interval,
in the middle of 2014, that corresponds
to the reversal of the polarity of the solar magnetic field.
One also has
(for both rigidities 1 GV and 10 GV) the inequalities:
V^(e^-) (t) < V^(p) (t) for t < t_ reversal
V^(e^-) (t) > V^(p) (t) for t > t_ reversal
At the reversal time t_ reversal
the solar magnetic field polarity changes from negative (A = -1) to
positive (A = +1).
During a phase of negative polarity particles with electric charge
q < 0 arrive at the Earth from the heliospheric poles, while
particles with q > 0 arrive travelling close to the heliospheric equator
and the wavy current sheet.
The situation is reversed after the flip of the magnetic field polarity.
Our results are therefore consistent with the expectation that the energy losses
during propagation in the heliosphere are larger for particles
that arrive from the heliospheric equator <cit.>.
The top–right panel in Fig. <ref> shows the
differences between the potentials at rigidity 1 GV
of electrons and protons and of helium nuclei and protons.
It is striking that the potentials of p and He are approximately equal.
This result has important implications, because it validates the
idea of using a potential to describe solar modulations, and
is consistent with models where protons and helium nuclei
of equal rigidities follow (approximately) equal trajectories
in the heliosphere.
The bottom–left panel in Fig. <ref> shows the time
dependence of the potential differences Δ V = V_[10 GV] - V_[1 GV]
for protons and electrons.
The rigidity dependence of the potentials is rather small
(with |Δ V | ≲ 0.25 GV), so that a simple FFA parametrization can be considered,
for many applications, a reasonable approximation, validating many studies performed in the past,
however the introduction of a rigidity dependence is necessary to obtain good quality fits.
It is also important to note that Δ V can be either positive or negative at different times,
so that the modulated spectra can have different shapes at different times.
The bottom–right panel in Fig. <ref> shows the difference
between Δ V for electrons and protons, and for helium nuclei and protons.
One can note that the rigidity dependences of the potentials for electrons
and protons are strongly correlated but not identical. This can be understood
as the consequence of the facts that in general the properties of the
(different) regions of the heliosphere where particles of opposite electric
charge propagate are correlated, for example because the same CME's
can perturb both regions.
The difference in Δ V between protons and helium nuclei is much smaller,
and again indicates that the solar modulation effects are in good approximation
equal for the two particles.
To study positron solar modulation we have fitted
the AMS02 measurements of p, He, e^- and e^+ spectra
obtained averaging over 27 days <cit.>.
The results are shown in Fig. <ref>.
In the top–left panel the proton potential at rigidity ≃ 1 GV
is compared to the one obtained fitting the daily spectra
to show the consistency of the results.
In the top–right panel the potentials (always at 1 GV) for
the four particles (p, He, e^- and e^+) are shown together,
with the potential for positrons shifted by Δ V_LIS^(e^+) ≃ 0.176 GV.
The potentials for the three positively charged particles
(p, He and e^+) are in good approximation equal,
while the potential for e^- is significantly different.
It should be noted that one expects that the modulations of particles
with the same electric charge but different mass cannot be identical,
with differences that increase in importance for low rigidities.
The differences in modulation are expected because the relation between energy
and rigidity is mass dependent, so that particles of different
mass that enter the heliosphere at the same point with the same initial _i
will develop different rigidities due to energy losses, and travel along different trajectories.
In addition, particles with identical rigidity but different mass will have
different velocities and therefore different propagation times in the heliosphere,
and this can also result in different
modulations if the heliosphere is not in a stationary state.
Our analysis shows only small differences in the potentials for protons
and helium (at the level of a few percent).
Future studies of these mass dependent effects that include helium nuclei
will also have to take into account their rigidity dependent isotopic composition.
The results of the potentials at 1 GV for fits to the PAMELA protons (83 spectra
<cit.>)
and electrons (7 spectra <cit.>)
are shown, together with the fits to the AMS02 daily spectra,
in Fig. <ref>.
The PAMELA data start in June 2006, and
cover also the final part of solar cycle 23. The measurements of the
proton spectra extend to the beginning of 2014, and can be
compared with the first part of the AMS02 data.
The agreement between the two data sets is good.
The measurements of the e^- spectrum extend only to 2009, and
such a comparison is not possible.
§ THE LOCAL INTERSTELLAR SPECTRA
It is now desirable, indeed necessary, to address the question of what physical
meaning can be attributed to the potentials we have obtained
fitting the AMS02 and PAMELA data, and what
can be deduced from these studies about the CR interstellar spectra.
In the FFA model the physical meaning of the (rigidity independent
in the original formulation) potential is clear: it gives the
average energy loss (divided by |q|) suffered by CR particles
in their propagation from the boundary of the heliosphere to the Earth.
In the model discussed here the potential describes a
spectral distortion calculated with respect to an
“artificial” spectrum, that has a simple power law form in rigidity,
and therefore this potential does not have a well-defined physical meaning.
However, the difference Δ V(, t_1,t_2) = V(, t_1) - V(, t_2) between
potentials obtained from fits to the spectra measured at times t_1 and t_2,
is related to the two observed spectra via Eq. (<ref>),
and can be interpreted as the difference in the average energy loss suffered during
heliospheric propagation by particles
observed with rigidity at times t_1 and t_2.
The simple power law spectrum “cancels” in this comparison,
as it plays the role of a “scaffolding”, used to perform the fits
and obtain the potentials, and that can then be discarded.
This procedure leaves the LIS spectra undetermined, and this is a
serious limitation because the determination of the interstellar
spectra is a fundamental goal in the study of solar modulations.
There is a large literature about estimating the shape
of the cosmic ray LIS spectra
(see for example <cit.>)
and in all these studies the measurements
obtained by Voyager 1 beyond the heliopause <cit.>
play a crucial role.
One should however note that the Voyager data, while of great value,
are not sufficient to allow a model independent determination of the LIS spectra.
This is because the Voyager data cover only a limited kinematical range
(a maximum observed energy of 350 MeV for protons, and 75 MeV for electrons).
Since the energy lost by CR particles traversing the heliosphere
is of order 300 MeV or more, it follows that the CR particles in
the range observed by Voyager do not reach the Earth, and vice–versa
the particles in the energy range of observations at the Earth
arrived at the boundary of the heliosphere with energy
above the range of the Voyager measurements, and therefore a direct
comparison of shapes of the spectra formed by the same particles
in interstellar space and at the Earth is not possible.
The Voyager data are of course a very important constraint
in the construction of the LIS spectra. The importance of this
constraint is evident comparing the spectra in Fig. <ref>.
For example inspecting the top–left panel in the figure
one can see that the proton rigidity power law spectrum
(shown as a dotted line) used as a starting
point in the fitting procedure, is clearly much larger than the LIS
spectrum. On the other hand, distorting this power law spectrum
with a rigidity independent potential of 0.29 GV one obtains
(thick solid line) a spectrum that joins smoothly the Voyager data.
The same considerations are valid for the helium spectrum,
where distorting the power law spectrum with a rigidity independent potential of
0.30 GV one obtains a flux that joins smoothly the Voyager data
(see the top–right panel in Fig. <ref>).
This suggests that the LIS spectra for protons and helium
can be, in first approximation, described by a power law in rigidity
distorted by a rigidity independent potential Δ V_ Lis.
The potential V_ fit (, t) obtained from a fit connects
the spectrum observed at time t to a simple power law spectrum;
subtracting the shift Δ V_ LIS one
obtains a potential V(,t) that connects the observed and the interstellar
spectra, and therefore (in first approximation) describes the energy losses
of the CR particles during heliospheric propagation:
V (, t) ≃ V_ fit (, t) - Δ V_ LIS .
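To make these steps concrete, the short Python sketch below applies the standard force-field prescription for protons to an exact power law in rigidity, first with a fitted potential V_fit and then with the rigidity-independent shift Δ V_LIS ≃ 0.29 GV quoted above; their difference is the physical potential of the equation above. It is only an illustration: the fits in this work actually use a rigidity-dependent potential, and the spectral index and the value of V_fit chosen here are placeholders.

```python
import numpy as np

M_P = 0.938  # proton rest energy [GeV]

def rigidity(E_kin):
    """Proton rigidity [GV] from kinetic energy [GeV] (A = Z = 1)."""
    return np.sqrt(E_kin * (E_kin + 2.0 * M_P))

def kinetic_energy(R):
    """Inverse of rigidity() for protons."""
    return np.sqrt(R**2 + M_P**2) - M_P

def ffa_modulate(J_lis, R, V):
    """Force-field modulation of a proton spectrum.

    J_lis : callable giving the undistorted flux versus rigidity [GV]
    R     : rigidity at Earth [GV]
    V     : modulation potential [GV]; for protons the energy loss is Phi = e V
    """
    E = kinetic_energy(R)
    E_lis = E + V                       # average energy loss in the heliosphere
    factor = (E * (E + 2.0 * M_P)) / (E_lis * (E_lis + 2.0 * M_P))
    return J_lis(rigidity(E_lis)) * factor

alpha = 2.8                             # illustrative spectral index
def J_power(R):
    """The "scaffolding" spectrum: an exact power law in rigidity."""
    return R ** (-alpha)

R = np.logspace(-1, 2, 200)             # 0.1 - 100 GV
V_fit, dV_lis = 0.8, 0.29               # illustrative fitted potential; proton LIS shift [GV]
J_earth = ffa_modulate(J_power, R, V_fit)      # modulated spectrum at Earth
J_lis_est = ffa_modulate(J_power, R, dV_lis)   # power law distorted by dV_LIS: the LIS estimate
# The physical potential connecting the two is V = V_fit - dV_lis, as in the equation above.
```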
Extending these considerations to the electron spectra poses some
very interesting problems. A first consideration is that, as already
discussed, we expect that the potentials for particles
with electric charge of opposite sign will in general be different.
This is because the trajectories of charged particles
are also determined by the regular
heliospheric magnetic field, and particles with opposite electric charge
will propagate in different regions of the heliosphere,
where they can suffer different energy losses.
At solar maximum however, during the reversal of the heliospheric magnetic field
polarity, the regular field is negligible, and the trajectories of
the CR particles are controlled only by the random field.
This implies that during the duration of the polarity reversal,
the potentials for particles of opposite electric charge should be
approximately equal.
Imposing the constraint:
⟨ V^(e^-)⟩_ reversal =
⟨ V^(p)⟩_ reversal
for averages of the potentials during the field polarity reversal
(that is approximately the time interval from May to July 2014),
we arrive at an estimate of the potential shift required for electrons:
Δ V_ LIS^(e^-)≃ 1.14 GV.
The electron LIS spectrum calculated with this shift
is shown in the bottom–left panel of Fig. <ref>.
To connect this estimate of the LIS spectrum to the
Voyager data (that are available only at very low energy: E ≲ 75 MeV)
seems to require a non trivial spectral shape,
perhaps indicating the presence of an additional low energy
component in the electron spectrum.
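As a purely schematic sketch of how such a shift can be extracted, the snippet below assumes that the reversal constraint is imposed on the physical potentials V = V_fit − Δ V_LIS, so that Δ V_LIS^(e^-) = Δ V_LIS^(p) + ⟨ V_fit^(e^-) − V_fit^(p) ⟩_reversal. The proton shift of 0.29 GV is the value quoted above, while the array names and the exact averaging window are placeholders and not taken from the analysis.

```python
import numpy as np

def electron_lis_shift(t, V_fit_p, V_fit_e, dV_lis_p=0.29,
                       t_start=2014.35, t_end=2014.55):
    """Estimate Delta V_LIS for electrons from the polarity-reversal constraint.

    t, V_fit_p, V_fit_e : arrays of observation times (fractional years) and of
    the fitted potentials at a fixed rigidity for protons and electrons.
    The averaging window (roughly May-July 2014) is indicative only.
    """
    w = (t >= t_start) & (t <= t_end)
    return dV_lis_p + np.mean(V_fit_e[w] - V_fit_p[w])
```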
For positrons no measurements at large distance from the Sun are available
to constrain the shape of the e^+ LIS spectrum, however it is possible
to estimate the shift Δ V_ LIS^(e^+) comparing fits to the
p and e^+ spectra taken simultaneously and averaged over
one Bartels rotation <cit.>.
The potentials for p and e^+ are shown in Fig. <ref>,
and are consistent with a constant difference:
V_ fit^(e^+) (, t) ≃ V_ fit^(p) (, t) - 0.124 GV ,
suggesting that Δ V_ LIS^(e^+)≃Δ V_ LIS^(p) - 0.124 GV.
Adopting this shift one obtains for positrons the LIS spectrum
shown with the thick solid line in the bottom–right panel in Fig. <ref>.
The estimates of the LIS spectra obtained in this section are only tentative,
and are not justified by a theoretical model, and therefore
of limited value. In particular, the assumption that Δ V_ LIS is rigidity
independent does not have a good justification,
except for the fact that it results,
for all the four particle types considered here (p, helium nuclei, e^∓),
in a remarkably simple form for the LIS spectra,
with a shape determined by only two parameters
(the spectral index α and the potential Δ V_ LIS).
The possible implications of this result deserve a more detailed study.
The study of the shape of the e^∓ LIS spectra
is of particular importance because different models predict that in the
kinematical range where solar modulations are important
(0.1 ≲≲ 10 GV) one should observe
spectral structures, associated for example to the critical energy where
energy losses during interstellar propagation
become the dominant sink mechanism for e^∓
(overtaking escape from the Galaxy) <cit.>,
or the critical energy where a new source mechanism
(such as acceleration in Pulsars) becomes the dominant one <cit.>.
The simple shape of the e^∓ LIS spectra suggested by our study
disfavours these possibilities.
§ HYSTERESIS LOOPS
§.§ The 22–year solar cycle
An instructive way to compare the proton and
electron potentials is shown in Fig. <ref>, where the top
panel shows the trajectory
of the point {V_p(t), V_e^- (t)} that represents the potentials at
rigidity 1 GV obtained fitting electron and proton spectra measured
at the same time t by PAMELA or AMS02.
For the PAMELA data the plot shows the potentials obtained fitting the
seven electron spectra in
<cit.>, together with an interpolation
of the potentials obtained fitting the proton spectra
<cit.>.
The PAMELA measurements are in the time
interval from July 2006 to October 2009 and cover the
last part of solar cycle 23 when solar activity goes toward its minimum,
with polarity A < 0.
For the AMS02 data we show the results of fits to all days
where both p and e^- spectra have been measured, with the broken line
connecting all measurements (in order of increasing time).
The AMS02 data start in May 2011, and covers most of solar cycle 24,
including the phase of solar maximum where one observes
the reversal of the solar magnetic field polarity.
Inspecting Fig. <ref> one can observe some striking features,
with the trajectory of the potential pair {V_p(t), V_e^- (t)}
that draws a loop.
From the beginning of the AMS02 observations until the solar maximum
around the middle of 2014 (a period where A < 0),
both potentials V_p(t) and V_e^-(t),
averaging over fluctuations,
grow gradually at approximately the same rate but with V_e^- (t) < V_p (t).
The time interval 2014–2016 corresponds to an extended
solar maximum phase and shows an
evident double peak structure separated by a gap, a structure
that is also observed in other solar cycles.
During this phase of the cycle the proton
potential reaches its maximum before the potential for electrons.
In the subsequent phase of the cycle
(with positive polarity A > 0) both potentials
decrease, again at approximately the same rate, but the inequality
for the potential is reversed (V_e^- (t) > V_p (t)).
In the top panel of Fig. <ref> the complicated form of the line
that connects the potentials for the AMS02 daily spectra encodes
valuable information, but performing moving averages of the
two potentials allows one to obtain the much simpler trajectory,
shown in the bottom panel, where the “global loop”
of the trajectory is more clearly visible.
The data strongly suggest that the
point {V_p(t), V_e^- (t)} travels along the loop
in a clockwise sense for cycles (like solar cycle 23) where
the magnetic field polarity at the start of the cycle
(that is at solar minimum) is positive, and in an anti–clockwise
sense for cycles (like solar cycle 24) where the situation is opposite.
In fact, in Fig. <ref>, one can observe
that in the time interval where only the PAMELA data are available
both potentials decrease gradually (with V_e^- (t) < V_p(t)),
and the pair {V_p (t), V_e^- (t) }
completes (around the end of 2006) a clockwise loop at the solar minimum that separates
solar cycles 23 and 24. After a gap in the observations of approximately
1.6 years, the AMS02 become available during the growing phase
of solar cycle 24, and one observes a reversal of the trajectory
with both potentials growing (with V_e^- (t) < V_p(t) as before) and therefore moving in an
anti–clockwise sense along the loop.
If this scenario is correct, during the current solar cycle (number 25)
that started around December 2019, one should observe the point that
represents the potential pair to move in clockwise sense along a loop
that during the initial phase of increasing solar activity has V_e^- > V_p.
§.§ Solar activity transients
In the bottom panel of Fig. <ref> are also evident
some loop–like structures of shorter time scale,
that are in coincidence with similar structures observed for the flux–flux correlations
of a single particle (as discussed in section <ref>
and illustrated in Fig. <ref>).
These effects can be also observed studying correlations
between the values of the potential at different rigidities.
This is illustrated in Fig. <ref> that shows the trajectory of the point
{V_1 (t), Δ V(t)} where V_1 = V_[1 GV] and
Δ V = (V_[10 GV] - V_[1 GV]) that describes the time evolution
of the potentials obtained fitting the daily spectra
measured by AMS02 for protons, helium nuclei and electrons.
In the three panels at the top the broken line
connects the results of the fits to the daily spectra for the three particle types
(the errors are not shown to avoid clutter).
One can note that for a fixed value of V_1, the value of Δ V is not unique but it
has a finite range.
The three panels in the middle row of
the figure show moving averages of the potential
after integration over time intervals of 81 days (three Bartels rotations).
The simplification obtained by performing
the moving average makes evident some interesting “hysteresis structures”.
These structures are of course the same ones visible in the flux–flux correlations
of Fig. <ref>; it is however interesting to note
that the modulation potential describes the state of the heliosphere,
and is independent from the shape of the spectra of the particles
in interstellar space.
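The moving averages used for these trajectories can be reproduced with a few lines of code. The sketch below is only schematic: it assumes a daily, equally spaced series of fitted potentials, whereas the real AMS02 daily series has gaps that would first need to be handled (for instance by interpolation).

```python
import numpy as np

def moving_average(x, window_days=81):
    """Centered running mean over `window_days` (81 d = three Bartels rotations)."""
    kernel = np.ones(window_days) / window_days
    return np.convolve(x, kernel, mode="valid")

# Smoothed trajectory of the point {V_1(t), dV(t)} used to draw the loops:
# V1_smooth = moving_average(V1_daily, 81)            # or 378 for 14 Bartels rotations
# dV_smooth = moving_average(V10_daily - V1_daily, 81)
```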
Inspecting Fig. <ref> one can see that the hysteresis effects for
protons and helium are approximately equal, while the effects for electrons,
while strongly correlated, are significantly different.
This can be understood noting that the same solar activity events, such as large
CME's, can perturb both of the (different) regions
of the heliosphere where protons and electrons are propagating,
resulting in effects on the p and e^- spectra that are correlated but not identical.
The three panels in the bottom row of Fig. <ref>
show the trajectories of the potentials for moving averages with
a long integration time of 378 days (14 Bartels rotations).
For all three particles (p, He and e^-) one can see
some significant differences (with the same qualitative structure)
for the average potentials during phases of the solar cycle before and
after solar maximum.
This effect is the same that was observed by AMS02
in <cit.> studying the helium/proton ratio.
It is difficult to say at the moment what the origin of the
effect is, and whether it is associated with the ensemble of the
solar transient events in the solar cycle under study,
or is related to the general properties of the 22–year solar cycle.
As already discussed, performing moving averages
(of fluxes as in Fig. <ref>, or of potentials
as in Fig. <ref>)
allows the visualization of interesting structures in CR modulation,
but also erases valuable information encoded in the
evolution of modulations for time scales shorter than the averaging time.
To illustrate this point in Fig. <ref>
we show again the detailed (day to day) trajectory of the potential parameters
{V_1(t), Δ V(t)} for protons,
indicating a few (seven) days that correspond to major solar events.
These events have also resulted in Forbush decreases observed by neutron monitors.
To each event corresponds a large increase in the modulation potential,
and remarkably the increase of the potential at the higher rigidity (10 GV) is stronger
than at the lower one (1 GV).
These effects (as discussed in section <ref>) have been revealed in the past
<cit.>,
but a detailed explanation is still under construction.
The effects of large solar activity events on the CR spectra can evolve very
rapidly, on a time scale of hours, and following the details of this evolution
can be of great help in developing an understanding of these phenomena.
The AMS02 daily measurements are therefore of great interest.
As an example, in Fig. <ref> we show the trajectory
of the modulation potentials (for p, He and e^-) obtained fitting
the AMS02 daily spectra obtained during a few days around
one of the largest solar events during solar cycle 24.
This event was observed around the
summer solstice of 2015 <cit.>.
From 18–23 June, one of the largest sunspot active regions in the Sun (AR 12371),
at the time directly facing Earth, produced several flares,
giving rise to four CMEs that impacted Earth in the period 21–25 June.
The third and largest impact (June 22nd)
generated a G4-severe geomagnetic storm with spectacular auroras even
at low latitudes, followed by a Forbush decrease observed by ground-level detectors.
Fig. <ref> puts in evidence the trajectories of the modulation
potential for p, He and e^- (represented by the pair {V_1(t), Δ V(t)})
taken during a time interval of 16 days around the date of the
solar storm (starting 5 days before, and ending 10 days after).
A detailed description of this event is not possible here, but one can note
that it generated distortions of the spectra for all three CR particles
of very similar structure.
The spectral distortions generated by the event
developed rapidly, with a time scale of one day or less;
following this, the spectra returned to their pre-solar-event values
with a longer time scale of several days.
As noted before, the distortions (measured by the variation of the modulation potential)
were larger at the higher rigidity of 10 GV, and weaker at ≃ 1 GV, and this
appears to be true in most if not all cases.
Time structures qualitatively similar to what we have described
can be observed for other large solar activity events.
§ SUMMARY AND CONCLUSIONS
Most of the already rich literature that discusses the PAMELA and AMS02
data is based on the study of the time dependence of the CR fluxes in
different intervals of rigidity (or energy).
An alternative possibility is to extract from the data some
time dependent parameters that describe the CR spectral shapes.
In this work we have used this second approach, and demonstrated
that it is possible to accurately and economically describe the CR spectra
of each particle type in terms of a time dependent modulation potential V (, t).
A small (but not negligible) rigidity dependence of the potential is required
to fit the high precision data that are now available.
The main goal of this work has been to investigate
the origin of the phenomena observed by the AMS02 collaboration
and called “hysteresis effects”. Two of such effects have been shown by
combining measurements of the fluxes of helium and protons
<cit.> and
of electrons and protons <cit.>.
We suggest that two distinct mechanisms are acting to generate the effects.
A first mechanism is at the origin of the largest effect,
that is observed for the e^-/p combination with a long (≳ 1 yr)
time scale. This effect can be described as an “hysteresis loop”
for the potentials V_p (t) and V_e^- (t) (at any fixed rigidity )
for p and e^- spectra, with the same period of the 11–year solar cycle
(note that these loops can also be observed
as the hysteresis of the fluxes {_p (,t), _e^- (, t)}).
In fact, we suggest that this effect generates a “double loop”
with the potentials moving along trajectories of similar form
but in opposite directions in alternate solar cycles.
Fitting the AMS02 data one observes that
during the first part of solar cycle 24, before maximum,
the two potentials V_p (t) and V_e^- (t),
after averaging over fluctuations generated by solar transient events,
increase gradually with solar activity with V_p (t)> V_e^- (t).
After solar maximum the two potentials decrease gradually,
but the inequality is reversed: V_p (t) < V_e^- (t).
This results in a trajectory of the point {V_p (t), V_e^- (t)}
that follows, in an anti–clockwise sense,
a loop–like trajectory.
There are indications from the PAMELA data that a similar trajectory,
but moving in the opposite direction, was followed by the p and e^- potentials
during the previous solar cycle that finished in December 2009.
It is now natural to predict that the pair of p and e^- potentials will move
along loops of similar form, in opposite senses during even and odd solar cycles.
This prediction is based on some simple and well established results about the
propagation of charged particles in the heliosphere.
Because of the structure of the regular solar magnetic field
one has that when q A > 0 (that is when the product
of the electric charge q of the cosmic rays
and the polarity A of the solar magnetic field is positive)
the CR particles arrive at the Earth mainly from the
heliospheric poles,
while in the opposite case (q A <0) the CR particles arrive mainly
along the current sheet near the heliospheric equator.
The energy loss suffered by the particles during propagation
(and therefore the size of the modulations) at the same phase in a cycle
is larger for propagation close to the current sheet, and therefore
one has the inequality
V_[q A > 0](t) < V_[q A < 0](t) .
The polarity A is reversed at solar maximum (in the middle of one solar cycle),
and this, combined with the fact that the potentials are correlated with
the 11-year cycle of solar activity, generates the double loop structure.
A second mechanism is at the origin of two other effects
discussed in the AMS02 publications, namely:
(i) the “sharp structures”
observed in the e^-/p hysteresis that correspond to structures
observed in the time evolution of the fluxes for both particle types
<cit.>.
Similar sharp structures have not been reported but are also present
for He/p hysteresis curves, and become evident when performing moving averages
with integration times of 10–100 days.
(ii) the hysteresis effects observed combining the proton and helium fluxes
<cit.>.
In this work we argue that both effects (i) and (ii) have their
origin in the fact that CR spectra at the Earth
suffer modulations that cannot be described by one family of curves
controlled by a single time dependent parameter,
because the distortions generated by modulations can have different shapes at different times.
More explicitly, the value of the spectrum at one rigidity _1 does not determine
uniquely the spectrum at a different rigidity _2.
These variations in spectral shape can be observed
studying the hysteresis of pairs of measurements such as
{ (_1,t), (_2, t)} of the flux of a single particle type
for two distinct values of the rigidity,
or alternatively of the hysteresis for pairs of potentials
{ V (_1,t), V (_2, t)}.
These studies reveal that solar activity events, like large
CME's, that perturb the heliosphere causing rapid variations
(or “sharp structures”) in the time evolution of the CR fluxes at
any (sufficiently low) fixed value of the rigidity,
generate spectral distortions that are rigidity dependent,
with effects that are in general
more rapid and stronger at higher .
Therefore a hysteresis curve { (_1,t), (_2, t)}
or { V (_1,t), V (_2, t)}
in the presence of one such transient will also exhibit a “sharp structure”, typically
in the form of a clockwise (for _2 > _1) loop that extends
for the duration of the heliospheric perturbation associated to the
solar transient.
These effects are also visible in hysteresis studies,
such as those performed by AMS02, that combine measurements of the fluxes
of different particles at the same rigidity.
This is the case when comparing protons and electrons, where
(as discussed above) the particles suffer different modulations,
but it is also true when comparing protons and helium nuclei,
that suffer modulations that are approximately equal,
because the modulation effects act as distortions
on LIS spectra that have different shapes.
On the other hand, if the study is performed for the modulation potentials
(that are independent from the shape of the LIS spectra)
the sharp loop–like structures associated with solar activity events
are absent for the hysteresis of the potentials of
p and He, because the two particles suffer approximately equal modulations,
while they continue to exist for the p/e^- comparison,
because the two particles types have opposite electric charge and
propagate in different regions of the heliosphere, that are disturbed in different ways
by the solar events.
An interesting problem is to establish the origin of the hysteresis effect
reported by AMS02 comparing, in the same rigidity interval,
fluxes of protons and helium nuclei with a long (378 days) averaging time,
and observing that, for the same helium flux, the He/p ratio is larger
after solar maximum.
The same effect can be revealed comparing
fluxes (or modulation potentials) of either protons or helium nuclei,
at rigidities of order 1 GV and 5 GV, and observing that
the spectral shapes are different before and after the solar maximum of 2014,
and for equal flux at the lower rigidity, the flux at the higher rigidity
is larger (by approximately 4%) after solar maximum (see Fig. <ref>).
Establishing the origin of this effect is not easy.
One can notice that protons and helium nuclei arrive at the
Earth mainly from the heliospheric equator before solar maximum,
and mainly from the heliospheric poles after maximum, suggesting
that the difference in
modulation could follow from this fact.
However, in conflict with the hypothesis, one observes
a very similar effect (a larger flux at the higher rigidity)
for electrons, that have the opposite behaviour, arriving at the Earth from
the poles before maximum and from the equator after maximum,
so that the propagation effects should be reversed.
An alternative explanation is that
the before/after maximum asymmetry is generated by a difference in
a “lag effect” of the modulations when the (time averaged) solar activity
is increasing or decreasing, and in this case one should observe the same
effect in different solar cycles.
Another possibility is that the asymmetry is the cumulative
effect of the distortions generated by solar activity events
in the early and late parts of solar cycle 24. In this case
the average effect could be different during different solar cycles.
In this paper we have not addressed the problem
of constructing a model of CR propagation in the heliosphere
capable of generating modulations of different shape at different times,
based on information about the state (and history) of the heliosphere.
We have however developed a preliminary step, constructing a
“minimal” parametrization for the shape of the CR spectra at the Earth based
on a generalization of the FFA model with a rigidity dependent potential,
determined by its values at two arbitrary rigidities
(chosen as 1 GV and 10 GV here). This model allows one to describe in a very compact
way the differences in shape between spectra measured at different times.
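For illustration only, the sketch below builds such a two-parameter potential from its values at 1 GV and 10 GV; the linear interpolation in log10(R) is an assumption made here for the sketch and is not specified in the text.

```python
import numpy as np

def potential(R, V1, V10):
    """Rigidity-dependent modulation potential from its values at 1 and 10 GV.

    A linear interpolation in log10(R) is assumed purely for illustration;
    since log10(1 GV) = 0 and log10(10 GV) = 1, potential(1,...) = V1 and
    potential(10,...) = V10.
    """
    return V1 + (V10 - V1) * np.log10(R)
```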
Using this model, we have verified that the modulations
of protons, helium nuclei and positrons are in good approximation equal, with
mass dependent effects smaller than few percent also at rigidities below 1 GV.
Our phenomenological model for the description of solar modulations
also suggests the intriguing result that in a broad rigidity range ([0.1,100] GV for
p and He, and [0.3,10] GV for e^∓) the LIS spectra can be well described by
a very simple form: an exact power law in rigidity
modified by an approximately constant energy loss (of order 0.3 GeV for protons and
helium nuclei, 1.1 GeV for electrons, and 0.18 GeV for positrons).
The construction of a model that can successfully predict the
time dependence of the CR spectra at the Earth on the basis of information
about the heliosphere remains a challenging task,
necessary to validate the reconstruction of the cosmic ray interstellar spectra.
100
Potgieter:2013pdj
M. Potgieter,
“Solar Modulation of Cosmic Rays”
Living Rev. Solar Phys. 10, 3 (2013)
doi:10.12942/lrsp-2013-3
[arXiv:1306.4421 [physics.space-ph]].
simpson
J. A. Simpson,
“The Cosmic Ray Nucleonic Component: The Invention and Scientific
Uses of the Neutron Monitor”,
Space Science Reviews, 93, 11 (2000)
doi:10.1023/A:1026567706183
Adriani:2013as
O. Adriani, et al. [PAMELA Collaboration],
“Time dependence of the proton flux measured by PAMELA during the July 2006 - December 2009 solar minimum”,
Astrophys. J. 765, 91 (2013)
doi:10.1088/0004-637X/765/2/91
[arXiv:1301.4108 [astro-ph.HE]].
Martucci:2018pau
M. Martucci, et al. [PAMELA Collaboration],
“Proton Fluxes Measured by the PAMELA Experiment from the Minimum to the Maximum Solar Activity for Solar Cycle 24,”
Astrophys. J. Lett. 854, no.1, L2 (2018)
doi:10.3847/2041-8213/aaa9b2
[arXiv:1801.07112 [physics.space-ph]].
Adriani:2015kxa
O. Adriani, et al. [PAMELA Collaboration],
“Time dependence of the e^- flux measured by PAMELA during the
July 2006 – December 2009 solar minimum,”
Astrophys. J. 810, no.2, 142 (2015)
doi:10.1088/0004-637X/810/2/142
[arXiv:1512.01079 [astro-ph.SR]].
Adriani:2016uhu
O. Adriani, et al. [PAMELA Collaboration],
“Time Dependence of the Electron and Positron Components of the Cosmic Radiation Measured by the PAMELA Experiment between July 2006 and December 2015,”
Phys. Rev. Lett. 116, no.24, 241105 (2016)
doi:10.1103/PhysRevLett.116.241105
[arXiv:1606.08626 [astro-ph.HE]].
ams_bartels_protons
M. Aguilar et al. [AMS02 Collaboration],
“Observation of Fine Time Structures in the Cosmic Proton and Helium Fluxes with the Alpha Magnetic Spectrometer on the International Space Station,”
Phys. Rev. Lett. 121, no.5, 051101 (2018)
doi:10.1103/PhysRevLett.121.051101
ams_bartels_electrons
M. Aguilar et al. [AMS02 Collaboration],
“Observation of Complex Time Structures in the Cosmic-Ray Electron and Positron Fluxes with the Alpha Magnetic Spectrometer on the International Space Station”,
Phys. Rev. Lett. 121, no.5, 051102 (2018)
doi:10.1103/PhysRevLett.121.051102
ams_daily_protons
M. Aguilar et al. [AMS02 Collaboration],
“Periodicities in the Daily Proton Fluxes from 2011 to 2019 Measured by the Alpha Magnetic Spectrometer on the International Space Station from 1 to 100 GV”,
Phys. Rev. Lett. 127, no.27, 271102 (2021)
doi:10.1103/PhysRevLett.127.271102
ams_daily_helium
M. Aguilar et al. [AMS02 Collaboration],
“Properties of Daily Helium Fluxes,”
Phys. Rev. Lett. 128, no.23, 231102 (2022)
doi:10.1103/PhysRevLett.128.231102
ams_daily_electrons
M. Aguilar et al. [AMS02 Collaboration],
“Temporal Structures in Electron Spectra and Charge Sign Effects in Galactic Cosmic Rays”,
Phys. Rev. Lett. 130, no.16, 161001 (2023)
doi:10.1103/PhysRevLett.130.161001
ams_helium_isotopes
M. Aguilar et al. [AMS02 Collaboration],
“Properties of Cosmic Helium Isotopes Measured by the Alpha Magnetic Spectrometer,”
Phys. Rev. Lett. 123, no.18, 181102 (2019)
doi:10.1103/PhysRevLett.123.181102
Gleeson:1968zza
L. J. Gleeson and W. I. Axford,
“Solar Modulation of Galactic Cosmic Rays,”
Astrophys. J. 154, 1011 (1968)
doi:10.1086/149822
forbush-1937
S.E. Forbush,
“On the Effects in Cosmic-Ray Intensity Observed During
the Recent Magnetic Storm”,
Phys. Rev. 51, 1108 (1937).
doi:10.1103/PhysRev.51.1108.3
hyst1
H.J. Verschell, R.B. Mendell and S.A. Korff,
“A Hysteresis Effect in Cosmic Ray Modulation”,
In Proc. of 13th ICRC Vol. 2 p.1317 (1973).
rajan_loops
R.S. Rajan,
“Hysteresis of primary cosmic rays associated with Forbush decrease”,
Australian Journal of Physics, 29, 89 (1976)
doi:10.1071/PH760089
Munini:2018cgc
R. Munini, et al. [PAMELA Collaboration],
“Evidence of Energy and Charge Sign Dependence of the Recovery Time for the 2006 December Forbush Event Measured by the PAMELA Experiment,”
Astrophys. J. 853, no.1, 76 (2018)
doi:10.3847/1538-4357/aaa0c8
[arXiv:1803.06166 [astro-ph.HE]].
Lagoida:2021udw
I. A. Lagoida, V. V. Mikhailov, S. A. Voronov and M. D. Ngobeni,
“Energy Dependence of the Main Characteristics of Forbush Decreases, Obtained by the PAMELA Experiment,”
Bull. Russ. Acad. Sci. Phys. 85, no.11, 1276-1279 (2021)
doi:10.3103/S1062873821110186
DAMPE:2021qet
F. Alemanno et al. [DAMPE],
“Observations of Forbush Decreases of Cosmic-Ray Electrons and Positrons with the Dark Matter Particle Explorer,”
Astrophys. J. Lett. 920, no.2, L43 (2021)
doi:10.3847/2041-8213/ac2de6
[arXiv:2110.00123 [astro-ph.HE]].
ams_electrons
M. Aguilar et al. [AMS02 Collaboration],
“Towards Understanding the Origin of Cosmic-Ray Electrons,”
Phys. Rev. Lett. 122, no.10, 101101 (2019)
doi:10.1103/PhysRevLett.122.101101
voyager-2016
A. C. Cummings, et al.,
“Galactic Cosmic Rays in the Local Interstellar Medium: Voyager 1 Observations and Model Results,”
Astrophys. J. 831, no.1, 18 (2016)
doi:10.3847/0004-637X/831/1/18
Lipari:2014gfa
P. Lipari,
“Solar modulations by the regular heliospheric electromagnetic field”,
[arXiv:1408.0431 [astro-ph.HE]].
Boschini:2017fxq
M. J. Boschini, et al.,
“Solution of heliospheric propagation: unveiling the local interstellar spectra of cosmic ray species,”
Astrophys. J. 840, no.2, 115 (2017)
doi:10.3847/1538-4357/aa6e4f
[arXiv:1704.06337 [astro-ph.HE]].
Bisschoff:2019lne
D. Bisschoff, M. S. Potgieter and O. P. M. Aslam,
“New very local interstellar spectra for electrons, positrons, protons and light cosmic ray nuclei,”
Astrophys. J. 878, no.1, 59 (2019)
doi:10.3847/1538-4357/ab1e4a
[arXiv:1902.10438 [astro-ph.HE]].
Lipari:2018usj
P. Lipari,
“Spectral shapes of the fluxes of electrons and positrons and the average residence time of cosmic rays in the Galaxy,”
Phys. Rev. D 99, no.4, 043005 (2019)
doi:10.1103/PhysRevD.99.043005
[arXiv:1810.03195 [astro-ph.HE]].
DiMauro:2023oqx
M. Di Mauro, F. Donato, M. Korsmeier, S. Manconi and L. Orusa,
[arXiv:2304.01261 [astro-ph.HE]].
event-summer-solstice-2015
C.R. Augusto, et al.,
“The 2015 Summer Solstice Storm: One of the Major Geomagnetic Storms of Solar Cycle 24 Observed at Ground Level”,
Solar Physics 293, 84 (2018).
DOI:10.1007/s11207-018-1303-8
[arXiv:1805.05277]
|
http://arxiv.org/abs/2306.01497v1
|
20230602124534
|
Data-Efficient French Language Modeling with CamemBERTa
|
[
"Wissam Antoun",
"Benoît Sagot",
"Djamé Seddah"
] |
cs.CL
|
[
"cs.CL"
] |
Recent advances in NLP have significantly improved the performance of language models on a variety of tasks.
While these advances are largely driven by the availability of large amounts of data and computational power, they also benefit from the development of better training methods and architectures.
In this paper, we introduce , a French DeBERTa model that builds upon the DeBERTaV3 architecture and training objective.
We evaluate our model's performance on a variety of French downstream tasks and datasets, including question answering, part-of-speech tagging, dependency parsing, named entity recognition, and the FLUE benchmark, and compare against CamemBERT, the state-of-the-art monolingual model for French.
Our results show that, given the same amount of training tokens, our model outperforms BERT-based models trained with MLM on most tasks.
Furthermore, our new model reaches similar or superior performance on downstream tasks compared to CamemBERT, despite being trained on only 30% of its total number of input tokens.
In addition to our experimental results, we also publicly release the weights and code implementation of , making it the first publicly available DeBERTaV3 model outside of the original paper and the first openly available implementation of a DeBERTaV3 training objective.[https://gitlab.inria.fr/almanach/CamemBERTahttps://gitlab.inria.fr/almanach/CamemBERTa]
§ INTRODUCTION
Advances in natural language processing (NLP) have been driven mainly by scaling up the size of pre-trained language models, along with the amount of data and compute required for training <cit.>.
However, these are not the only factors to determine a model's downstream performance, as the model's architecture and training objective are also important.
<cit.> showed that we can improve a model's performance by using disentangled attention, which uses two vectors to represent a token, one for position and one for content.
<cit.> later showed that performance could be further improved by using ELECTRA's <cit.> self-supervised and sample-efficient replaced token detection objective.
Another crucial aspect lies in the ability to train models faster, which allows for quick iteration and thus accelerates the research process and allows for more efficient exploration of new ideas <cit.>.
This research aims to develop data-efficient and optimized training techniques that can improve performance in downstream tasks, while reducing the required training corpus size and compute.
To achieve this goal, we propose a new data-efficient French language model based on DeBERTaV3 <cit.>.
Our proposed model aims to optimize the training process by using a sample-efficient training objective, a state-of-the-art model architecture, and an efficient implementation.
We evaluate downstream performance with a variety of NLP tasks, including dependency parsing, part-of-speech tagging, named entity recognition, text classification, and question answering.
We compare our model to a BERT model trained with the masked language modeling (MLM) objective using the same tokenizer and training corpus, and to the state-of-the-art French language model, CamemBERT <cit.>, which required three times as many training iterations.
Our results show that our proposed model reaches or establishes a new state-of-the-art using one third of the computational budget of its main predecessors.
Our contributions can be summarized as follows:
* We propose a new data-efficient French language model, which we train based on our DeBERTaV3 re-implementation with our optimized training recipe.
* We empirically show that under the same conditions, our model outperforms Transformer models trained with MLM on most tasks, and that it reaches or establishes a new state-of-the-art even when compared with models trained for three times as long.
* Our release is the only publicly available implementation of DeBERTaV3's training objective, and the first for a monolingual model other than the original paper.
Our code and models are available under an open-source license[https://gitlab.inria.fr/almanach/CamemBERTahttps://gitlab.inria.fr/almanach/CamemBERTa], making it easy for researchers to reproduce our results and build upon our work.
§ RELATED WORKS
Transformers. This architecture has been widely adopted in NLP tasks such as language modeling, mainly due to the use of the self-attention mechanisms <cit.>, which allow the model to weigh the importance of different parts of the input when making predictions.
A downside of the Transformer block is that it is permutation-invariant, which inhibits the model from encoding word order information.
Originally, the authors proposed to add either a fixed sinusoidal pattern or a learned positional embedding as a positional bias to the input token embeddings.
Later studies have shown that using relative positional embeddings is more effective <cit.>.
Recently, <cit.> proposed a new disentangled attention mechanism, which considers both the relative position and the content of the input tokens as separate vectors.
Pre-trained French Language Models.
Current language models available for French are trained using either Masked Language Modeling (MLM) or Causal Language Modeling (CLM).
<cit.> and <cit.> are two of the most popular contemporary French models, both trained with masked language modeling.
Other models include FrALBERT <cit.>, a French version of ALBERT <cit.>, LePetit <cit.> which is a small version of CamemBERT, and D’AlemBERT <cit.>, a RoBERTa <cit.> based language model targeted towards Early Modern French. BARThez <cit.> is a sequence-to-sequence model trained with BART's objective <cit.>, and PAGnol <cit.> and Cedille <cit.> are models trained with the CLM objective.
To the best of our knowledge, there is no prior effort in developing language models with this improved disentangled attention mechanism and objectives other than MLM/CLM beyond English.
§ : METHODOLOGY
The following section details our proposed architecture and pre-training objective, along with descriptions for the downstream tasks.
Architecture
is based on the DeBERTaV3 <cit.> architecture which uses two vectors to encode the word and its position, with the premise being that the relative position of a word pair should also directly affect the computed attention weights.
The V3 version optimizes the initial DeBERTa architecture by sharing the relative position embedding projection layers across all the encoder layers, and by adding a convolution layer alongside the first encoder layer.[See Section 5.3 of the DeBERTa paper <cit.>]
We use a base model configuration with 12 layers, 12 attention heads, a hidden size of 768, and a 32k vocabulary.
Training Objective
We follow the DeBERTaV3 <cit.> pretraining strategy by using the replaced token detection (RTD) pre-training loss first introduced in ELECTRA <cit.>, with a generator and discriminator based on the DeBERTa architecture.
During pre-training we project the generator embeddings to 256 dimensions and keep the generator model at 12 layers.
The generator is trained using the MLM objective, where we dynamically mask 15% of the input tokens.
We then sample replacements for the masked tokens from the generator, and feed the output along with the unmasked tokens to the discriminator, which is tasked with identifying the tokens that were replaced by the generator.
The RTD objective increases sample efficiency since the model is predicting over all input tokens instead of the 15% masked tokens.
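The data flow of one RTD step can be summarized with a short sketch. This is a minimal PyTorch-style illustration rather than the paper's implementation (which is in TensorFlow 2): the `generator` and `discriminator` callables and their output shapes are assumptions, special-token handling is omitted, and the loss weight of 50 is ELECTRA's default rather than a value stated here.

```python
import torch
import torch.nn.functional as F

def rtd_step(input_ids, generator, discriminator, mask_token_id, mask_prob=0.15):
    """One replaced-token-detection step (sketch); input_ids has shape (batch, seq)."""
    mask = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    corrupted = input_ids.masked_fill(mask, mask_token_id)

    gen_logits = generator(corrupted)                    # assumed shape (batch, seq, vocab)
    mlm_loss = F.cross_entropy(gen_logits[mask], input_ids[mask])  # MLM on masked positions only

    sampled = torch.distributions.Categorical(logits=gen_logits[mask]).sample()
    replaced = input_ids.clone()
    replaced[mask] = sampled                             # generator guesses fill the masks

    is_replaced = (replaced != input_ids).float()        # RTD labels over *all* tokens
    disc_logits = discriminator(replaced)                # assumed shape (batch, seq)
    rtd_loss = F.binary_cross_entropy_with_logits(disc_logits, is_replaced)

    return mlm_loss + 50.0 * rtd_loss                    # 50 is ELECTRA's discriminator weight
```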
In DeBERTaV3, the authors hypothesized and showed that sharing token embeddings between the generator and the discriminator results in a tug-of-war situation, where the MLM and RTD tasks pull the embedding vectors into opposing directions.
To alleviate this problem, the authors implemented Gradient-Disentangled Embedding Sharing (GDES), a method that re-parameterizes the discriminator's token embeddings as E_D = sg(E_G) + E_Δ, where sg stops the gradient flow from the RTD loss to the generator token embeddings E_G, and hence the loss gradient only updates a Difference Embedding matrix E_Δ that is added to E_G to form the discriminator token embeddings E_D.
After pre-training, E_Δ and E_G are summed to get the final E_D and E_Δ is then discarded.
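The re-parameterization can be illustrated with a minimal PyTorch-style sketch; the class name is ours, and the paper's own implementation is in TensorFlow 2.

```python
import torch
import torch.nn as nn

class GDESEmbedding(nn.Module):
    """E_D = sg(E_G) + E_delta: the generator embedding is shared but detached,
    so the RTD loss only updates the difference matrix E_delta."""

    def __init__(self, generator_embedding: nn.Embedding):
        super().__init__()
        self.generator_embedding = generator_embedding              # E_G, updated by the MLM loss
        self.delta = nn.Parameter(torch.zeros_like(generator_embedding.weight))  # E_delta

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        weight = self.generator_embedding.weight.detach() + self.delta   # sg(E_G) + E_delta
        return nn.functional.embedding(input_ids, weight)

# After pre-training, E_G + E_delta is stored as the final discriminator embedding
# and E_delta is discarded, as described above.
```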
Pre-Training
We pre-train on the French subset of CCNet[See Appendix <ref> for more information on dataset choice.] <cit.>, the same corpus used to pre-train CamemBERT_CCNet <cit.>.[We go over the pre-training dataset choice in the experiments section.]
Moreover we reuse CamemBERT_CCNet's tokenizer <cit.>.
By reusing the pre-training corpus and tokenizer, we isolate the performance differences to the model architecture and training objective variables.
Optimization
To speed up the pre-training experiments, we split the pre-training into two phases; in phase 1, the model is trained with a maximum sequence length of 128 tokens for 10,000 steps with 2,000 warm-up steps and a very large batch size of 67,584.
In phase 2, the maximum sequence length is increased to the full model capacity of 512 tokens for 3,300 steps with 200 warm-up steps and a batch size of 27,648.
Because we use very large batch sizes, we optimize the model using the LAMB optimizer <cit.> with a learning rate of 6e^-3, β_1 = 0.878, and β_2 = 0.974.
§ EXPERIMENTS AND RESULTS
Pre-Training Setup
We re-implement the RTD pre-training objective with GDES, since no public implementation was available at the time of writing.
Our training implementation is based on Nvidia's ELECTRA and BERT TensorFlow2 implementations.[https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/https://github.com/NVIDIA/DeepLearningExamples/]
We train our models for 8 days on 6 Nvidia A40 GPUs with Horovod <cit.>, and make use of XLA compilation, mixed precision and gradient accumulation to speed up training and to fit large batch sizes with our limited compute.
During pre-training, our model would have seen 133B tokens compared to 419B tokens for CamemBERT_CCNet which was trained for 100K steps. This represents roughly 30% of CamemBERT's full training.
Hence for a fair comparison, we train a RoBERTa model, which we dub CamemBERT_30%, using our same exact pre-training setup but with the MLM objective.
Downstream Evaluation
We compare our models, CamemBERT_CCNet, and CamemBERT_30%, on a diverse set of French downstream tasks and datasets, namely:
Question Answering (QA) on FQuAD 1.0 <cit.>, Part-Of-Speech (POS) tagging and Dependency Parsing on GSD <cit.>, Rhapsodie <cit.>, Sequoia <cit.> in their UD v2.2 versions and the French Social Media Bank[We follow <cit.> and use their shuffled version of the treebank, which they split into around 2000 sentences for training, and 1000 for each the dev and test sets] <cit.>, Named Entity Recognition (NER) on the 2008 version of FTB <cit.> with NER annotation by <cit.>, and the FLUE benchmark <cit.>.
We use the dataset splits as provided by their respective authors, and we finetune using well-tested scripts from the Hugging Face library and the HOPS parser <cit.>.
We only perform hyper-parameter tuning for the NER and QA tasks. See Appendix <ref> for task-specific details.
Bold text shows the best statistically significant score over 5 seeds.
Question Answering.
We evaluate our model on the FQuAD 1.0 dataset <cit.>, which is a SQuAD <cit.> style French question-answering dataset with 20731 examples for training, and 3188 for evaluation.
The results in Table <ref> show that our model outperforms CamemBERT_30% by 6.01 F1 points, but shows no statistically significant improvement over CamemBERT_CCNet in F1 and exact match (EM) scores.
Part-of-Speech and Dependency Parsing.
We report our results on 4 diverse French treebanks.
For the parser training, we make use of the HOPS parser <cit.> implementation, which is a graph-based dependency parser inspired by <cit.>.
Our configuration uses the Transformer model's last layer in addition to FastText embeddings <cit.>, character-level bi-directional RNN embeddings, and word embeddings trained during the fine-tuning phase.
Table <ref> shows that our proposed model consistently outperforms CamemBERT_30%, and competes with CamemBERT_CCNet on all 4 treebanks.
Named Entity Recognition is performed on the French Treebank (FTB) which contains 350k tokens in 27k sentences extracted from news articles.
Our results in Table <ref> surprisingly show that CamemBERT_30% outperforms CamemBERT_CCNet, while not being statistically better than our model.
FLUE Benchmark
We use datasets from the French Language Understanding Evaluation (FLUE) benchmark <cit.>, namely the French part of the paraphrase identification dataset PAWS-X <cit.>, and of XNLI <cit.>, in addition to CLS, a binary sentiment classification dataset of Amazon product reviews.
Our results (Table <ref>) show that our model outperforms all models on the CLS movie classification task, and matches the performance of CamemBERT_CCNet on the other FLUE tasks.
Pre-training Dataset Choice
We choose CCNet as our pre-training dataset instead of the more common OSCAR dataset <cit.>, as (i) it was shown to produce less offensive output <cit.> and (ii) it allowed us to be fully comparable with many of the CamemBERT models <cit.>, thus enabling meaningful comparisons.
Nevertheless, we also ran experiments with CamemBERT_OSCAR, and found that it performed slightly worse than CamemBERT_CCNet, as shown in Table <ref> Appendix <ref>.
Pre-training Compute and CO2 Impact
Our model was trained for 8 days on 6 A40 GPUs, compared to CamemBERT which was trained on 256 V100 GPUs for one day, which is roughly equivalent to 28 days of training on 6 A40 GPUs, since an NVIDIA A40 GPU is about 1.5x faster than a V100 GPU on language modeling tasks according to recent benchmarks.[See https://lambdalabs.com/blog/nvidia-rtx-a40-benchmarkshttps://lambdalabs.com/blog/nvidia-rtx-a40-benchmarks.]
Following the reports by <cit.> and <cit.> on the environmental impact of language model training, we use 's online carbon footprint calculator to provide the following estimates: 's pre-training used 700kWh and emitted 36kg CO_2 compared to 3.32MWh and 170kg for CamemBERT.[These estimates are specific to our training infrastructure situated in France.
These estimates highlight the remarkable efficiency achieved by CamemBERTa's pretraining process.
]
§ DISCUSSION
Our experiments clearly show that given the same training corpus, tokenizer, and total number of examples seen during training, outperforms the MLM trained CamemBERT model on all tasks except NER on FTB and POS tagging on Rhapsodie. Moreover, our model implementation is able to match or outperform a fully trained CamemBERT model, trained on around 3 times more samples and more compute. The strong performance of our model on higher level FLUE tasks suggest that lower level tasks such as POS tagging and dependency parsing are less challenging for current generation models, since they mostly require surface level information which the model can capture early in the training process, as suggested by <cit.>, compared to tasks such as question answering and text classification which require more complex processing.
Taking a step back and looking at the only DeBERTa model that includes French, mDeBERTa <cit.> we can see (cf. Table <ref>) that our model only requires 6.6% of its multilingual counterpart training samples to achieve competitive performance while additionally also outperforming the XLM-R model <cit.> trained on a much larger training sample size.
This confirms the interest in using such training paradigms in compute limited scenarios for semantically demanding tasks such as question-answering or natural-language inference.
Last but not least, other competitive language models for French are available and although not the primary focus of this paper, we conducted a comparative analysis involving FlauBERT <cit.> and FrALBERT <cit.>.
The results, presented in Table <ref> in Appendix <ref>, demonstrate the superior performance of our model across all evaluated tasks in comparison to these French models. Additionally, it is worth noting that FlauBERT was trained for 17 days with 32 V100 GPUs, which is equivalent to 60 days of training on 6 A40 GPUs. This represents a 7.5-fold increase in computational resources employed compared to .
§ CONCLUSION
We presented , a data-efficient French language model trained on a large corpus of French text and the first publicly available DeBERTaV3-style pretrained model and implementation.
For a fair evaluation we reused the same corpus and tokenizer as CamemBERT_CCNet, but using only 30% of the total number of input training tokens.
We compared the performance of both models in addition to an MLM model trained from scratch under the same setup as , CamemBERT_30%, on a variety of downstream tasks.
Our experiments showed that our model outperforms CamemBERT_30% on all tasks except NER on FTB, and that it is able to match and even surpass CamemBERT_CCNet.
Furthermore, we have also made our optimized code implementation and pretrained model weights publicly available for others to use.
§ LIMITATIONS
Although our model is more efficient than previous models trained using the MLM objective and the standard transformer architecture, we notice that the model runs around 30% slower.
This is due to the disentangled attention mechanism, which is more computationally expensive than the standard attention mechanism.
We also note that at the time of writing, the DeBERTaV3 TensorFlow 2 implementation available on HuggingFace's Transformers library <cit.> experiences heavy slowdowns with TPU backends.
Our attempts to solve this issue were unsuccessful, and we were unable to train our model on TPUs.
§ ETHICS STATEMENT
We propose a model trained using DeBERTaV3 style pre-training along with an optimized training implementation, which reduces training computation cost when compared to previous models, and hence greatly reduces the energy cost and environmental impact of language model training.
We trained our model using the CCNet dataset, for which we direct the reader to for further discussion on bias and ethical considerations.
Our experiments do not include any additional data collection or human annotators.
Like other language models trained on massive corpora, there may be potential biases present in the training data, which could affect the output of our models.
Therefore, we advise against using these models in production without thorough testing.
All our experiments were carried out on clusters whose energy sources consist of nuclear (65–75%), about 20% renewables, and the remainder gas.
§ ACKNOWLEDGEMENTS
This work was partly funded by Benoît Sagot's chair in the PRAIRIE institute funded by the French national research agency (ANR) as part of the “Investissements d’avenir” programme under the reference . This work also received
funding from the European Union’s Horizon 2020
research and innovation programme under grant
agreement No. 101021607. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
acl_natbib
§ APPENDIX
§ EXPERIMENTS RESULTS ON OSCAR AND DROPOUT
§ NEGATIVE RESULTS
In addition to our main results, we attempted to improve the performance of our model by adding BPE-Dropout <cit.> to the tokenization process, as it was shown that this method of subword regularization improves performance on translation tasks.
We retrain our model with BPE-Dropout, dubbed CamemBERTa_dropout, and compare the results to our original model in Table <ref>.
We observe that by adding BPE-Dropout, we obtain a decrease in performance on most tasks, except for POS tagging and dependency parsing, where the performance does not change.
§ HYPER-PARAMETERS
For experiments on the FLUE benchmark we use the same hyper-parameters as the authors of CamemBERT on the NLI task.
As for POS tagging and dependency parsing, we use the same configurations as the one used in <cit.>.
|
http://arxiv.org/abs/2306.02485v2
|
20230604213134
|
Study of gapped phases of 4d gauge theories using temporal gauging of the $\mathbb{Z}_N$ 1-form symmetry
|
[
"Mendel Nguyen",
"Yuya Tanizaki",
"Mithat Ünsal"
] |
hep-th
|
[
"hep-th",
"cond-mat.str-el"
] |
§ INTRODUCTION
Color confinement is one of the most remarkable phenomena in 4d non-Abelian gauge theories, and we are continuously developing various techniques to understand its physical mechanism. Importantly, we are interested in the vacuum structure in the space of gauge theories as well as its properties for a specific theory. This motivation naturally leads us to classify the possible vacua as states of quantum phases of matter.
The classification problem of possible gapped phases of 4d SU(N) gauge theories (with adjoint matter) has a long history, and one of the key ideas is to study the behavior of the interparticle potential for probe particles.
We can introduce the test quark as the Wilson loop operator, and the electric charge of the test quark is characterized by the center elements of the gauge group, ℤ_N⊂ SU(N). Then, Wilson proposed that confinement and Higgs phases are discriminated by studying whether the Wilson loop shows the area law or the perimeter law <cit.>.
Interestingly, we can also consider magnetic particles as well as electric ones. The magnetic charges belong to ℤ_N=π_1(SU(N)/ℤ_N), whose elements specify possible Dirac strings, and we can describe their worldlines using 't Hooft loops.
The above observation leads to the Wilson–'t Hooft classification, which says that the gapped phases are classified according to the set of deconfined dyonic lines in ℤ_N×ℤ_N <cit.>.
In the modern perspective of generalized global symmetry in quantum field theories (QFTs), this Wilson–'t Hooft classification is a bit mysterious.
When we consider a 4d gauge theory with (generalized) locality, we need to specify the global structure of the gauge group, such as SU(N) vs. SU(N)/ℤ_N.
Once the global structure is specified, we cannot have both Wilson and 't Hooft loops as genuine line operators <cit.>.
Only N of N^2 dyonic lines are genuine line operators and they have to be mutually local. The other lines are non-genuine and live on the boundaries of topological surface operators, which explains the Wilson–'t Hooft commutation relation kinematically.
These mutually-local dyonic lines specify an order-N group G (⊂ℤ_N×ℤ_N), and the theory has a G 1-form symmetry <cit.>.
It should be noted that so far we have not discussed the dynamics of gauge theories at all in this paragraph; everything is just about the definition of genuine line operators even though the order-N subgroup of ℤ_N×ℤ_N appears similarly as in the Wilson–'t Hooft classification of gapped phases.
In this paper, let us always choose the global structure of the gauge group to be SU(N). Then the 1-form symmetry is denoted by ℤ_N^[1], which measures the ℤ_N electric charge of the Wilson loop.
Since the 't Hooft lines are not genuine line operators, we do not have the magnetic counterpart of the 1-form symmetry that measures ℤ_N.
This situation would naturally raise the question of why we need the whole set of dyonic lines to characterize gapped phases in the Wilson–'t Hooft classification.
Here, we wish to answer this question and make a clear connection between the Wilson–'t Hooft classification and the classification via the 1-form symmetry.
To achieve this goal, we introduce “temporal gauging” of the 1-form symmetry and apply this technique to produce 3d QFTs with ℤ_N×ℤ_N 1-form symmetry out of 4d QFTs with ℤ_N 1-form symmetry.
We study the partition function of these 3d QFTs with ℤ_N^[1]×ℤ_N^[1] in the presence of the background gauge fields, which we call the 't Hooft partition function as it was first introduced by 't Hooft in Ref. <cit.>.
Let us emphasize that the temporal gauging is reversible, so the 't Hooft partition function carries the same amount of information as the 4d partition function.
The 't Hooft partition function turns out to be strongly constrained by the 4d Lorentz invariance of the original theory, and this is exactly the setup that justifies the Wilson–'t Hooft classification.
We show in Sec. <ref> that the classification of the 4d gapped phases according to the spontaneous breaking of the 4d 1-form symmetry enriched with symmetry-protected topological (SPT) states is in 1-to-1 correspondence with the Wilson–'t Hooft classification via the temporal gauging operation:
(ℤ_N^[1])_4 → (ℤ_n^[1])_4 enriched with the ℤ_n^[1] level-k SPT state
 ⟷ (1:1)  ℤ_N×ℤ_N → H = {x(n,0)+y(k,-N/n) ∈ ℤ_N×ℤ_N}.
The left-hand-side describes the characterization of the gapped phases in 4d QFT language and the right-hand-side describes it after performing the temporal gauging, and these two are shown to be completely equivalent.
In Sec. <ref>, we discuss the situation where the 4d 1-form symmetry has a mixed 't Hooft anomaly.
We shall see that the anomaly relation in 4d is translated into the higher-group structure of 3d QFTs after the temporal gauging.
We can reproduce the anomaly matching constraint by combining the higher-group structure with the symmetry breaking, ℤ_N^[1]×ℤ_N^[1] → H^[1], while the higher-group structure itself is not sufficient to reach this conclusion.
We then introduce the and operations to study the connection between different gapped phases in Sec. <ref>.
These operations generate an SL(2,ℤ) action on the space of 4d QFTs with ℤ_N^[1] symmetry, and they give automorphisms on ℤ_N×ℤ_N that relate different order-N subgroups H_1 ≅ H_2.
We apply it to the 𝒩=1^* supersymmetric Yang–Mills theory and study its rich vacuum structure from this viewpoint.
§ TEMPORAL GAUGING OF 1-FORM SYMMETRY AND 'T HOOFT PARTITION FUNCTION
Throughout this paper, we will analyze the gapped phases of 4d QFTs with ℤ_N 1-form symmetry, which is denoted by ℤ_N^[1].
For this purpose, we introduce the background gauge field B_4 for the 1-form symmetry and study properties of the partition function
𝒵[B_4].
This partition function is defined on any general 4-dimensional Riemannian manifold M_4.
In the following, we especially pay attention to the case
M_4=M_3× S^1.
We refer to this S^1 as the temporal direction, and denote its coordinate by τ∼τ+L.
As we are still interested in the four-dimensional dynamics, we basically assume that the size L of S^1 is sufficiently large and the phase is smoothly connected to the ground states.
In the following, we choose a spin structure for M_3× S^1.
By regarding the size of M_3 to be much larger than that of S^1, we can pretend that we are dealing with 3d QFTs.
Then, the 4d ℤ_N^[1] symmetry splits into <cit.>
(ℤ_N^[1])_4⟹
(ℤ_N^[1])_3× (ℤ_N^[0])_3.
Let B_m and A denote the background gauge fields for (ℤ_N^[1])_3 and (ℤ_N^[0])_3, respectively. Then they can be related to the 4d background gauge field B_4 as <cit.>
B_4 = B_m + A∧(dτ/L).
Here, B_m does not have a temporal component, and we sometimes call it the magnetic flux.
The temporal-spatial component is expressed by the 1-form gauge field A.
We define the temporal gauging by the path integral in terms of A:[We follow the convention that the background fields are denoted with upper case and the dynamical ones are with lower case. When we promote the background gauge fields to the dynamical ones, we change their letters to the corresponding lower case letters, accordingly. ]
𝒵_tH[B_e, B_m] = ∫𝒟a exp( (2πi/N)∫_M_3 B_e∪ a ) 𝒵[B_m + a∧(dτ/L)].
Here, B_e is the background gauge field for the ℤ_N 1-form symmetry, ℤ_N^[1], dual to the original ℤ_N^[0] symmetry.
Regarding 𝒵_tH as the partition function of the 3d QFT defined on M_3, it enjoys the ℤ_N^[1]×ℤ_N^[1] symmetry and (B_e, B_m) are the corresponding background 2-form gauge fields.
As this partition function (<ref>) was first introduced by 't Hooft in Ref. <cit.> for the case of M_4=T^4, we shall refer to it as the 't Hooft partition function.
As this theory enjoys the ℤ_N×ℤ_N 1-form symmetry, we must have the corresponding line operators.
Since the ℤ_N^[1] symmetry is the 1-form symmetry in the original 4d theory, let us refer to the corresponding operator as the Wilson loop, W_(0,1)(C).
It is then natural to refer to the charged object of ℤ_N^[1] as the 't Hooft loop, W_(1,0)(C). In general, the dyonic loop operator (with the magnetic charge m∈ℤ_N and the electric charge e∈ℤ_N) is denoted as
W_(m,e)(C) ∼ e^{(2πi/N)∫_D( m B_e + e B_m )}
with some ∂ D=C in the presence of the background gauge fields.[We note that these are genuine line operators in the effective 3d QFT on M_3 as the surface dependence appears only when we turn on background 2-form gauge fields. ]
Here, we need to emphasize that the 't Hooft partition function (<ref>) is introduced to understand the possible phases of 4d gauge theories, while and there are 3d ℤ_N 2-form gauge fields.
Even though [B_4] is covariant under Lorentz transformations, the temporal-gauging procedure does not respect it, and thus may seem at first sight to be less useful compared with the original one [B_4] for studying the 4d dynamics.
Let us point out, however, that the temporal gauging is a reversible operation, and thus [B_4] and [, ] should carry the same amount of information.
Moreover, it turns out in the following that [,] provides a convenient tool for the classification of gapped phases, and the physical meaning of each phase also becomes quite transparent.
§.§ Positivity of the 't Hooft partition function 𝒵_tH[B_e, B_m]
An important property of the 't Hooft partition function is its semi-positivity, and one can easily show it using reflection positivity.[We assume that the 4d theory is unitary and thus its path integral satisfies reflection positivity. ]
Regard S^1 as the temporal direction. We pick the antipodal points {τ=0,L/2}⊂ S^1 and choose M_3×{0, L/2}⊂ M_4 as the reflection plane for the Osterwalder–Schrader reflection.
Using the 1-form gauge invariance, or the topological nature of the codim-2 defects, we may set the specific alignment of the discrete gauge fields.
For the spatial part , we require that does not depend on τ at all so that is invariant under the Osterwalder–Schrader reflection.
To discuss the temporal gauge field a∧ (τ/L), we note the following trivial identity,
∫𝒟a F[a] = ∫𝒟a_1 𝒟a_2 F[a_1-a_2],
which holds for any functionals F[a]. Using this identity, we find
𝒵_tH[B_e, B_m] = ∫𝒟a_1 𝒟a_2 e^{(2πi/N)∫_M_3 B_e∪ (a_1-a_2)} 𝒵[B_m + a_1∧(dτ/L) - a_2∧(dτ/L)].
Thus, the defects for the temporal directions can be doubled, and these two defects can be put on arbitrary locations due to their topological nature.
By a suitable choice, they can be related by the Osterwalder–Schrader reflection.
Then, the reflection positivity ensures that
𝒵_tH[B_e, B_m] ≥ 0
for any B_e, B_m ∈ H^2(M_3;ℤ_N).
We note that this positivity is achieved by the temporal gauging procedure.
Indeed, the ordinary partition function, 𝒵[B_4], can take complex values in general in the presence of the background gauge fields, and it often provides us important information on the quantum phases of matter.
In general, temporal components of the gauge field flip their sign under the Osterwalder–Schrader reflection, so this complex phase is consistent with reflection positivity.
In the case of the 't Hooft partition function, the positivity argument works as we sum up all the possible gauge fields having temporal components (note that only B_m has purely spatial components), and we find (<ref>).
The physical meaning of the positivity (<ref>) becomes more transparent if we consider it in the operator formalism <cit.>.
Let ℋ be the Hilbert space when we quantize the theory on M_3, and let Ĥ[B_m] be the Hamiltonian operator that contains the magnetic flux B_m.
We can further define the projection operator onto the electric flux sector, P̂[B_e], which satisfies P̂[B_e]^† = P̂[B_e], P̂[B_e]^2 = P̂[B_e] and ∑_{B_e} P̂[B_e] = 1_ℋ.
Then the 't Hooft partition function can be written as
𝒵_tH[B_e, B_m] = Tr_ℋ[ P̂[B_e] e^{-L Ĥ[B_m]} ],
and then its positivity is quite manifest.
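The trace form makes the positivity easy to verify even numerically. The following minimal Python sketch (ours, purely illustrative; the dimension, the projector rank, and L are arbitrary choices) checks that Tr[P̂ e^{-LĤ}] is non-negative for random Hermitian Hamiltonians and random orthogonal projectors.

```python
# Illustrative check (not from the paper): Tr[P exp(-L H)] >= 0 for any
# Hermitian H and orthogonal projector P, as in the operator form of Z_tH.
import numpy as np

rng = np.random.default_rng(0)
d, rank, L = 6, 3, 1.0                     # arbitrary toy dimensions
for _ in range(100):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2               # random Hermitian "Hamiltonian"
    Q, _ = np.linalg.qr(rng.normal(size=(d, rank)))
    P = Q @ Q.T                            # orthogonal projector onto a random subspace
    w, U = np.linalg.eigh(H)
    expH = (U * np.exp(-L * w)) @ U.conj().T   # exp(-L H) via eigendecomposition
    val = np.trace(P @ expH)
    assert val.real >= -1e-12 and abs(val.imag) < 1e-12
print("Tr[P exp(-L H)] is non-negative in all samples")
```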
§.§ 𝒵_tH[B_e, B_m] for gapped phases and constraints from Lorentz invariance
In Ref. <cit.>, 't Hooft found the “duality equation” for 𝒵_tH[B_e, B_m] by considering a discrete rotation of the torus M_4=T^4. By assuming that the system is gapped, the duality equation implies that ℤ_N^[1]×ℤ_N^[1] should be spontaneously broken to an order-N subgroup, i.e.,
ℤ_N^[1]×ℤ_N^[1] → H^[1],
where the unbroken symmetry H has order N, |H|=N, and satisfies[We note that when H is an order-N subgroup of ℤ_N×ℤ_N, then the mutual locality condition (<ref>) is automatically satisfied.
To see this, let us assume (to derive a contradiction) that the mutual locality is violated, so that there are (x_1, y_1), (x_2,y_2) ∈ H such that M≡ x_1 y_2 - x_2 y_1 ≠ 0 N. Without loss of generality, this M (<N) can be taken to be a positive divisor of N.
Then, the subgroup {a (x_1, y_1)+b(x_2, y_2)}⊂ H has order N^2/M>N, and this contradicts with |H|=N. ]
∀ (x_1, y_1), (x_2, y_2)∈ H, ⟨ (x_1,y_1), (x_2,y_2)⟩≡ x_1 y_2 - x_2 y_1 =0 N.
This condition is referred to as the mutual locality condition.
In order to see how such a constraint on gapped phases arises, let us consider what would happen if ℤ_N^[1]×ℤ_N^[1] were not broken at all. By gauging B_e, we can undo the temporal gauging procedure and this gives the delta-functional constraint on A for (ℤ_N^[0])_3. This implies that (ℤ_N^[0])_3 is spontaneously broken, while (ℤ_N^[1])_3 is unbroken by assumption.
Since both of these symmetries arise from (ℤ_N^[1])_4, this option obviously violates the 4d Lorentz invariance.
Similarly, if we assume ℤ_N^[1]×ℤ_N^[1] were completely broken, we find that (ℤ_N^[0])_3 is unbroken while (ℤ_N^[1])_3 is broken, and again Lorentz invariance is violated.
These quick observations already tell us that Lorentz invariance puts severe constraints and requires the correct amount of symmetry breaking for ℤ_N^[1]×ℤ_N^[1], and we can actually find that it is broken down to an exactly order-N subgroup (with mutual locality) when assuming a mass gap (and also one technical assumption).[As this consequence is very similar to that of the anomaly-matching constraint, one might wonder if this can be understood from the mixed anomaly for ℤ_N×ℤ_N 1-form symmetry. However, this is not the case, and we emphasize that the 4d Lorentz invariance of the original theory plays a pivotal role here. ]
We shall give a review of the original argument by 't Hooft in Appendix <ref> to be self-contained.
Here, instead, let us perform explicit calculations of the 4d partition function [B_4] and [,] for gapped phases.
We here assume that the 4d ℤ_N 1-form symmetry is spontaneously broken to a subgroup,
(ℤ_N^[1])_4(ℤ_n^[1])_4,
where n is a positive divisor of N, and the vacuum state further acquires a nontrivial SPT phase for the unbroken (ℤ_n^[1])_4 symmetry.
The low-energy theory becomes ℤ_N/n topological field theory, and the partition function can be modeled as[We may consider more general 2-group gauge theories as possible models (see, e.g., Refs. <cit.>). Here, let us restrict our attention to these simplest possibilities. ]
𝒵[B_4] =
|H^0(M_4;ℤ_{N/n})|/|H^1(M_4;ℤ_{N/n})| ∑_{b∈ H^2(M_4;ℤ_{N/n})} exp( (2πi/(N/n)) ∫ b∪ B_4 )
×exp( (2πi k/n) ∫ (1/2)P_2( B_4/(N/n) ) ),
where P_2(B)=B∪ B+B∪_1 B is the Pontryagin square.[As long as working on torsion-free 4-manifolds, we can always take an integral lift of the discrete gauge field B and P_2(B) can be simply thought of as B∪ B by identifying B with one of the integral lifts. We shall use this property throughout this paper to simplify computations.]
The b field refers to the discrete ℤ_N/n 2-form gauge field (not a ℤ_N gauge field) for the topological field theory, and its path integral gives the delta-functional constraint on B_4 so that
∫_Σ B_4∈N/nℤ
for any closed 2-cycle Σ. As ∫ B_4 is well-defined N, we can regard B_4/(N/n) as the ℤ_n 2-form gauge field, and the second line on the right-hand-side of (<ref>) describes the level-k SPT action for this unbroken ℤ_n^[1] symmetry with k∼ k+n (given a spin structure).
Let us compute the 't Hooft partition function for (<ref>), which is given by
𝒵_tH[B_e, B_m] = (1/|H^0(M_3;ℤ_N)|) ∑_{a∈ H^1(M_3;ℤ_N)} exp( (2πi/N)∫ B_e∪ a )
×(|H^0(M_3× S^1;ℤ_{N/n})|/|H^1(M_3× S^1;ℤ_{N/n})|) ∑_{b_m∈ H^2(M_3;ℤ_{N/n})} ∑_{a'∈ H^1(M_3;ℤ_{N/n})}
×exp( (2πi/(N/n)) ∫ (b_m∪ a + B_m∪ a') + (2πi kn/N^2) ∫ B_m∪ a ).
We note that |H^1(M_3× S^1;ℤ_{N/n})| = (N/n)^{β_1(M_3)+1}, so |H^0(M_3× S^1;ℤ_{N/n})|/|H^1(M_3× S^1;ℤ_{N/n})| = 1/(N/n)^{β_1(M_3)}.
Here, β_i(M)=rank H^i(M;ℤ) is the i-th Betti number.
The summation over a' gives the delta-functional constraint on B_m, so we separate it from other path integrals:
𝒵_tH[B_e, B_m] = (1/(N/n)^{β_1(M_3)}) ∑_{a'∈ H^1(M_3;ℤ_{N/n})} exp( (2πi/(N/n)) ∫ a'∪ B_m )
×(1/N) ∑_{a∈ H^1(M_3;ℤ_N)} ∑_{b_m∈ H^2(M_3;ℤ_{N/n})} exp( (2πi/N)∫ a∪( n b_m + B_e + (kn/N) B_m ) )
= (1/(N/n)^{β_1(M_3)}) (N/n)^{β_1(M_3)} δ_N[n B_m] × (1/N) N^{β_1(M_3)} δ_N[ (N/n)( B_e + (kn/N) B_m ) ]
= N^{β_1(M_3)-1} δ_N[ n B_m, (N/n) B_e + k B_m ].
Here, δ_N[B] is the delta functional that gives 1 when ∫_ΣB ∈ Nℤ for every closed cycle Σ and gives 0 otherwise.
This shows that the deconfined lines are generated by
W_(0,n)(C) ∼ e^{(2πi/N)∫_D n B_m},
W_(N/n,k)(C) ∼ e^{(2πi/N)∫_D( (N/n) B_e + k B_m )},
with ∂ D=C,
and thus the unbroken subgroup H is given by
H={x(n,0)+y(k,-N/n)}⊂ℤ_N×ℤ_N,
and we can readily confirm that the mutual locality condition (<ref>) is satisfied.
We can also check that |H|=N and moreover that every order N subgroup of ℤ_N×ℤ_N appears in this way.[To see that every order N subgroup of ℤ_N ×ℤ_N appears, note that any such subgroup K arises from an index N sublattice L of ℤ×ℤ containing N ℤ× N ℤ such that K = L / (N ℤ× Nℤ). Then, since (N,0) and (0,N) are linearly independent vectors in L, a theorem on lattices implies that we can find a basis u,v of L of the form u ≡1/q(N,0), v ≡k/N (N,0) + 1/n (0,N), with q,n positive divisors of N and k an integer. The condition that L have index N in ℤ×ℤ then implies that N^2/qn = ⟨ u,v ⟩ = N, i.e., that N/q = n. Hence, K is precisely of the form (<ref>).]
In the above discussion, we start from the 4d partition function (<ref>) and derive (<ref>) by the temporal gauging, but we can reverse the logic to reproduce (<ref>) by performing the path integral over B_e of the 't Hooft partition function (<ref>), which achieves the equivalence mentioned in (<ref>). Let us recapitulate it here:
(ℤ_N^[1])_4 → (ℤ_n^[1])_4 enriched with the ℤ_n^[1] level-k SPT state
 ⟷ (1:1)  ℤ_N×ℤ_N → H = {x(n,0)+y(k,-N/n) ∈ ℤ_N×ℤ_N}.
Therefore, the order-N subgroup H of ℤ_N×ℤ_N correctly characterizes the gapped phases of 4d QFTs with ℤ_N^[1] symmetry: Vacuum states with different H are distinguished as quantum phases.
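As a concrete illustration of this dictionary, the short Python sketch below (ours, not taken from the paper) enumerates the subgroups H = {x(n,0)+y(k,-N/n)} for every divisor n of N and every k mod n, and checks that each has order N, that the mutual locality condition (<ref>) holds, and that their total number equals the divisor sum σ(N); the function names are ours.

```python
# Illustrative check: enumerate the order-N subgroups H = {x*(n,0) + y*(k, -N//n)}
# of Z_N x Z_N, verify |H| = N and mutual locality, and count them.
from itertools import product

def subgroup(N, n, k):
    """Subgroup of Z_N x Z_N generated by (n, 0) and (k, -N//n)."""
    g1, g2 = (n % N, 0), (k % N, (-N // n) % N)
    return {((x * g1[0] + y * g2[0]) % N, (x * g1[1] + y * g2[1]) % N)
            for x, y in product(range(N), repeat=2)}

def mutually_local(H, N):
    """Check <(x1,y1),(x2,y2)> = x1*y2 - x2*y1 = 0 mod N for all pairs."""
    return all((x1 * y2 - x2 * y1) % N == 0 for (x1, y1) in H for (x2, y2) in H)

N = 6
divisors = [d for d in range(1, N + 1) if N % d == 0]
subgroups = set()
for n in divisors:
    for k in range(n):                 # k is defined modulo n
        H = frozenset(subgroup(N, n, k))
        assert len(H) == N and mutually_local(H, N)
        subgroups.add(H)
print(len(subgroups), sum(divisors))   # both give the divisor sum sigma(6) = 12
```

For N=6 this gives the σ(6)=12 gapped phases that reappear below as the massive vacua of the 𝒩=1^* theory.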
§.§ Example: Lattice SU(N) Yang–Mills theory at strong coupling
It would be useful to compute [B_4] and [, ] in some microscopically solvable model for concrete understanding of their behaviors. Here, let us consider the strong-coupling expansion of the lattice SU(N) gauge theory with the Wilson action.
The Wilson action with the ℤ_N two-form gauge field is given by
S_W[U_ℓ, B_p] = -(1/(2g^2)) ∑_p ( e^{-(2πi/N) B_p} tr U_p + e^{(2πi/N) B_p} tr U_p^† ),
where U_ℓ denotes the SU(N)-valued link variable, U_p=𝒫∏_ℓ∈ p U_ℓ is the path-ordered products along the plaquette p, and B_p denotes the ℤ_N-valued plaquette variable, which is identified with B_4. The partition function is given by
𝒵[B_4] = ∫𝒟U_ℓ exp(-S_W[U_ℓ, B_p]).
We expand this partition function in terms of 1/g^2 in the strong-coupling expansion by using formulas of Haar integration, such as ∫ U (U)_i_1 i_2 (U^†)_j_1 j_2=1/Nδ_i_1 j_2δ_i_2 j_1.
Let us then expand the path-integral weight up to the O(1/g^2) term for each plaquette,
exp(-S_W) ≃ ∏_p ( 1 + (1/(2g^2)) ( e^{-(2πi/N) B_p} tr U_p + e^{(2πi/N) B_p} tr U_p^† ) ).
Then the partition function can be represented as a sum over closed surfaces,
𝒵[B_4] ≃ ∑_{Σ: closed surface} N^{χ(Σ)} (1/(2Ng^2))^{Area(Σ)} e^{-(2πi/N)∫_Σ B_4}.
We can think of this expression as the sum over the worldsheets of confining strings with the string tension σ=ln (2Ng^2) in lattice units.
When Σ is a contractible closed surface, we have ∫_Σ B_4=0 N. Thus, the nontrivial B_4 dependence appears only if the confining-string worldsheet wraps around nontrivial 2-cycles, and such processes are exponentially suppressed:
𝒵[B_4] - 𝒵[0] ≃ O(e^{-σ L^2}) → 0,
where L is the length of T^4.
This shows that, in the infinite-volume limit, we can regard 𝒵[B_4]→ 1, which corresponds to n=N and k=0 in (<ref>).
Now, let us perform the temporal gauging of (<ref>) to find 𝒵_tH[B_e, B_m].
As we have found that the B_4 dependence of 𝒵[B_4] is exponentially small, its Fourier transform localizes to B_e=0 and we get 𝒵_tH[B_e, B_m] ∝ δ_N[B_e].
More precisely, for B_e=0,
𝒵_tH[0, B_m] = ∫𝒟a 𝒵[B_m + a∧(dτ/L)]
≃ 𝒵[0] + O(e^{-σ L^2}) ≃ 𝒵[0].
On the other hand, if we take ∫_{(T^2)_{12}} B_e = 1 as an example of B_e ≠ 0, then exp( (2πi/N)∫ B_e∪ a ) = exp( (2πi/N)∫_{S^1} a_3 dx^3 ).
To cancel this phase in the summation of a, the confining-string worldsheet should wrap once around the 3-4 cycle, and we get
𝒵_tH[B_e(≠0), B_m] = ∫𝒟a e^{(2πi/N)∫ a_3 dx^3} 𝒵[B_m + a∧(dτ/L)]
≃ O(e^{-σ L^2}) → 0.
We actually find 𝒵_tH[B_e, B_m] = 𝒵[0] δ_N[B_e] neglecting the exponentially small contributions as L→∞, and the unbroken order-N subgroup is H={0}×ℤ_N⊂ℤ_N×ℤ_N.
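The localization of the Fourier transform can be mimicked by a one-holonomy toy computation. The sketch below (ours; a single ℤ_N holonomy a and a flux-independent weight stand in for the full path integral) shows that the a-sum produces the expected δ_N[B_e].

```python
# Toy version of the temporal gauging for the strong-coupling result: if the 4d
# partition function is flux-independent (Z = 1), the discrete Fourier transform
# over a single Z_N holonomy is proportional to delta_N[B_e].
import cmath

N = 6
def Z(a):          # flux-independent 4d partition function (confined phase)
    return 1.0

for Be in range(N):
    ZtH = sum(cmath.exp(2j * cmath.pi * Be * a / N) * Z(a) for a in range(N)) / N
    print(Be, round(abs(ZtH), 10))   # 1 for Be = 0, 0 otherwise
```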
§ ANOMALY MATCHING AND THE HIGHER-GROUP STRUCTURE
In general, global symmetry in QFTs may have an 't Hooft anomaly, which is an obstruction to the promotion of the global symmetry to local gauge redundancy.
The 't Hooft anomaly is invariant under any local and symmetric deformations of QFTs, and thus the low-energy effective theory is strongly constrained as it must reproduce the anomaly computed in ultraviolet.
In this section, we shall discuss the structure of the 't Hooft partition function [,] when the original 4d ℤ_N^[1] symmetry has a mixed 't Hooft anomaly.
Pure Yang–Mills theory:
As an example, let us consider the generalized anomaly, or global inconsistency, of pure SU(N) Yang–Mills theory.
The 4d Yang–Mills partition function 𝒵^YM_θ has an 't Hooft anomaly involving the θ periodicity <cit.> and we can detect it by introducing the background ℤ_N two-form gauge field B_4:
𝒵^YM_{θ+2π}[B_4] = exp( (2πi/N)∫ (1/2)P_2(B_4) ) 𝒵^YM_θ[B_4].
To satisfy the anomaly matching condition in the confined phase, the level crossing of the ground state is mandatory, as the two confined states at θ and θ+2π are distinct as 4d SPT states with ℤ_N^[1] symmetry.
Let us interpret this result using the 't Hooft partition function:
𝒵^YM_{tH,θ}[B_e, B_m] = ∫𝒟a exp( (2πi/N)∫ B_e∪ a ) 𝒵^YM_θ[B_m + a∧(dτ/L)].
By performing the temporal gauging on both sides of (<ref>), we find that
𝒵^YM_{tH,θ+2π}[B_e, B_m]
= ∫𝒟a e^{(2πi/N)∫ B_e∪ a} 𝒵^YM_{θ+2π}[B_m + a∧(dτ/L)]
= ∫𝒟a e^{(2πi/N)∫ B_e∪ a} e^{(2πi/N)∫ B_m∪ a} 𝒵^YM_θ[B_m + a∧(dτ/L)]
= 𝒵^YM_{tH,θ}[B_e + B_m, B_m].
This is nothing but the Witten effect <cit.>, which claims that the purely magnetic line at θ+2π is equivalent to the dyonic line at θ.
Equivalently, the (-1)-form transformation, θ→θ+2π, induces the nontrivial action on the 1-form symmetries, B_e→ B_e + B_m, which is an example of the higher-group structure <cit.>.
We note that the Witten effect, or the higher-group structure, itself does not give nontrivial constraints, in contrast to the 't Hooft anomaly. The trivial state, 𝒵^YM_{tH,θ}[B_e, B_m]=1, is consistent with the transformation B_e→ B_e + B_m, and this state is indeed realized by the high-temperature Yang–Mills theory at any value of θ.
The anomaly matching constraint is reproduced by considering the 4d Lorentz invariance.
As discussed in Sec. <ref>, the gapped state with 4d Lorentz invariance must have the symmetry breaking ℤ_N^[1]×ℤ_N^[1] → H^[1], and thus we can set the 't Hooft partition function for a given θ to be
𝒵^YM_{tH,θ}[B_e, B_m] = δ_N[n B_m] δ_N[(N/n) B_e + k B_m].
Dialing the θ parameter, θ→θ+2π, we should obtain
𝒵^YM_{tH,θ+2π}[B_e, B_m] = δ_N[n B_m] δ_N[(N/n) B_e + (k+(N/n)) B_m].
As k∼ k+n, we should encounter a phase transition as a function of θ if N/n is not a multiple of n.
This is always the case for ordinary confinement phases, n=N, while the totally Higgs phase, n=1, does not need the phase transition in θ. This reproduces the consequence of the 4d 't Hooft anomaly (<ref>).
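The same bookkeeping can be scanned mechanically. The snippet below (illustrative; ours) lists, for a few values of N, the divisors n for which the shift k → k + N/n is trivial modulo n, i.e., the phases that could be θ-independent.

```python
# Which unbroken subgroups Z_n can give a theta-independent gapped vacuum?
# theta -> theta + 2*pi shifts the SPT label k -> k + N/n (mod n), which is
# trivial only when N/n = 0 mod n.
def theta_independent_n(N):
    return [n for n in range(1, N + 1) if N % n == 0 and (N // n) % n == 0]

for N in (4, 6, 9, 12):
    print(N, theta_independent_n(N))
# 4 [1, 2] / 6 [1] / 9 [1, 3] / 12 [1, 2]: the confining phase n = N never appears
# for N > 1, while the totally Higgsed phase n = 1 always does.
```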
𝒩=1 supersymmetric Yang–Mills theory:
The higher group structure may be more evident in the 𝒩=1 super Yang–Mills (SYM) case, where the shift of the θ angle is related to the discrete chiral symmetry (ℤ_2N)_χ.
Introducing the discrete chiral gauge field A_χ, we find the 't Hooft anomaly,
𝒵^SYM[A_χ + dλ_χ, B_4]
= exp( (2πi/N)∫ λ_χ∪ (1/2)P_2(B_4) )
𝒵^SYM[A_χ, B_4].
By performing a similar computation as in (<ref>), this relation is translated as
𝒵^SYM_tH[A_χ + dλ_χ, B_e, B_m]
= 𝒵^SYM_tH[A_χ, B_e + λ_χ∪ B_m, B_m].
The 0-form chiral symmetry causes the Witten effect and induces a nontrivial action on the 1-form symmetry, B_e → B_e + λ_χ∪ B_m.
Again, the higher-group symmetry itself does not require the degeneracy of ground states, but it gives a nontrivial consequence when we further impose the 4d Lorentz invariance.
If we assume that the system is in a confined phase (i.e. n=N), then the 't Hooft argument shows that the partition function of a given vacuum should be described by
δ_N[B_e + k B_m],
with some k∼ k+N. Then, the higher-group structure discussed above indicates that the discrete chiral transformation interchanges the vacuum with label k to the vacuum with label k+1:
δ_N[B_e] → δ_N[B_e + B_m] → ⋯ → δ_N[B_e + (N-1) B_m] → δ_N[B_e],
where each arrow denotes the action of the discrete chiral transformation.
The 't Hooft partition function of 𝒩=1 SYM theory is then given by
𝒵^SYM_tH[B_e, B_m] = ∑_{k=1}^{N} δ_N[B_e + k B_m].
These N vacua are understood as the chiral broken vacua, (ℤ_2N)_χ → ℤ_2, and the label k specifies the phase of the gluino condensate, ⟨λ^2⟩ = Λ^3 e^{2πi k/N}.
This is exactly the vacuum structure expected from the anomaly matching condition obtained before the temporal gauging, and the same information is found via the higher-group structure combined with the constraint from 4d Lorentz invariance.
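A toy check of this statement, with a single pair of ℤ_N fluxes standing in for the background fields (our simplification), verifies that the shift B_e → B_e + B_m permutes the N vacua cyclically while leaving their sum invariant.

```python
# Toy check: the N chirally broken vacua delta_N[B_e + k B_m] are cyclically
# permuted by the Witten-effect shift B_e -> B_e + B_m, while their sum (the
# full SYM 't Hooft partition function) is invariant.
N = 5
delta = lambda x: 1 if x % N == 0 else 0
Z_vac = lambda k, Be, Bm: delta(Be + k * Bm)                    # k-th vacuum
Z_sum = lambda Be, Bm: sum(Z_vac(k, Be, Bm) for k in range(N))  # sum over vacua

for Be in range(N):
    for Bm in range(N):
        # each vacuum maps to the next one under the shift ...
        assert all(Z_vac(k, Be + Bm, Bm) == Z_vac(k + 1, Be, Bm) for k in range(N))
        # ... while the full partition function is invariant
        assert Z_sum(Be + Bm, Bm) == Z_sum(Be, Bm)
print("the chiral shift permutes the N vacua and leaves their sum invariant")
```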
§ S AND T OPERATIONS ON 4D QFTS WITH THE ℤ_N 1-FORM SYMMETRY
Let us introduce the formal operations, 𝒮 and 𝒯, that act on 4d QFTs with the ℤ_N 1-form symmetry:
𝒮: 𝒵[B_4] ↦ 𝒮𝒵[B_4] ≡ ∫𝒟b_4 𝒵[b_4] exp( (2πi/N)∫ B_4∪ b_4 ),
𝒯: 𝒵[B_4] ↦ 𝒯𝒵[B_4] ≡ 𝒵[B_4] exp( (2πi/N)∫ (1/2)P_2(B_4) ).
The 𝒮 operation dynamically gauges ℤ_N^[1] in 4d spacetime, and thus the original background gauge field is promoted to the dynamical field b_4. The gauged theory acquires the dual ℤ_N^[1] symmetry, and we introduce the background gauge field B_4 that couples to it.
The 𝒯 operation just shifts the local counterterm for the background gauge field B_4.
Here, we would like to emphasize that these 𝒮 and 𝒯 operations do not necessarily imply a duality/symmetry of a given 4d QFT.
We can always apply these operations as long as the 4d QFT has a ℤ_N 1-form symmetry, and generically these operations generate different QFTs.
One may say that 𝒮 and 𝒯 are morphisms in the category of 4d QFTs with the ℤ_N^[1] symmetry (see Refs. <cit.> for the case of 3d U(1) symmetry).
When the generated QFT is accidentally the same as the original one, these operations may be regarded as self-duality operations.
The SU(N) Yang–Mills theory with adjoint matter always has the self duality associated with θ→θ+2π as we see in (<ref>).
Examples with the full SL(2,ℤ) self-duality are the Cardy–Rabinovici model <cit.> and 𝒩=4 SYM theory <cit.>. In Sec. <ref>, we shall discuss the 𝒩=1^* SYM theory in detail.
§.§ S and T operations on 't Hooft partition functions
Let us study how these operations act on the 't Hooft partition function (<ref>). We define the 𝒮 and 𝒯 operations on 𝒵_tH[B_e, B_m] by the temporal gauging of the 𝒮- and 𝒯-transformed partition functions, respectively:
𝒮𝒵_tH[B_e, B_m] ≡ ∫𝒟a exp( (2πi/N)∫_M_3 B_e∪ a ) 𝒮𝒵[B_m + a∧(dτ/L)],
𝒯𝒵_tH[B_e, B_m] ≡ ∫𝒟a exp( (2πi/N)∫_M_3 B_e∪ a ) 𝒯𝒵[B_m + a∧(dτ/L)].
We can compute these path integrals explicitly to express the right-hand-side using .
The 𝒮 transformation is given by
𝒮𝒵_tH[B_e, B_m]
= ∫𝒟a e^{(2πi/N)∫_M_3 B_e∪ a} ∫𝒟b_m 𝒟a' e^{(2πi/N)∫_M_3( B_m∪ a' + b_m∪ a)} 𝒵[b_m + a'∧(dτ/L)]
= ∫𝒟b_m 𝒟a' N^{β_1(M_3)-1} δ_N[B_e + b_m] e^{(2πi/N)∫_M_3 B_m∪ a'} 𝒵[b_m + a'∧(dτ/L)]
= 𝒵_tH[B_m, -B_e].
Here, we decompose the dynamical 4d ℤ_N gauge field as b_4 = b_m + a'∧(dτ/L).
The 𝒯 transformation is given by
𝒯𝒵_tH[B_e, B_m]
= ∫𝒟a e^{(2πi/N)∫_M_3 B_e∪ a} e^{(2πi/N)∫_M_3 B_m∪ a} 𝒵[B_m + a∧(dτ/L)]
= 𝒵_tH[B_e + B_m, B_m].
Let us choose the generators S,T∈ SL(2,ℤ) to be
S=[ 0 -1; 1 0 ],
T=[ 1 -1; 0 1 ],
so that SL(2,ℤ)=⟨ S,T | S^2=(ST^-1)^3, S^4=1⟩ and C≡ S^2=(ST^-1)^3 corresponds to charge conjugation.
If we write B⃗ = (B_e, B_m)^t as a column vector, then the transformations (<ref>) and (<ref>) can be expressed as
𝒮𝒵_tH[B⃗] = 𝒵_tH[S^{-1}B⃗],   𝒯𝒵_tH[B⃗] = 𝒵_tH[T^{-1}B⃗].
Thus, the 𝒮 and 𝒯 transformations generate the SL(2,ℤ) action on the space of 4d QFTs with the ℤ_N^[1] symmetry.
The explicit form of the action can be easily identified when we use the 't Hooft partition function 𝒵_tH[B_e, B_m], as we just need to perform an SL(2,ℤ) transformation on the background field B⃗ = (B_e, B_m)^t.
For example, we find
(𝒮^{p_1}𝒯^{q_1}𝒮^{p_2}𝒯^{q_2}⋯)𝒵_tH[B⃗]
= 𝒵_tH[(S^{p_1}T^{q_1}S^{p_2}T^{q_2}⋯)^{-1}B⃗].
As we can express
(S^{p_1}T^{q_1}S^{p_2}T^{q_2}⋯) =
[ p q; r s ] ∈ SL(2,ℤ),
with some p,q,r,s ∈ℤ with ps-qr=1, the above relation becomes
(𝒮^{p_1}𝒯^{q_1}𝒮^{p_2}𝒯^{q_2}⋯)𝒵_tH[B_e, B_m]
= 𝒵_tH[ s B_e - q B_m, -r B_e + p B_m ].
Let us now consider two different gapped phases specified by order-N subgroups H_1 and H_2, and assume that there is an isomorphism H_1 ≅ H_2 induced by an automorphism of ℤ_N×ℤ_N.
Then, there should be an SL(2,ℤ) operation that relates H_1 and H_2, and thus these different phases are connected in the web of 𝒮 and 𝒯 operations, as sketched below.
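A minimal numerical sketch of this SL(2,ℤ) action (ours; the helper act and the chosen words of operations are illustrative) uses the matrices S and T of (<ref>) and applies the inverse of a word of operations to the flux pair (B_e, B_m).

```python
# Minimal sketch: S and T as SL(2,Z) matrices acting on (B_e, B_m), checking
# S^2 = (S T^{-1})^3 and S^4 = 1, and composing a word of operations.
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, -1], [0, 1]])
Tinv = np.array([[1, 1], [0, 1]])
I = np.eye(2, dtype=int)

assert np.array_equal(S @ S, np.linalg.matrix_power(S @ Tinv, 3))
assert np.array_equal(np.linalg.matrix_power(S, 4), I)

def act(word, Be, Bm, N):
    """Apply a word such as 'ST' to the fluxes: Z_tH[B] -> Z_tH[M^{-1} B]."""
    M = I.copy()
    for ch in word:
        M = M @ (S if ch == 'S' else T)
    Minv = np.round(np.linalg.inv(M)).astype(int)   # still in SL(2,Z)
    Be2, Bm2 = Minv @ np.array([Be, Bm])
    return Be2 % N, Bm2 % N

print(act('S', Be=1, Bm=0, N=6))   # S maps (B_e, B_m) -> (B_m, -B_e)
print(act('T', Be=1, Bm=2, N=6))   # T maps (B_e, B_m) -> (B_e + B_m, B_m)
```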
§.§ Non-Abelian gapped phases in the 𝒩=1^* SYM theory
𝒩=1^* SYM theory is defined as the mass deformation of the 4d 𝒩=4 SYM theory. It is one of the most interesting theoretical playgrounds for 4d theories as its vacuum structure is very rich <cit.>.
Our goal below is to express the 't Hooft partition functions for the massive vacua of 𝒩=1^* theory.
First, we provide a quick overview of 𝒩=1^* theory with SU(N) gauge group starting with its 𝒩=4 origin.
In the 𝒩=1 notation, the field content of 𝒩=4 SYM theory is a vector multiplet V and three chiral multiplets Φ_i (i=1, 2, 3)
all in the adjoint representation of SU(N). The action of the theory consists of gauge invariant kinetic terms of these fields plus a unique superpotential,
W = (1/g^2) tr( Φ_1 [ Φ_2, Φ_3] ).
The N=1^* theory is obtained from N=4 by adding a mass term:
Δ W = (m/(2 g^2)) tr( Φ_1^2 + Φ_2^2 + Φ_3^2 ).
In the m →∞ limit with arbitrarily small coupling constant at the cut-off m, Im τ(m) →∞, with
Λ^3 = m^3 e^{2πiτ(m)/N} fixed, the theory reduces to pure 𝒩=1 SYM theory, which is believed to be a confining gauge theory. At finite m, the theory has an extremely rich classical and quantum vacuum structure.
Since we take the three masses non-zero, the moduli space of the N=4 theory is completely lifted
and the theory has isolated vacua. The classical vacua are determined by the solutions of the F-term equations, ∂ W/∂Φ_i=0, given by
[Φ_i, Φ_j] = -m ε_ijkΦ_k .
Therefore, the supersymmetric classical vacua can be expressed by three N × N matrices which obey the standard commutation relations for the 𝔰𝔲(2) algebra.
Up to gauge transformations, the classical vacua are described as
Φ'_i ≡ (1/m) Φ_i
= (J^{(d_1)}_i ⊕⋯⊕ J^{(d_1)}_i)_{k_{d_1} copies} ⊕ (J^{(d_2)}_i ⊕⋯⊕ J^{(d_2)}_i)_{k_{d_2} copies} ⊕⋯,
where J^(d)_i are the generators of d-dimensional irreducible representation of 𝔰𝔲(2), k_d denotes its multiplicity, and N=d_1k_d_1+d_2 k_d_2+⋯.
Ignoring discrete factors momentarily, the gauge structure at these vacua is reduced to [⊗_d U(k_d)]/U(1) at the classical level.
If the classical vacuum contains different 𝔰𝔲(2) representations, i.e. N ≠ d k_d, then there will be unbroken U(1)'s in the infrared and it becomes a gapless Coulomb vacuum.
Our primary interest in this paper is the 't Hooft partition functions of massive vacua, and they arise when the Higgs expectation values take the following form:
Φ'_i= 1_N/d⊗ J_i^(d), (i=1, 2, 3),
where d is a divisor of N. As a result, the SU(N) gauge group is Higgsed as
SU(N) → [SU(N/d)×ℤ_N]/ℤ_{N/d},
where SU(N/d) is the remaining continuous gauge group acting on the 1_N/d component, ℤ_N in the numerator is the center subgroup of SU(N), and they need to be divided by the common center ℤ_N/d.
As the classically massless contents describe the 𝒩=1 SU(N/d) SYM theory with emergent ℤ_2(N/d) discrete chiral symmetry, they are expected to be gapped by quantum fluctuations with N/d distinct vacua.
In addition to these confinement dynamics, the remnant discrete ℤ_N/ℤ_N/d≃ℤ_d gauge field describes the topological field theory for deconfined Wilson lines.
As we have this dynamics for each divisor of N, the total number of the massive vacua is given by the divisor function,
σ (N)= ∑_d|N d = ∑_d|N(N/d).
As pointed out in Ref. <cit.>, every gapped state in the Wilson–'t Hooft classification can be realized as one of these vacua in the 𝒩=1^* SYM theory, and we will find their 't Hooft partition functions and study how these vacua behave under and transformations.[A recent paper <cit.> also studies the global properties of the 𝒩=1^* gapped vacua using the analogous S and T operations. ]
Depending on whether the prime factorization of N is square free, the SL(2,ℤ) orbit of the massive vacua shows different features.
In the following, let us work on concrete examples, N=6 and N=4, as demonstrations.
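The counting can be organized per divisor with a few lines of bookkeeping (ours, illustrative): each divisor d contributes N/d chirally broken vacua of the unbroken SU(N/d) factor, and the totals reproduce σ(4)=7 and σ(6)=12.

```python
# Bookkeeping sketch for the massive vacua of N=1* SU(N) SYM: each divisor d of N
# gives a Higgsing with N/d chirally broken vacua, so the total is sigma(N).
def massive_vacua(N):
    rows = []
    for d in range(1, N + 1):
        if N % d == 0:
            rows.append((d, N // d))       # (irrep size d, number of vacua N/d)
    return rows

for N in (4, 6):
    rows = massive_vacua(N)
    total = sum(count for _, count in rows)
    print(f"SU({N}):", rows, "-> sigma(N) =", total)
# SU(4): [(1, 4), (2, 2), (4, 1)] -> sigma(N) = 7
# SU(6): [(1, 6), (2, 3), (3, 2), (6, 1)] -> sigma(N) = 12
```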
§.§.§ Massive vacua of 𝒩=1^* SU(6) SYM theory (the case with N square-free)
Let us consider SU(6) gauge theory as a pedagogical example.
Positive divisors of N=6 are d=1,2,3, and 6, and we get the following table for the Higgs expectation values to have massive vacua:
Higgs vev          | unbroken gauge group | unbroken (ℤ_6^[1])_4d | 𝒵_tH[B_e, B_m]
J_i^(d=6)          | ℤ_6                  | 1                     | δ_6[B_m]
1_2 ⊗ J_i^(d=3)    | SU(2)×ℤ_3            | (ℤ_2^[1])_4d          | δ_6[2B_m, 3B_e + kB_m]
1_3 ⊗ J_i^(d=2)    | SU(3)×ℤ_2            | (ℤ_3^[1])_4d          | δ_6[3B_m, 2B_e + kB_m]
1_6 ⊗ J_i^(d=1)=0  | SU(6)                | (ℤ_6^[1])_4d          | δ_6[B_e + kB_m]
Let us discuss each of these vacuum structures to obtain the 't Hooft partition functions and their relation under SL(2,ℤ).
Let us first discuss the Higgs phase, where Φ'_i=J_i^(6) and the gauge group is Higgsed to the center ℤ_6.
We note that this classical vacuum cannot be consistent with the nonzero magnetic flux B_4≠0, and a vortex must be created to satisfy the boundary condition. Thus, the partition function with nonzero flux becomes exponentially small,
𝒵[B_4] = δ_6[B_4],
and the corresponding 't Hooft partition function is given by
𝒵_tH[B_e, B_m] = δ_6[B_m].
Indeed, in this Higgs phase, the electrically charged particles associated with the triplet of adjoint scalars condense, and we naturally expect the confinement of magnetic charges.
In the totally confined phase, the Higgs expectation values are zero, Φ'_i=1⊗ J^(1)_i=0, and the ℤ_6^[1] symmetry is unbroken.
At the classical level, gapless gluons and gluinos are associated with the SU(6) gauge group with 𝒩=1 supersymmetry, and quantum fluctuations should generate confinement.
In the 𝒩=1^* theory, the counterpart of discrete chiral symmetry of 𝒩=1 SYM does not exist as the potentially anomalous U(1) symmetry is broken at the classical level by the superpotential (<ref>), but the low-energy effective theory for the confinement phase acquires an emergent ℤ_2N axial symmetry.
As computed in (<ref>), the 't Hooft partition functions for these gapped chiral-broken vacua are given by
𝒵_tH[B_e, B_m] = δ_6[B_e + k B_m],
where k=1,…, N specifies the phase of the gluino condensate and the vacua are cyclically permuted by the 𝒯 operation.
Another important remark is that the k=0 confining state is dual to the Higgs state by the 𝒮 operation, as expected from electromagnetic duality.
The analyses of the remaining two cases are similar, so let us focus on the case of Φ'_i=1_2⊗ J^(3)_i, where the gauge group becomes
SU(6) → SU(2)×ℤ_3.
The Higgs expectation value Φ'_i=1_2⊗ J^(3)_i is compatible with the magnetic fluxes ∫_Σ2π/6B_4∈πℤ, so the partition function should contain the factor δ_6[2 B_4].
The possible 't Hooft partition functions for gapped phases are then given by
δ_6[2B_m, 3B_e + kB_m],
with k∼ k+2. We note that these two states are related by the 𝒯 operation since
δ_6[2B_m, 3B_e] ⟷ δ_6[2B_m, 3B_e + 3B_m] = δ_6[2B_m, 3B_e + B_m].
Let us discuss why this is the case. To proceed, we need to study the confinement dynamics of the effective 𝒩=1 SU(2) SYM theory, and we should note that the θ angle for the SU(2) gauge group becomes θ_eff=3θ in the Higgsing (<ref>).
The 𝒯 operation, θ→θ+2π, acts as θ_eff→θ_eff+6π = θ_eff+2π (mod 4π), and the chiral broken vacua for the SU(2) theory are indeed related by this transformation.
The complete list of the 't Hooft partition functions for gapped phases is given in Fig. <ref>, where we also show how the SL(2,ℤ) operations relate those phases.[This figure has the same structure as the S,T-duality orbit for 𝒩=4 SYM of the gauge Lie algebra 𝔤=𝔰𝔲(6) in Ref. <cit.>, and some readers may be confused about the difference between them. The key difference is that Ref. <cit.> discusses the global structure of the gauge group itself, which is about the kinematics not the dynamics, while we here discuss the vacuum structure of the SU(6) gauge theory. In particular, in Ref. <cit.>, one constructs genuine line operators in (SU(N)/ℤ_n)_k theory, while in our case, we are depicting the dyonic charges that are screened in the SU(N) theory. ]
All the gapped states are exchanged by the 𝒮 and 𝒯 operations.
This is a general fact when N is a square-free integer, and we demonstrate it here for N=6 as an example.
In this figure, we use the fact that the 't Hooft partition function for this case can be written using a single delta functional.
For example, (<ref>) contains two different delta functions, but it can be equivalently rewritten as
δ_6[2B_m, 3B_e + kB_m] = δ_6[3B_e + kB_m]
for k=1,2 (since k∼ k+2, we can always choose representatives this way).
This is related to the fact that all the order-N subgroups of ℤ_N×ℤ_N are isomorphic to ℤ_N when N is square free.[Let us prove this fact using the 't Hooft partition function (in a physically intuitive way). We start from the 't Hooft partition function, δ_N[nB_m, (N/n)B_e + kB_m], and we assume that (n, N/n)=1, which is always the case if N is square free. The 𝒯 transformation shifts k→ k+(N/n), and this surveys all possible k∼ k+n due to (n,N/n)=1. In particular, we can reach the vacuum δ_N[nB_m, (N/n)B_e + B_m] = δ_N[(N/n)B_e + B_m], and its 𝒮 transformation gives one of the totally confined phases, δ_N[B_e - (N/n)B_m]. ]
§.§.§ Massive vacua of 𝒩=1^* SU(4) SYM theory (the case with N not square-free)
When N contains squares in its prime factorization, the SL(2,ℤ) structure of massive vacua has disconnected components. Let us discuss N=4 as the simplest example.
The massive vacua are listed in the following table:
Higgs vev          | unbroken gauge group | unbroken (ℤ_4^[1])_4d | 𝒵_tH[B_e, B_m]
J_i^(d=4)          | ℤ_4                  | 1                     | δ_4[B_m]
1_2 ⊗ J_i^(d=2)    | [SU(2)×ℤ_4]/ℤ_2      | (ℤ_2^[1])_4d          | δ_4[2B_m, 2B_e + kB_m]
1_4 ⊗ J_i^(d=1)=0  | SU(4)                | (ℤ_4^[1])_4d          | δ_4[B_e + kB_m]
The analysis of the Higgs and totally confining phases is completely the same as we have done for N=6, so let us here focus on the case Φ'_i=1_2⊗ J_i^(2).
The Higgs expectation value Φ'_i=1_2⊗ J_i^(2) causes the Higgsing of the gauge group,
SU(4) → [SU(2)×ℤ_4]/ℤ_2,
and it breaks 1-form symmetry as
(ℤ_4^[1])_ 4d→ (ℤ_2^[1])_ 4d.
The SU(2) gauge group is confined in the infrared and it exhibits two vacua due to the spontaneous breaking of emergent chiral symmetry. These two phases are distinguished by the SPT actions of the unbroken (ℤ_2^[1])_4 symmetry, and thus their 4d partition functions are given by
𝒵[B_4] = δ_4[2B_4] exp( (2πi k/2)∫ (1/2)P_2(B_4/2) ),
with k∼ k+2. The corresponding 't Hooft partition functions are given by
𝒵_tH[B_e, B_m] = δ_4[2B_m, 2B_e + kB_m].
Importantly, these chiral broken vacua are not related by the 𝒯 transformation, B_e→ B_e + B_m, and each chiral-broken vacuum is invariant under 𝒯.
To understand their invariance from the microscopic viewpoint, we should notice that the effective θ angle of the SU(2) theory is given by θ_eff=2θ using the θ angle of the SU(4) theory.
Under the 2π shift of θ, the effective θ angle is shifted as θ_eff→θ_eff+4π, and each chiral-broken vacua of 𝒩=1 SU(2) SYM theory is invariant under this operation.
In Fig. <ref>, we give the complete list of the 't Hooft partition functions for gapped vacua and their relations under the 𝒮 and 𝒯 operations.
We also show the set of deconfined lines of each gapped vacua.
Unlike the SU(6) case, there are disconnected components in the duality web of 𝒮 and 𝒯 operations for the SU(4) theory, and in particular, 𝒵_tH[B_e, B_m] = δ_4[2B_m, 2B_e] is invariant under both the 𝒮 and 𝒯 operations.
The presence of disconnected components is a general feature for cases with N not square free, and SU(4) is an illustrative example.
§ SUMMARY AND DISCUSSION
In this paper, we introduce the temporal gauging for 4d QFTs with ℤ_N^[1] symmetry and define the 't Hooft partition function.
This operation does not respect the 4d Lorentz invariance, but it introduces the spatial ℤ_N^[1]×ℤ_N^[1] symmetry, and thus the spatial Wilson and 't Hooft lines become genuine line operators (while all the temporal line operators are no longer genuine).
This allows us to justify the classification of gapped phases using the Wilson–'t Hooft criterion, and we establish its 1-to-1 correspondence with the spontaneous breakdown of 4d ℤ_N^[1] symmetry enriched with the SPT phase of the unbroken 1-form symmetry.
In other words, our argument justifies the use of dyonic line operators to classify the SPT phases for the vacua of 4d gauge theories.
This may be reminiscent of the use of the string order parameter <cit.> to characterize the Haldane gap or AKLT state <cit.>.
The Kennedy–Tasaki (KT) transformation <cit.> maps this nonlocal string order parameter to a local correlation function, and the SPT nature of the AKLT state can be understood as the spontaneous breaking of hidden (or dual) ℤ_2×ℤ_2 symmetry (see Ref. <cit.> for its field-theoretic description from a modern viewpoint).
In this context, we may say that the temporal gauging of 4d ℤ_N^[1] symmetry gives a suitable KT transformation for 4d gauge theories to classify their gapped phases solely by spontaneous symmetry breaking.
Honestly, it is quite astonishing that 't Hooft had already introduced all the essential ingredients in Refs. <cit.> before any of these developments.
In this paper, we compute the 't Hooft partition function for the simplest 4d topological states and make the connection with the Wilson–'t Hooft criterion.
As the general 4d topological states are known to be described by the 2-group gauge theories <cit.>, it would be an interesting future study to compute the 't Hooft partition functions of these topological states, which would give us better understanding of 4d gapped phases.
The work of Y. T. was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant numbers, 22H01218, and by Center for Gravitational Physics and Quantum Information (CGPQI) at Yukawa Institute for Theoretical Physics.
The work of M. Ü. was supported by U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Number DE-FG02-03ER41260.
§ DUALITY EQUATION ON T4 AND CLASSIFICATION OF 4D GAPPED PHASES
In the main text, we have taken the point of view that the mutual locality condition (<ref>) is nothing but the statement that the spontaneous breakdown of 1-form symmetry be consistent with 4d Lorentz invariance. Here, we review 't Hooft's original derivation of this condition <cit.>, which is quite insightful and also interesting for its elementary character.
We take the Euclidean spacetime manifold to be a flat four-torus with L_{μ=1,…,4} the circumferences of the various circles in T^4 = (S^1)^4. With this topology, the background gauge fields A, B_m, B_e are associated with triplets of mod-N integers n⃗ = (n_1,n_2,n_3) ≡ (n_{14},n_{24},n_{34}), m⃗ = (m_1,m_2,m_3) ≡ (n_{23},n_{31},n_{12}), e⃗ = (e_1, e_2, e_3) ≡ (e_{23},e_{31},e_{12}) via
B_4 = B_m + A∧(dτ/L_4) = ∑_{i<j} n_{ij} dx_i∧dx_j/(L_i L_j) + ∑_i n_{i4} dx_i∧dτ/(L_i L_4),
B_e = ∑_{i<j} e_{ij} dx_i∧dx_j/(L_i L_j).
Accordingly, we shall write the ordinary and 't Hooft partition functions as
𝒵[B_4] ≡ 𝒵[n⃗,m⃗],   𝒵_tH[B_e, B_m] ≡ 𝒵_tH[e⃗, m⃗],
and the relation between them as[When we use the definition (<ref>), the canonical normalization is given by 1/|H^0(M_3;ℤ_N)|=1/N instead of 1/N^3, and the inverse operation ∫ has the normalization factor |H^0(M_3;ℤ_N)|/|H^1(M_3;ℤ_N)|=1/N^β_1(M_3)-1=1/N^2. Here, we follow the original normalization by 't Hooft that assigns 1/N^3 for the 't Hooft partition function.
The following argument basically works in both normalizations, but we would like here to note that the normalization of (<ref>) becomes 1 instead of N^β_1(M_3)-1 in this convention. ]
𝒵_tH[e⃗,m⃗] = (1/N^3) ∑_{n⃗} exp( (2πi/N) e⃗·n⃗ ) 𝒵[n⃗,m⃗].
4d Lorentz invariance provides an important constraint on the 't Hooft partition function. In particular, let us consider the Euclidean Lorentz transformation
Λ =
[ 0 1 0 0; -1 0 0 0; 0 0 0 -1; 0 0 1 0 ],
then the magnetic flux is transformed as (n'_ij)=Λ^t (n_ij)Λ.
This has the effect of interchanging the pairs (n_1,n_2) and (m_1,m_2) as we have
[ -m'_2 n'_1; m'_1 n'_2 ]
=
[ 0 -1; 1 0 ][ -m_2 n_1; m_1 n_2 ][ 0 -1; 1 0 ],
while keeping n_3,m_3 fixed. Using the notation q̃ ≡ (q_1,q_2) for a three-vector q⃗ = (q_1,q_2,q_3), the covariance of the ordinary partition function under the above Lorentz transformation reads (we also have L_1 ↔ L_2, L_3 ↔ L_4, but this is suppressed in our notation)
𝒵[ñ, n_3; m̃, m_3] = 𝒵[m̃, n_3; ñ, m_3].
After Fourier transformation, we then obtain
𝒵_tH[ẽ, e_3; m̃, m_3]
= (1/N^2) ∑_{ẽ', m̃'} exp( (2πi/N)( ẽ·m̃' - m̃·ẽ' ) ) 𝒵_tH[ẽ', e_3; m̃', m_3],
which is what 't Hooft calls the “duality relation.”
To obtain the constraints on the gapped vacua, 't Hooft makes a technical assumption: if the vacuum is gapped, then the ratio 𝒵_tH[e⃗,m⃗]/𝒵_tH[0⃗,0⃗] should approach either 0 or 1 as L_{μ=1,…,4}→∞.
Although this seems to be a nontrivial assumption, we have checked in (<ref>) that it is actually valid for the ℤ_N/n topological states with a ℤ_n SPT phase.
It would be nice if we could prove/disprove it for general 4d topological states with ℤ_N^[1].
Under the above assumption, a gapped phase is then characterized by the set of all fluxes that are `light,' i.e., the set of fluxes (e⃗,m⃗) with 𝒵_tH[e⃗,m⃗]/𝒵_tH[0⃗,0⃗] → 1.
The possible sets of light fluxes are strongly constrained by the duality relation (<ref>).
As shown in Ref. <cit.>, any two light fluxes (e⃗,m⃗), (e⃗',m⃗') with e_3 = e_3', m_3 = m_3' must satisfy
ẽ·m̃' - m̃·ẽ' = 0 mod N,
and there are either exactly N^2 or 0 light fluxes out of the possible N^4 fluxes with given e_3, m_3.
[Proof] Let us fix the normalization 𝒵_tH[0⃗,0⃗] = 1.
Suppose (ẽ,e_3; m̃,m_3) is a light flux.
Let us first establish that (0̃,e_3; 0̃,m_3) is also light. To see this, we note that
1 = 𝒵_tH[ẽ,e_3; m̃,m_3]
= (1/N^2) ∑_{ẽ', m̃'} exp( (2πi/N)( ẽ·m̃' - m̃·ẽ' ) ) 𝒵_tH[ẽ',e_3; m̃',m_3]
≤ (1/N^2) ∑_{ẽ', m̃'} 𝒵_tH[ẽ',e_3; m̃',m_3]
= 𝒵_tH[0̃,e_3; 0̃,m_3] ≤ 1,
where the second step uses the duality relation, the third step uses the positivity of 𝒵_tH, and the fourth step uses the duality relation again. So indeed 𝒵_tH[0̃,e_3; 0̃,m_3] = 1.
Replacing the final inequality with an equality in (<ref>), we find
(1/N^2) ∑_{ẽ', m̃'} 𝒵_tH[ẽ',e_3; m̃',m_3]
= 𝒵_tH[0̃,e_3; 0̃,m_3] = 1.
Clearly, this can hold only if precisely N^2 of the N^4 terms on the left-hand side are equal to 1, with all the remaining terms equal to 0. In other words, we have established that precisely N^2 among the N^4 fluxes (ẽ',e_3; m̃',m_3) are light.
Let us now look at the duality relation again,
1 = 𝒵_tH[ẽ,e_3; m̃,m_3]
= (1/N^2) ∑_{ẽ', m̃'} exp( (2πi/N)( ẽ·m̃' - m̃·ẽ' ) ) 𝒵_tH[ẽ',e_3; m̃',m_3].
As exactly N^2 of the 𝒵_tH[ẽ',e_3; m̃',m_3] equal 1 while all the others vanish, this equality can be true iff exp( (2πi/N)( ẽ·m̃' - m̃·ẽ' ) ) = 1 for each light flux (ẽ',e_3; m̃',m_3).
□
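A small numerical check (ours, for a toy value of N and with e_3, m_3 suppressed) confirms both the duality relation and the count of light fluxes for the totally confined state 𝒵_tH = δ_N[ẽ].

```python
# Toy check of the duality relation and the "exactly N^2 light fluxes" count,
# using the confined state Z_tH[e~, m~] = delta_N[e~] on two transverse components.
import numpy as np
from itertools import product

N = 3
def Z(e, m):                       # confined state: light iff e~ = 0
    return 1.0 if e == (0, 0) else 0.0

def dual(e, m):                    # right-hand side of the duality relation
    acc = 0.0 + 0.0j
    for ep, mp in product(product(range(N), repeat=2), repeat=2):
        phase = (e[0] * mp[0] + e[1] * mp[1]) - (m[0] * ep[0] + m[1] * ep[1])
        acc += np.exp(2j * np.pi * phase / N) * Z(ep, mp)
    return acc / N**2

fluxes = list(product(product(range(N), repeat=2), repeat=2))
assert all(abs(dual(e, m) - Z(e, m)) < 1e-12 for e, m in fluxes)
light = [(e, m) for e, m in fluxes if Z(e, m) > 0.5]
print(len(light), "light fluxes out of", len(fluxes))   # N^2 = 9 out of N^4 = 81
```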
|
http://arxiv.org/abs/2306.12187v1
|
20230621113340
|
Hydrophobically gated memristive nanopores for neuromorphic computing
|
[
"Gonçalo Paulo",
"Ke Sun",
"Giovanni di Muccio",
"Alberto Gubbiotti",
"Blasco Morozzo della Rocca",
"Jia Geng",
"Giovanni Maglia",
"Mauro Chinappi",
"Alberto Giacomello"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.soft"
] |
Hydrophobically gated memristive nanopores for neuromorphic computing
Gonçalo Paulo^1, Ke Sun^{2,3}, Giovanni Di Muccio^1, Alberto Gubbiotti^1, Blasco Morozzo della Rocca^4, Jia Geng^3, Giovanni Maglia^2, Mauro Chinappi^5, Alberto Giacomello^1 ([email protected])
[1] Department of Mechanics and Aerospace Engineering, Sapienza University of Rome, Via Eudosiana 18, Rome, 00184, Italy
[2] Chemical Biology Department, Groningen Biomolecular Sciences & Biotechnology Institute, Nijenborgh 7, Groningen, 9700 CC, The Netherlands
[3] Department of Laboratory Medicine, State Key Laboratory of Biotherapy and Cancer Center, Med+X Center for Manufacturing, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, 610041, China
[4] Dipartimento di Biologia, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Rome, 00133, Italy
[5] Dipartimento di Ingegneria Industriale, Università di Roma Tor Vergata, Via del Politecnico 1, Rome, 00133, Italy
Brain-inspired computing has the potential to change the current paradigm
based on the von Neumann architecture,
boosting machine learning applications.
Signalling in our brains relies on voltage-gated ion channels, which have the electrical behaviour of memristors, a resistor with memory. We use molecular dynamics simulations, continuum models, and electrophysiological experiments to investigate the concept of a bioinspired hydrophobically gated memristive nanopore.
We found that hydrophobic gating enables memory by an electrowetting mechanism,
for which we formulate simple design rules.
We engineered a biological nanopore, to produce the typical hysteresis cycles of a memristor. This route can be used to realise compact, economical, and flexible bioinspired memristors.
July 31, 2023
=================
With the current upsurge in the production and deployment of artificial intelligence technologies, it has become critical<cit.> to circumvent the bottleneck associated with processing and storing data in separate units, which is specific to the von Neumann computer architecture<cit.>. Biology, which initially motivated the birth of artificial neural networks, is currently serving as a source of additional inspiration for a different paradigm in computer architectures, neuromorphic computing, which could boost the performance and sustainability of artificial intelligence <cit.>.
Neuromorphic computing, as the name suggests, is shaped after the architecture of the brain, in which storage and processing of data happen at the same place <cit.>.
The most advanced technologies to date <cit.> implement this paradigm exploiting semiconductors; their applicability for machine learning systems has already been demonstrated <cit.>. Even though these approaches have significantly lowered the power consumption of typical neuromorphic calculations, they are still far from the performance of biological neurons
<cit.>.
The brain indeed requires just a few watts to run and its basic operations are orchestrated by nanofluidic devices
– ion channels <cit.> –
transmembrane proteins which transmit signals in the form of ion currents.
The non-linear behaviour that is essential for brain functions originates in the history-dependent conductance of the ion channels that are found in neurons, enabling the action potential, as first explained by Hodgkin and Huxley<cit.>. Specifically, ion channels in neurons can “gate”, i.e., switch on or off, depending on the transmembrane potential <cit.>.
Voltage gating typically occurs by complex action-at-a-distance mechanisms in which information is propagated from a voltage sensor domain to the ion-permeable pore, which is actuated by sterical occlusion <cit.>.
From an electrical standpoint, ion channels behave as memristors (memory resistors) <cit.>, circuital elements whose resistance depends on the internal state of the system <cit.>.
Different architectures have been proposed to produce iontronic nanofluidic memristors <cit.>, in which ions act as charge (and information) carriers instead of electrons. Iontronics platforms have the potential of being multichannel, as their natural counterpart <cit.>, with information flowing in parallel through the same circuit encoded by different ions.
In this work we propose a hydrophobically gated memristive nanopore (HyMN),
with an architecture inspired by biological ion channels;
a drastic simplification is introduced in the gating mechanism,
which relies on the formation of nanoscale bubbles to switch the ion currents thus requiring no moving parts <cit.>.
Voltage can be used to control the conductance of the nanopore imparting
memory by electrowetting. The HyMN prototype that we demonstrate based on an engineered nanopore, FraC, produces the pinched hysteresis loop in the voltage-current curve which is the signature of memristors <cit.>.
This robust and flexible design combines the advantage of being an iontronic memristor with the simplicity of a 1D system, showing promise as the basic element for innovative nanofluidic computing.
§ FROM WET/DRY BISTABILITY TO MEMRISTIVE BEHAVIOUR
§.§ Electrowetting of a single nanopore
To show how hydrophobic gating can enable memristive behaviour, we consider a simple nanopore model (Fig. <ref>a), consisting of a hydrophobic cylinder with a diameter of 1 nm and a length of 2.8 nm, mimicking the sizes of biological nanopores
<cit.>.
When immersed in water,
the nanopore lumen can be found either in the dry or in the wet state (Fig. <ref>a), due to its small size and hydrophobicity <cit.>.
The dry state is characterised by the presence of a vapour bubble, which precludes the flow of water and ions, resulting in a non-conductive (gated) pore
<cit.>.
The wet and dry states correspond to two different minima of the free energy, separated by a barrier.
In the following, we will refer to the global minimum as the stable/most probable state, while the metastable state corresponds to the local minimum.
The full (equilibrium) free energy profile,
obtained by Restrained Molecular Dynamics (RMD)<cit.>,
is reported in Fig. <ref>b (solid black line),
and in Supp. Fig. S1.
For our model pore,
the global free energy minimum corresponds to the dry (non-conductive) state;
the free energy barrier for wetting is about 18 k_BT,
while the drying one is less than 5 k_BT.
By applying an external voltage
Δ V across the nanopore,
it is possible to shift the free energy profile towards the wet state (Fig. <ref>b) thereby changing its conductance,
for details see Supp. Text S1 and Supp. Fig. S2.
The origin of this effect is electrowetting –
the electric field favours the wetting of the pore by electrostricting the water meniscus <cit.>.
The voltage at which the stable state switches from
the dry to the wet is indicated as V_c.
For Δ V > V_c, the system will preferably be in the wet, conductive state. In analogy to electronic memristors <cit.>, the voltage-dependence of the ionic conductance of the nanopore shown in Fig. <ref>a is the crucial ingredient for developing a hydrophobically gated memristor.
In Fig. <ref>c we report the wetting and drying transition rates (k_w and k_d, respectively) computed at different Δ V, which are fundamental to assess the memory behaviour of the system; the protocol to accurately estimate these rates is discussed in Supp. Text S2 and Supp. Fig. S3. Indeed, the emergence of memory is due to the finite time that the system takes to transition from the metastable state to the stable state. Consider for example a pore which, at a moment τ_0, is in the dry state:
by switching instantaneously the voltage to Δ V> V_c,
the system will “remember” the previous dry state for a certain time τ_w=1/k_w.
In this dry state ions cannot translocate through the pore
and the nanopore is non-conductive even if Δ V > V_c.
However, if the previous condition of the system was wet, at the same voltage the nanopore would be conductive.
In the next section,
we will show how the dynamic modulation of the wet/dry bistability of an ensemble of HyMNs generates a pinched I-V loop, the hallmark of memristors.
§.§ Collective Behaviour and Pinched Hysteresis Loop
Figure <ref>a-c shows that a single model pore can only be observed in a conductive (wet)
or a non-conductive (dry) state.
Instead, an array (ensemble) of pores would have a distribution of wet and dry pores, whose ratio depends, inter alia, on the applied voltage. The transition from single pore to the ensemble behaviour is discussed in Supp. Fig. S4, showing that just some tens of pores are needed to observe a continuous response as opposed to a stochastic one.
The average per-pore conductance of an ensemble of pores, G = I/(N_p Δ V), with I the total current, N_p the number of pores, and Δ V the applied voltage at a given moment, is given by
G(Δ V, t) = g_0 n(Δ V, t)
,
where g_0 is the single wet pore conductance and
n the probability that a single pore is wet.
n is history dependent and,
in the limit of an infinite number of pores,
its evolution can be described by a master equation
d n/dt = (1-n) k_w -n k_d
,
with k_w/d(Δ V) the voltage-dependent wetting/drying rates in
Fig. <ref>c.
In Fig. <ref>d we report three current-voltage (IV) curves
obtained by the numerical integration of
Eqs. <ref> and <ref>
under a saw-tooth potential at different cycling frequencies.
The picture shows that an array of HyMNs has three possible regimes:
i) at low frequencies (10 Hz, orange line), the array behaves as a non-linear resistor, because the system has enough time to visit both the wet and dry states with the equilibrium probabilities.
ii) at high frequencies (100 MHz, dashed pink), the system behaves as an ohmic resistor with finite or infinite resistance, depending on its initial wet or dry state, respectively; in this regime the voltage variation is too fast to allow to move away from the local equilibrium.
iii) at intermediate frequencies (10 kHz, blue) the system displays a pinched-loop hysteresis, i.e., memristive behaviour. This happens because the cycling frequency does not allow a complete equilibration of all the pores of the array to their stable state.
As a consequence,
the number of the wet pores at a given moment
strongly depends on the previous state, i.e., the system has memory.
For instance, starting with all dry pores,
the total current will increase with increasing voltage,
but with some delay as compared to the equilibrium wet-pore probability for Δ V > V_c;
cf. the blue and orange lines of Fig. <ref>d.
The inset of Fig. <ref>d shows that the memristive behaviour is observed over a rather broad range of frequencies; in this example, 10^2<f<10^7 Hz. A number of parameters can influence this range and the location of the maximum, see also Supp. Fig. S5.
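A minimal sketch of Eqs. (<ref>) and (<ref>) is given below (ours, illustrative). The exponential voltage dependence of the rates and every numerical parameter are assumed for demonstration only; they are not the values fitted in this work, but integrating the master equation under a saw-tooth voltage produces the same qualitative pinched loop at an intermediate cycling frequency.

```python
# Illustrative integration of dn/dt = (1 - n) k_w(V) - n k_d(V) under a saw-tooth
# voltage; I = N_p * g0 * n * V.  All parameter values and the exponential form of
# the voltage-dependent rates are assumed for demonstration, not fitted to data.
import numpy as np
import matplotlib.pyplot as plt

g0, Np = 1e-9, 100                        # single wet-pore conductance [S], number of pores
k_w0, k_d0, Vw, Vd = 1e1, 1e4, 0.15, 0.5  # assumed rate prefactors [1/s] and voltage scales [V]
k_w = lambda V: k_w0 * np.exp(abs(V) / Vw)    # electrowetting promotes wetting
k_d = lambda V: k_d0 * np.exp(-abs(V) / Vd)   # voltage suppresses drying

def saw_tooth(t, f, Vmax=1.5):            # triangular sweep between +Vmax and -Vmax
    phase = (t * f) % 1.0
    return Vmax * (4 * abs(phase - 0.5) - 1)

f = 1e4                                   # intermediate cycling frequency [Hz]
t = np.linspace(0, 2 / f, 20000)
dt = t[1] - t[0]
n = np.zeros_like(t)                      # start with all pores dry
for i in range(1, len(t)):
    V = saw_tooth(t[i - 1], f)
    dn = (1 - n[i - 1]) * k_w(V) - n[i - 1] * k_d(V)
    n[i] = np.clip(n[i - 1] + dn * dt, 0.0, 1.0)

V = saw_tooth(t, f)
plt.plot(V, Np * g0 * n * V)              # pinched hysteresis loop
plt.xlabel("voltage (V)"); plt.ylabel("current (A)"); plt.show()
```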
§.§ Design criteria for HyMNs
The previous analysis demonstrated
that a pinched hysteresis loop
– the fingerprint of memristors –
can be produced by an ensemble of hydrophobically gated nanopores.
Based on the physical insights into the gating mechanism, we identify four design criteria that a nanopore must satisfy to behave as an efficient HyMN:
* The pore must be preferentially dry at Δ V = 0;
* The pore must undergo electrowetting before the maximum voltage Δ V^* that the system can sustain; e.g., for biological pores embedded in lipid membranes no more than 300 mV can usually be applied <cit.>, while solid-state membranes can bear voltages up to some volts, depending on thickness and other parameters <cit.>;
* The pore must dry “quickly” at Δ V=0 to ensure a fast transition from the wet state to the dry state;
* The pore must wet “quickly” at the maximum voltage Δ V^*
to ensure a fast transition from the dry state to the wet state.
The four conditions above require the fine-tuning of a non-linear combination of different physical properties of the system, like the contact angle between the solid and the liquid-vapour interface, the radius and length of the nanopore, and the susceptibility of the pore to wetting by applying a voltage. To explore how different geometries and physical parameters affect the wetting and drying dynamics, we constructed a macroscopic model based on classical nucleation theory to estimate the wetting and drying rates, taking into account the effect of the voltage;
the full details of the model are described
in Supp. Text S5.
Within this model,
we find that the range of parameters satisfying the previously expressed requirements is restricted to narrow (sub)nanometer-sized pores and to aspect ratios close to unity,
see Fig. <ref>a.
The drying time depends mostly on the diameter of the pore and its contact angle, while the wetting time depends also on the length of the pore. These characteristic times restrict the size of the pore to the nanoscale, as pores with larger diameters would not dry once wet and longer pores would require too high voltages to wet.
The contact angle is the dominant factor controlling the allowed aspect ratio to have a functioning HyMN,
see Fig. <ref>b.
Some biological channels fall in or near the region where hydrophobic gating is possible, and in fact some are known to do so, like the CRAC <cit.> and
BK <cit.> channels.
In the next section, we explore the biological FraC channel, whose approximate dimensions are represented by a white ellipse in Fig. <ref>b.
When allowing for higher maximum voltages Δ V^*,
the range of aspect ratios can be significantly expanded,
see Fig. <ref>c;
the ellipse denotes the approximate position in parameter space of the model pore in Fig. <ref>,
which indeed displays wetting around Δ V = 1.2 V.
§.§ A biological HyMN example: the engineered FraC nanopore
To put the above predictions to the test,
we engineered a biological nanopore, namely,
the Fragaceatoxin C (FraC). The wild type FraC is a biological toxin found in
the sea anemone Actinia fragacea <cit.>, which has been recently used in nanopore single-molecule sensing <cit.>; its stability allows the pore to be easily engineered by introducing different point mutations in its constriction <cit.>.
Here, we studied the wetting and drying properties of the heptameric double mutant G13F/G6F
in which two hydrophobic residues are introduced in the narrowest region of the pore (the constriction),
see Fig. <ref>a and Supp. Fig. S6.
Experimentally, an intense flickering between two current levels is observed at pH 3.8 (black line in Fig. <ref>b), which is not seen in the wild type.
The highest level corresponds to the stable wet pore current at pH 7 (red line)
for which no flickering is observed during the recording.
Hence, we hypothesised that
the current blockages observed at low pH
are caused by hydrophobic gating, which is induced by the neutralisation of charges in the pore constriction:
the protonation of aspartic acid D10, highlighted in Fig. <ref>a,
creates an uncharged region extending for ca. 1.2 nm (3-4 rings of amino acids)
that is mostly hydrophobic, allowing the formation of a stable vapour bubble inside the pore.
The hydrophobic gating hypothesis in the FraC mutant is confirmed by RMD simulations of the wetting process.
Figure <ref>c reports the pore filling free-energy profile,
showing that, at pH 3.8, with D10 completely neutralised,
the system exhibits two free-energy minima,
with the dry state being the most favoured one as expected from the design guidelines in Fig. <ref>b.
On the other hand, at pH 7 (red line), with charged D10,
the system displays a single free-energy minimum, the wet (conductive) state;
both results are coherent with the experiments in Fig. <ref>b.
Figure <ref>d reports the average experimental IV curve,
from which the capacitive current, e.g., due to the membrane, was subtracted out – see Supp. Text S4 and Supp. Figs. S8-S12.
The system clearly shows a pinched hysteresis loop, characteristic of memristors.
The asymmetric response of the system, under opposite applied voltages,
is likely to originate in the conical shape and asymmetric charge distribution of the FraC nanopore, as previously reported for other asymmetric geometries <cit.>.
We used the same protocol as in Fig. <ref>a to compute the effect of voltage on the simulated free-energy profile, finding that hydrophobic gating indeed displays the same asymmetric behaviour as in the experiments (Supp. Fig. S7).
In summary, by leveraging the wetting properties of a mutated FraC nanopore,
we demonstrated the potentiality of the proposed nanofluidic memristors,
which exploit hydrophobic gating to induce memory.
HyMNs have the advantages of compactness, simplicity (having no moving parts nor allosteric gating mechanisms), durability, and high reproducibility.
The mutagenesis approach can be easily extended to other well studied nanopores having different radii and lengths, such as α-Hemolysin <cit.>, Aerolysin <cit.>, CsgG <cit.>, or artificial de-novo β-barrel nanopores <cit.>
that can have other dynamical characteristics.
Moreover, solid-state nanopores can be easily grafted with hydrophobic groups <cit.>
and, together with engineered biological nanopores,
can pave the way to the next generation of highly tunable nanofluidic memristors.
§.§ Neuromorphic computing using HyMNs
Here, we showcase the potential of HyMNs in neuromorphic computing applications by proposing two simple circuits which exploit the properties of our basic HyMN element of Fig. <ref>a.
Neuromorphic computing has garnered significant attention as it promises to transcend the capabilities of digital computers by emulating the complex behaviour of neurons.
The first circuit, composed of a signal generator, a capacitor (the membrane itself), and the memristor, exhibits the behaviour of a stochastic, leaky, integrate and fire neuron, Fig. <ref>a.
Upon receiving voltage signal pulses (V_in, top panel),
the capacitor charges, increasing the voltage at its terminals
Δ V_C (black line).
The memristor is modelled by the HyMN shown in Fig. <ref>,
which has a non-conductive, dry state and a conductive, wet state.
As the charge of the capacitor increases, the probability of pore wetting increases, eventually resulting in the discharge of the capacitor,
producing a current spike (I_out, red line)
at the output terminal.
The stochastic nature of pore wetting allows for spikes
below or above the critical voltage of V_c=1.2 V.
Several extrinsic and intrinsic parameters influence the system response: e.g., the intensity of the signal pulse affects the spiking frequency, while the area of the membrane (its capacitance) affects the spiking probability of the device (Supp. Fig. S14).
We also propose a circuit similar to the one described by Hodgkin and Huxley <cit.>, which employs two slightly different memristors, Fig. <ref>b.
This circuit exhibits a neuron-like behaviour with trains of spikes produced after a certain threshold input; the spike shape
depends on the specific realisation and combination of the memristive elements,
see Supp. Fig. S14.
In detail, the circuit can either produce i) a continuous response,
for a low enough input signal I_in=0.04 pA (blue line),
ii) a damped oscillatory response, with a slightly higher input signal
I_in=0.06 pA (orange line), or
iii) periodic spikes reminiscent of the waves of the action potential,
above the threshold I_in=0.1 pA (green line).
One voltage spike produced by a biological neuron consumes on the order of 10^-11 J, which is significantly less than the energy of spikes produced by solid-state neurons, which in turn outperform digital software-based ones <cit.>. We expect that the spikes produced by HyMN circuits, because of the size and architecture of the nanopores, will be on par with biological neurons in terms of efficiency, but will be more robust, because there are no moving parts as in voltage-gated ion channels, and more easily tunable, as specific mutations of the pore lumen have a predictable effect on hydrophobic gating, as demonstrated in this work. In contrast, mutations of voltage-gated ion channels have more complicated implications for the protein structure and the allosteric gating mechanisms <cit.> and, hence, are harder to engineer. Our findings showcase the potential of HyMNs as flexible building blocks of nanofluidic neuromorphic computing.
§ CONCLUSIONS
In this work, we propose and demonstrate a hydrophobically gated memristive nanopore (HyMN). Molecular dynamics simulations revealed the microscopic mechanism at the heart of the memristive behaviour, i.e., memory by electrowetting. Guided by the molecular dynamics results, we propose design criteria to narrow the parameter space where HyMNs can be found, pointing towards biological nanopores as promising candidates owing to their size and the possibility to carefully control their hydrophobicity by point mutations.
We tested our prediction by engineering a mutant of the biological FraC nanopore to have a hydrophobic constriction.
Molecular dynamics simulations demonstrated that it displays hydrophobic gating at low pH.
Electrophysiological experiments confirmed this microscopic insight,
showing a random telegraph signal only at low pH and displaying the expected hysteresis
behaviour which is a signature of memristors.
Engineered biological nanopores thus can serve as HyMNs, with important strengths: they are nanometer-sized, have no moving parts, are highly reproducible and economical, and advanced technologies are available to fine tune their properties <cit.>.
The computational capabilities of the brain have initiated the era of artificial intelligence, which in turn calls for suitable neuromorphic computing architectures,
that should be durable and sustainable.
The most advanced technologies so far have employed semiconductors, but nanofluidic memristors are making way. The proposed HyMN concept brings back to the original archetype from which this journey started, i.e., ion channels which confer to neurons their computational capabilities. Could the considerable simplification of hydrophobic gating together with the capabilities of molecular biology bring about a revolution in the field?
§ MATERIALS AND METHODS
§.§ Molecular dynamics setup
We used molecular dynamics simulations to extract the free-energy profile, the diffusivity, and their dependence on voltage.
These simulations were done using the molecular dynamics package LAMMPS <cit.> for the model pore
and NAMD <cit.> for the FraC <cit.>.
Model Nanopore.
We construct a membrane out of a slab of fixed atoms in a FCC arrangement,
with lattice spacing 0.35 nm, from which a nanopore was excavated.
Water (SPC/E <cit.>) was placed on both sides of the slab.
The interaction of the water with the surface was tuned <cit.> so that the contact angle is 104^∘.
The nanopore has a nominal diameter of 1.04 nm and a nominal length of 2.8 nm.
At each end of the water reservoirs, a thin slab of hydrophilic material is present,
which is used as a piston to control the pressure of the system <cit.>.
The NVT ensemble was sampled using a Nosé–Hoover chains thermostat <cit.>
at 310 K with a chain length of 3.
FraC Nanopore.
The starting PDB structure of the heptamer nanopore
was taken from the previous work <cit.>.
The missing N-terminals (sequence ASAD) were modeled by AlphaFold; the modeled sequence was ASADVAGAVIDGAGLG, generating an alpha-helix (best prediction) that was then aligned and merged to each of the seven chains of the starting FraC structure.
Afterwards, residues G6 and G13 were mutated to phenylalanine with the VMD <cit.> Mutator Plugin.
The resulting structure was minimised for 1000 steps of gradient descent in vacuum.
From the mutated pore, two systems were built: one at pH 7 and another at pH 3.8.
The protonation state of each titratable residue was assessed by computing the pKa with
PROPKA <cit.>.
In particular, it resulted that the D10 residues, at the centre of the hydrophobic region,
have an average pKa = 4.66 ± 0.06; so, they are completely protonated at pH 3.8.
The complete list of pKa and protonation state for the other amino acids can be found in Supp. Fig. S7.
The two systems were embedded in a lipid bilayer and immersed into a 1M KCl solution,
using the VMD's Membrane Builder, Solvate and Autoionize Plugins.
The final system consists of about 215,000 atoms
(∼450 molecules of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC),
50,000 water molecules, 850 potassium and 900 chloride ions)
and the simulation box had dimensions (142 × 142 × 130) Å^3.
For the MD simulations, we used the ff15ipq force field <cit.> for the protein,
the Lipid17 force-field <cit.> for the phospholipids
and the SPC/E water model <cit.>.
The systems were thermalised and equilibrated for 10 ns,
following the procedure illustrated in previous works <cit.>.
§.§ Restrained molecular dynamics
We use Restrained Molecular Dynamics (RMD) <cit.>
to compute the free energy as a function of the pore filling.
This is done by adding a harmonic restraint to the original Hamiltonian of the system,
H_N(r,p)=H_0(r,p)+k/2(N-Ñ(r))^2 ,
where r and p are the positions and momenta of all the atoms, respectively, H_0 is the unrestrained Hamiltonian,
k is a harmonic constant which was set to 1 kcal/mol,
N is the desired number of water molecules in a box centred around the nanopore,
and Ñ is the corresponding counting function,
measuring the instantaneous number of water molecules inside that box.
We used the same protocol for counting the number of water molecules as described in previous work [JCP paper].
A Fermi-like switching function with a Fermi parameter of 3 Å
is used to smooth the borders of the box and make the
collective variable continuous.
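As a minimal illustration of this collective variable, the snippet below (a sketch, not the production Colvars/LAMMPS implementation) evaluates the smoothed count Ñ for a set of water-oxygen coordinates using Fermi-like switching functions at the box faces, together with the harmonic bias of Eq. (<ref>); the box dimensions, coordinates and target filling are placeholders.

```python
import numpy as np

def smooth_count(positions, box_lo, box_hi, fermi=3.0):
    """Smoothed number of water oxygens inside a rectangular box.

    Each Cartesian direction contributes a product of two Fermi-like
    switching functions, so the per-molecule weight decays continuously
    over ~`fermi` angstroms across the box borders.
    """
    n_tilde = 0.0
    for r in positions:                                   # positions in angstrom, shape (N, 3)
        w = 1.0
        for x, lo, hi in zip(r, box_lo, box_hi):
            w *= 1.0 / (1.0 + np.exp(-(x - lo) / fermi))  # switches on above the lower face
            w *= 1.0 / (1.0 + np.exp((x - hi) / fermi))   # switches off above the upper face
        n_tilde += w
    return n_tilde

def bias_energy(n_tilde, n_target, k=1.0):
    """Harmonic restraint (k/2)(N - Ñ)^2, with k in kcal/mol."""
    return 0.5 * k * (n_target - n_tilde) ** 2

# Example with a hypothetical box centred on the pore constriction.
rng = np.random.default_rng(0)
waters = rng.uniform(-20.0, 20.0, size=(500, 3))
box_lo, box_hi = np.array([-6.0, -6.0, -8.0]), np.array([6.0, 6.0, 8.0])
n_tilde = smooth_count(waters, box_lo, box_hi)
U_bias = bias_energy(n_tilde, n_target=30.0)
```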
For the FraC nanopore, the protocol is implemented in NAMD by using the
Volumetric map-based variables of the Colvars Module <cit.>.
The centre and size of the counting box were set equal to the centre and extent of the hydrophobically mutated F6 and F13 rings.
The water molecules affected by the counting box are represented as VDW spheres in Fig. 3 of the main manuscript.
Each filling state, corresponding to each point of Fig. 3c,
was sampled for 2 ns.
Each trajectory was saved every 20 ps,
while the number of water molecules inside the box was saved every 1 ps.
§.§ Experimental setup
Chemicals.
Potassium chloride, sodium chloride, imidazole, urea, Citric acid, N,N-dimethyldodecylamine N-oxide (LDAO), chloroform, n-decane and LB medium were purchased from Sigma-Aldrich. 1,2-Diphytanol-sn-glycero-3-phophocholine (DPhPC) lipids and sphingomyelin were obtained from Avanti. Ampicillin and isopropyl-β-D-1-thiogalactopyranoside (IPTG) were purchased from Fisher Scientific.
FraC monomer expression and purification.
A pT7-SC1 plasmid containing the G6F/G13F-FraC gene was transformed into BL21(DE3) cells. The transformed cells were grown overnight at 37 °C on LB agar plates supplemented with 1% glucose and 100 μg/mL ampicillin. On the next day, a single colony was picked, resuspended in 10 mL LB medium, and grown at 37 °C overnight. The grown LB culture was transferred into 1 L LB medium supplemented with 100 mg/L ampicillin and grown under constant shaking at 37 °C until the OD600 reached a value of 0.8-1.0. At this point, 0.5 mM IPTG was added for induction and growth continued overnight at 25 °C. Afterwards, the cells were pelleted by centrifugation at 3500 rpm for 15 minutes and resuspended in 50 mL lysis buffer containing 150 mM NaCl, 2 M urea, 20 mM imidazole and 15 mM Tris buffered to pH 7.5. The mixture was sonicated using a Branson Sonifier 450 (2 minutes, duty cycle 30%, output control 3) to ensure full disruption of the cells. The lysate was pelleted by centrifugation at 10000 rpm for 30 minutes and the supernatant was carefully transferred to a fresh falcon tube. Meanwhile, 300 μL of Ni-NTA bead solution was washed with wash buffer containing 150 mM NaCl, 20 mM imidazole and 15 mM Tris buffered to pH 7.5. The beads were added to the supernatant and incubated at room temperature for 30 minutes under constant rotation. Afterwards, the solution was loaded on a pre-washed Micro Bio-Spin column (Bio-Rad) and washed with 50 mL of wash buffer. The bound protein was eluted in fractions of 200 μL of elution buffer (150 mM NaCl, 300 mM imidazole, 15 mM Tris buffered at pH 7.5). The presence of FraC monomers was confirmed using SDS-PAGE.
Sphingomyelin-DPhPC liposomes preparation.
An equal mixture of 25 mg sphingomyelin and 25 mg DPhPC was dissolved in 10 mL pentane containing 10 v/v% chloroform. A film was formed on the side of the flask by evaporation under a stream of nitrogen with constant rotation. The resulting film was dissolved in 10 mL of 150 mM NaCl and 15 mM Tris buffered to pH 7.5. The resulting liposome solution (5 mg/mL) was frozen (-20 °C) and thawed multiple times.
FraC oligomerisation.
Liposomes were added to FraC monomers in a 10:1 (lipid:protein) mass ratio at 37 °C for 30 minutes. Afterwards, the liposomes were solubilised by the addition of 0.6 v/v% LDAO. Subsequently, the sample was diluted 10 times with 150 mM NaCl buffered at pH 7.5 using 15 mM Tris supplemented with 0.02% DDM. Meanwhile, 200 μL of Ni-NTA was added and incubated for 1 h at 25 °C under constant rotation to purify the FraC oligomer. The mixture was then loaded on a Micro Bio-Spin column and extensively washed with 20 mL of wash buffer supplemented with 0.02% DDM. FraC oligomers were eluted in 200 μL elution buffer containing 1 M imidazole, 150 mM NaCl and 15 mM Tris buffered to pH 7.5, supplemented with 0.02% DDM. Oligomers can be stored at 4 °C for several weeks.
Planar lipid bilayer electrophysiological recordings.
Two fluidic compartments, cis and trans, are separated by a Delrin partition (Warner, USA) containing an aperture of approximately 150 μm in diameter. An Ag/AgCl electrode is placed in each compartment so as to make contact with the buffer solution. Planar lipid bilayers were formed in single-channel trials using the standard methods as described previously. In brief, the 150 μm aperture in the Delrin partition was pre-painted with 0.5 μL of 40 mg/mL DPhPC solution in n-decane prior to loading the buffer containing 1 M KCl buffered at pH 3.8 using 50 mM citric acid with bis-tris-propane, and then painted with 0.2 μL of 40 mg/mL DPhPC for the bilayer formation. G6F/G13F FraC oligomers were added to the cis compartment, which was connected to ground. The presence of a single channel was confirmed by the IV characteristics of the pore. After buffer exchange in the cis compartment to remove additional oligomers,
gating data were collected by applying different potential sequence protocols.
Data acquisition.
The ionic current was recorded using a Digidata 1550B (Molecular Devices) connected to an Axopatch 200B amplifier (Molecular Devices). Data is recorded with a sampling frequency of 10 kHz or 20 kHz.
§.§ Circuit simulation, numerical integration
For the circuit applications of the HyMN device we numerically solved the circuit equations taking into account the respective behaviour of the elements in our circuit.
Stochastic integrate and fire.
For the stochastic integrate-and-fire circuit we used an entrance resistance, R, of 10 TΩ, a membrane capacitance, C, of 10 pF, and a single-pore conductance for the HyMN, G, of 100 pS. The time evolution of the voltage across the capacitor is given by:
dV_c/dt = 1/C ((V_in - V_c)/R - V_c G n),
where n is a binary variable that is 0 if the HyMN is in the dry (non-conductive) state and 1 if it is in the wet (conductive) state. At every timestep, dt, we randomly select if the state of the HyMN changes by comparing against the survival function of an exponential process:
accept =
1 - e^-k_d dt, if state = 1 (wet, attempting to dry)
1 - e^-k_w dt, if state = 0 (dry, attempting to wet)
where accept is the probability that the state is changed, and k_w and k_d are the wetting and drying rates, respectively. The timestep of integration, dt, is 0.01 μs. The signal protocol consists of 15 square pulses of 4.5 V, in which the signal is kept high for T_on = 4 μs and then kept off for T_off = 10 μs.
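A schematic integration of this circuit is sketched below. It follows the update rule and switching probabilities given above and uses the quoted R, C, G, dt and pulse protocol as defaults, but the voltage dependence assumed for k_w and k_d is purely illustrative and the script is not the code used to produce Fig. <ref>.

```python
import numpy as np

def simulate_lif(V_in, dt=1e-8, R=10e12, C=10e-12, G=100e-12,
                 k_wet=lambda V: 1e3 * np.exp(5.0 * (abs(V) / 1.2 - 1.0)),   # illustrative
                 k_dry=lambda V: 1e5 * np.exp(-5.0 * abs(V) / 1.2),          # illustrative
                 seed=0):
    """Euler integration of the stochastic integrate-and-fire circuit.

    V_in: array of input voltages sampled every dt. The HyMN is a two-state
    element (dry: n = 0, wet: n = 1); switching is treated as a Poisson
    process with voltage-dependent rates evaluated at the capacitor voltage.
    """
    rng = np.random.default_rng(seed)
    Vc = np.zeros(len(V_in))
    I_out = np.zeros(len(V_in))
    n = 0                                                  # start with a dry, non-conductive pore
    for t in range(1, len(V_in)):
        Vc[t] = Vc[t - 1] + dt / C * ((V_in[t - 1] - Vc[t - 1]) / R - Vc[t - 1] * G * n)
        rate = k_dry(Vc[t]) if n == 1 else k_wet(Vc[t])    # rate of leaving the current state
        if rng.random() < 1.0 - np.exp(-rate * dt):        # state change in this timestep?
            n = 1 - n
        I_out[t] = Vc[t] * G * n                           # output current spike while wet
    return Vc, I_out

# Input protocol quoted in the text: 15 square pulses of 4.5 V, 4 us on / 10 us off.
dt = 1e-8
pulse = np.concatenate([np.full(400, 4.5), np.zeros(1000)])   # 4 us on, 10 us off at dt = 10 ns
V_in = np.tile(pulse, 15)
Vc, I_out = simulate_lif(V_in, dt=dt)
```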
Hodgkin-Huxley neuron.
For the Hodgkin-Huxley circuit we used a leak resistance, R_leak, of 1 GΩ, a membrane capacitance of 180 pF, and a maximum conductance for each HyMN, G_slow and G_fast, of 3100 pS, corresponding to some tens of individual pores. The voltage sources V_slow = -1 V, V_fast = 1.6 V, and V_leak = 0.5 V correspond, in the original Hodgkin-Huxley model, to the Nernst potentials of the different ion channels; here they are set to reasonable values that enable spiking.
The time evolution of the voltage across the membrane, V_m, is given by:
dV_m/dt = 1/C (I - G_slow(V_m)(V_m - V_slow) - G_fast(V_m)(V_m - V_fast) - (V_m - V_leak)/R_leak),
where the dependence of G_fast and G_slow on voltage is that of equation 1, but in the case of G_slow the wetting and drying rates are divided by 10.
The timestep of integration, dt, is 0.01 μ s.
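The sketch below integrates the same membrane equation using a deterministic mean-field approximation for the two memristive conductances (each G equals G_max times a wet-pore fraction that relaxes with voltage-dependent rates, the slow branch with rates divided by 10 as stated above). The rate laws are illustrative assumptions; the snippet shows how the two-memristor circuit is assembled, not a reproduction of the spike trains in Fig. <ref>.

```python
import numpy as np

def hh_like(I_in, dt=1e-7, C=180e-12, R_leak=1e9, G_max=3100e-12,
            V_slow=-1.0, V_fast=1.6, V_leak=0.5,
            k_wet=lambda V: 1e4 * np.exp(4.0 * (abs(V) / 1.2 - 1.0)),   # illustrative
            k_dry=lambda V: 1e5 * np.exp(-4.0 * abs(V) / 1.2)):         # illustrative
    """Mean-field sketch of the two-memristor Hodgkin-Huxley-like circuit."""
    n = len(I_in)
    Vm = np.zeros(n)
    p_fast, p_slow = 0.0, 0.0        # wet-pore fractions of the fast and slow branches
    for t in range(1, n):
        V = Vm[t - 1]
        p_fast += dt * (k_wet(V) * (1 - p_fast) - k_dry(V) * p_fast)
        p_slow += dt * 0.1 * (k_wet(V) * (1 - p_slow) - k_dry(V) * p_slow)   # slow rates / 10
        p_fast, p_slow = np.clip(p_fast, 0, 1), np.clip(p_slow, 0, 1)
        G_fast, G_slow = G_max * p_fast, G_max * p_slow
        dV = (I_in[t - 1]
              - G_slow * (V - V_slow)
              - G_fast * (V - V_fast)
              - (V - V_leak) / R_leak) / C
        Vm[t] = V + dt * dV
    return Vm

# Constant input currents quoted in the text: 0.04, 0.06 and 0.1 pA.
for I0 in (0.04e-12, 0.06e-12, 0.1e-12):
    Vm = hh_like(np.full(200000, I0))
```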
§ ACKNOWLEDGEMENTS
§.§ Funding
This research is part of a project that has received funding from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 803213).
The authors acknowledge PRACE for awarding us access to Marconi100 at CINECA.
The authors acknowledge CSCS for awarding us access to Piz Daint (project id s1178).
§.§ Authors contributions
M.C. and A.G. conceived the project.
G.P. carried out the theoretical analysis and the molecular dynamics simulations on the model nanopore,
with inputs from A.Gubbioti and A.G.
K.S performed the experiments.
G.D.M. carried out the molecular dynamics simulations on the FraC channel.
G.P and G.D.M wrote the article.
G.P and G.D.M analysed the experimental data.
All authors discussed the results and reviewed the final manuscript.
§.§ Conflicts of interest
The authors declare no competing interests.
§.§ Data and materials availability
All data reported in this article can be accessed at: 10.5281/zenodo.8018059
entry_id: http://arxiv.org/abs/2306.08842v2
published: 20230615040624
title: ViP: A Differentially Private Foundation Model for Computer Vision
authors: Yaodong Yu, Maziar Sanjabi, Yi Ma, Kamalika Chaudhuri, Chuan Guo
primary_category: cs.CV
categories: cs.CV, cs.CR, cs.LG
Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of foundation models trained on internet-scale data. On the flip side, the uncurated nature of internet-scale data also poses significant privacy and legal risks, as they often contain personal information or copyrighted material that should not be trained on without permission. In this work, we propose as a mitigation measure a recipe to train foundation vision models with differential privacy (DP) guarantee. We identify masked autoencoders as a suitable learning algorithm that aligns well with DP-SGD, and train ViP—a Vision transformer with differential Privacy—under a strict privacy budget of ϵ=8 on the LAION400M dataset. We evaluate the quality of representation learned by ViP using standard downstream vision tasks; in particular, ViP achieves a (non-private) linear probing accuracy of 55.7% on ImageNet, comparable to that of end-to-end trained AlexNet (trained and evaluated on ImageNet). Our result suggests that scaling to internet-scale data can be practical for private learning.
Code is available at <https://github.com/facebookresearch/ViP-MAE>.
§ INTRODUCTION
Foundation models (e.g., GPT-3, SimCLR, CLIP, etc. <cit.>) pre-trained on vast amounts of diverse unlabeled data through self-supervised learning (SSL) have emerged as an important building block for artificial intelligence (AI) systems <cit.>.
These foundation models enable downstream applications via fine-tuning, prompting, or training a simpler model on top of the learned representations to perform more specialized tasks, and have performed tremendously well on challenging benchmarks in both language and vision domains <cit.>.
Despite the widespread deployment of foundation models, there are significant privacy and legal risks of training these models on uncurated data that often contain personal information or copyrighted material. Although the training data for these models are considered public in most cases, some of the data may be sensitive; additionally, there are certain privacy and copyright laws that apply to model training even on such public data <cit.>.
In addition, studies have shown that generative foundation models such as GPT-3 can sometimes regurgitate memorized information about individuals and licensed content from its training data when prompted to do so <cit.>.
More recently, <cit.> showed that non-generative vision SSL models can also be probed to reveal sensitive information about individual samples in its training data when given partial information.
Given these risks, there is an urgent need to train foundation models that can adhere to relevant privacy and copyright laws. To this end, differential privacy (DP; <cit.>) seeks to limit the influence of individual training data points on the trained model, and hence has the potential to mitigate both privacy and copyright risks for sensitive information that is confined to a single or a few training examples <cit.>.
For any model that can be trained using gradient-based optimization, DP-SGD <cit.> can be applied instead to ensure that the trained model satisfies the rigorous definition of DP. However, there are still significant technical challenges in DP-SGD training of large-scale foundation vision models:
* Differentially private representation learning in general is a difficult problem. <cit.> showed that even handcrafted features can outperform feature learned by state-of-the-art DP-trained models, and attaining high-utility learned representations requires significantly more training data—much more than what is provided in typical supervised/curated datasets.
* Combining self-supervised learning (SSL) with internet-scale uncurated datasets may seem like a natural approach to gain access to the large amount of data needed for DP training. However, most vision SSL training algorithms are based on contrastive learning, where the objective function depends on multiple samples in an entangled manner. This makes it difficult to perform the per-sample gradient computation needed in DP-SGD.
* SSL training requires a much larger number of training epochs compared to supervised learning, which sharply increases the DP parameter ϵ, leading to meaningless privacy guarantees.
In this paper, we describe a successful recipe for training differentially private large-scale foundation models via SSL. Firstly, we identify masked autoencoder (MAE; <cit.>) as a promising SSL training algorithm that is amenable to DP-SGD. MAE uses an instance-separable loss function and does not require batch normalization, and hence per-sample gradients can be easily computed. We also show that it is tolerant to the large amount of Gaussian noise added in DP-SGD. Next, we demonstrate that MAE can effectively leverage synthetic datasets containing only programmatically-generated synthesized textures <cit.> to warm-start the DP training process, significantly reducing the number of training epochs required to reach a high-utility model. The combination of these two ingredients forms a powerful DP training recipe for obtaining high-utility differentially private foundation vision models.
We implement this training recipe on the LAION400M dataset <cit.>. We show that the resulting model, which we call ViP (Vision transformer with differential Privacy), learns highly useful and transferable representations—rivaling that of representation learned by SimCLR on ImageNet—while providing a strong DP guarantee with ϵ=8.
In Figure <ref>, we compare ViP with other private and non-private models in terms of downstream linear probing accuracy and fine-tuning accuracy for different image datasets:
* For iNat-2021 and Places-365 classification, outperforms both TAN <cit.>—the previous SOTA for DP supervised training—and AlexNet <cit.>, while matching or exceeding the performance of SimCLR pre-trained on ImageNet.
* On ImageNet, the linear probing accuracy of matches that of end-to-end trained AlexNet[The model is sourced from the PyTorch website and is end-to-end trained with supervised learning.].
* On MS-COCO detection and segmentation, ViP outperforms both SimCLR pre-trained on ImageNet and Mask R-CNN.
Our experiments demonstrate that by scaling DP-SGD training to vast amounts of unlabeled data and using synthetic data to warm-start the model, we can attain high-utility foundation vision models under stringent privacy guarantees. Consequently, we hope that future work can continue to build on our successful recipe and further push the performance boundary of large-scale DP training.
§ BACKGROUND
Differential privacy <cit.> is a mathematical framework for formal reasoning about information leakage through a private mechanism.
A learning algorithm 𝒜 is said to be (ϵ, δ)-differentially private (denoted (ϵ,δ)-DP) if for all training datasets 𝒟, 𝒟' that differ[We adopt the removal notion of adjacency, i.e., 𝒟' = 𝒟 ∪ {z} for some sample z, and vice versa.] in a single training sample, we have:
P(𝒜(𝒟) ∈ S) ≤ e^ϵ P(𝒜(𝒟') ∈ S) + δ
for all outcome sets S. More generally, Eq. (<ref>) can be expressed as a statistical divergence D(𝒜(𝒟) || 𝒜(𝒟')) between the distributions of models trained on 𝒟 vs. 𝒟', with (ϵ,δ)-DP corresponding to the “hockey-stick” divergence <cit.>. Another useful variant is Rényi differential privacy (RDP; <cit.>), which uses the Rényi divergence D_α <cit.>: 𝒜 is said to be (α,ϵ)-RDP if D_α(𝒜(𝒟) || 𝒜(𝒟')) ≤ ϵ.
Moreover, RDP can be converted to DP via the following <cit.>: if 𝒜 is (α, ϵ_α)-RDP then it is also (ϵ,δ)-DP with
ϵ = ϵ_α + log((α-1)/α) - (log δ + log α)/(α - 1).
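For concreteness, the conversion above can be evaluated over a set of Rényi orders to report the tightest (ϵ, δ) guarantee. The helper below is a small self-contained sketch with made-up RDP values; it is not the privacy-accounting code used in this work.

```python
import math

def rdp_to_dp(eps_alpha, alpha, delta):
    """Convert an (alpha, eps_alpha)-RDP guarantee into an (eps, delta)-DP guarantee."""
    return (eps_alpha
            + math.log((alpha - 1.0) / alpha)
            - (math.log(delta) + math.log(alpha)) / (alpha - 1.0))

def best_dp(rdp_curve, delta):
    """rdp_curve maps Renyi order alpha -> composed eps_alpha (e.g., T * per-step RDP);
    returns the smallest eps over the available orders."""
    return min(rdp_to_dp(eps_a, alpha, delta) for alpha, eps_a in rdp_curve.items())

# Hypothetical per-step RDP values composed over T steps (illustrative numbers only).
T = 10_000
per_step = {2.0: 1e-4, 4.0: 2.5e-4, 8.0: 6e-4, 16.0: 1.5e-3, 32.0: 4e-3}
curve = {alpha: T * eps_a for alpha, eps_a in per_step.items()}
eps = best_dp(curve, delta=1e-6)
```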
DP-SGD training. <cit.> showed that stochastic gradient descent (SGD)—the quintessential learning algorithm—can be made differentially private by perturbing the per-iteration gradient with Gaussian noise. The modified SGD update with gradient perturbation (often referred to as DP-SGD) is given by:
θ_t+1 = θ_t - η_t/|B_t| ( ∑_x ∈ B_t clip_C(∇_θ ℓ(x; θ)|_θ = θ_t) + 𝒩(0, σ^2 C^2 I) ),
where η_t is the learning rate, B_t is the sampled batch, σ > 0 is the noise multiplier, and clip_C is the operation that clips the per-sample gradient norm to at most C>0. It can be shown that this update procedure is (α, ϵ_α)-RDP for some computable ϵ_α <cit.>.
The end-to-end learning algorithm by running T iterations of SGD is thus (α, T ϵ_α)-RDP via composition <cit.>, and a conversion to (ϵ,δ)-DP can be obtained using Eq. (<ref>). Such privatization mechanism—per-sample clipping and injecting noise—can be easily integrated with other first-order optimization algorithms such as Adam <cit.> and AdamW <cit.>.
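A schematic implementation of this update is shown below. It loops over samples for clarity (practical implementations vectorize the per-sample gradients), and `model`, `loss_fn` and `batch` are placeholders supplied by the caller; this is a sketch of the update above, not the training code used in this paper.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr, clip_C, noise_mult):
    """One DP-SGD update: per-sample gradient clipping plus Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x in batch:                                    # iterate over individual samples
        loss = loss_fn(model, x.unsqueeze(0))          # per-sample loss l(x; theta), a scalar
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_C / (norm + 1e-12), max=1.0)   # clip_C operation
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = noise_mult * clip_C * torch.randn_like(s)
            # theta <- theta - (eta / |B|) * (clipped gradient sum + N(0, sigma^2 C^2 I))
            p.add_(-(lr / len(batch)) * (s + noise))
```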
Self-supervised learning (SSL) has emerged as a prominent approach for scaling up the training of machine learning models to large-scale unlabeled datasets.
Restricting our attention to the vision domain, SSL pre-trained models generalize effectively across a wide range of transfer learning downstream tasks such as classification, instance segmentation and object detection <cit.>, especially under the scenario of limited downstream training data.
Vision SSL methods can be broadly categorized as either joint embedding-based learning (JE) <cit.> or reconstruction-based learning (REC) <cit.>.
JE-based approaches design objective functions so that all views (or image augmentations) of the same sample have similar embeddings, while views of different samples have different embeddings. As a result, most JE-based approaches require a batch containing multiple samples in order to define the objective function.
On the other hand, REC-based approaches aim to optimize models to reconstruct image inputs in the pixel space based on partially masked inputs, which promotes the model to learn compressed representations that can generalize well.
Related work.
Recently, an expanding body of literature has emerged on scaling DP training to large-scale datasets and models in both NLP and vision domains.
In NLP, a series of works <cit.> showed that by combining public pre-training and scaling up the training batch size, it is possible to fine-tune the pre-trained language model to achieve reasonable downstream performance.
In computer vision, <cit.> first attempted to scale DP training of convolutional neural networks (ResNets) to ImageNet.
<cit.> further improved the performance of <cit.> with a Normalizer-Free ResNet architecture and an improved training recipe. More recently, <cit.> proposed a more efficient hyperparameter tuning method for DP training that led to state-of-the-art performance on ImageNet. It is worth noting that all these works on DP-trained computer vision models focus on training supervised models.
§ RECIPE FOR TRAINING DP FOUNDATION VISION MODELS
In this work, we identify a successful recipe for training differentially private foundation vision models. Training DP foundation models, or in general any deep learning model with a large number of parameters, poses a significant challenge due to the large amount of injected noise—(0, σ^2 C^2 I) in Eq. (<ref>).
Indeed, current state-of-the-art differentially private deep learning models even under-perform linear models with handcrafted features when ϵ is small <cit.>.
We propose two effective techniques that reduce the magnitude of noise injected during training while attaining strong (ϵ, δ)-DP guarantees: 1. Scaling up the number of training samples via SSL with masked autoencoder; and 2. Facilitating faster training by warm-starting the model with weights pre-trained on synthetic samples.
§.§ Differential Private SSL with Mask Autoencoder
Most existing works on differentially private training <cit.> focus on supervised learning, which inherently restricts the quantity of training samples that can be utilized.
In contrast, self-supervised learning approaches unlock the use of (albeit uncurated) internet-scale training data that can be on the order of billions of samples, which can potentially satisfy the amount of data needed for DP training of high-utility models <cit.>.
On the other hand, most existing SSL training approaches do not align with requirements in DP-SGD training.
For example, SimCLR <cit.> requires a mini-batch of samples in order to compute the contrastive loss; BYOL <cit.> computes per-sample loss but it utilizes batch normalization (BN) <cit.> in the model architecture, resulting in each loss depending on a mini-batch of training samples.[
Subsequent work by <cit.> demonstrated that BN can be substituted with group normalization by carefully modifying the model architecture.
However, we have observed that the design of exponential moving averaged online network in BYOL can result in dynamic instability during training, which poses challenges in the context of DP training.]
Therefore, it is challenging to perform the per-sample gradient clipping as described in Eq. (<ref>).
Among various types of SSL methods, we identify reconstruction-base learning with masked autoencoders (MAE) <cit.> as one of the most suitable SSL approaches for training DP foundation vision models.
The training objective L_MAE(θ) in MAE is defined as:
L_MAE(θ) := 1/n ∑_i=1^n ℓ_MSE(g ∘ ψ(mask(x_i); θ), x_i) = 1/n ∑_i=1^n ℓ(x_i; θ),
where n is the number of training samples, x_i ∈ ℝ^C×H×W is the input of the i-th training image (C-number of channels, H-height, W-width), mask(·) is a function that masks out a fraction of the image, ψ: ℝ^C×H×W → ℝ^d is the encoder and g: ℝ^d → ℝ^C×H×W is the decoder. We use θ to denote the trainable parameters of ψ and g, and use ℓ_MSE to denote the mean squared error (MSE) loss defined on the pixel space, i.e., ℓ_MSE(x_1, x_2) = ‖x_1 - x_2‖_F^2. Similar to <cit.>, we apply vision transformers <cit.> to instantiate the encoder and decoder maps.
As shown in Eq. (<ref>), the training objective can be decomposed into n individual losses, and each individual loss ℓ(x_i; θ) only depends on the i-th training sample x_i and does not require the label of x_i.
Therefore, we can compute the per-sample gradient ∇_θ ℓ(x_i; θ) and perform per-sample gradient clipping without modifying the MAE training.
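To make the instance-separability concrete, the toy example below computes a masked-reconstruction loss that returns one scalar per image, which is exactly what per-sample clipping needs. The tiny fully-connected encoder/decoder and the pixel-level random mask are stand-ins for the actual ViT-based MAE with patch-wise masking; they are illustrative, not the architecture used in this paper.

```python
import torch
import torch.nn as nn

class ToyMAE(nn.Module):
    """Stand-in for the encoder/decoder pair g(psi(.)); not the actual ViT."""
    def __init__(self, dim=3 * 32 * 32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(dim, hidden), nn.GELU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x_masked):
        return self.decoder(self.encoder(x_masked)).view_as(x_masked)

def per_sample_mae_loss(model, x, mask_ratio=0.75):
    """Instance-separable loss l(x_i; theta): mask, reconstruct, MSE in pixel space.

    Because the loss of sample i depends only on x_i, per-sample gradients
    (and hence DP-SGD clipping) can be computed without further changes.
    """
    mask = (torch.rand(x.shape) < mask_ratio).float()
    recon = model(x * (1.0 - mask))                       # model only sees unmasked content
    return ((recon - x) ** 2).flatten(1).mean(dim=1)      # one scalar loss per sample

model = ToyMAE()
x = torch.randn(8, 3, 32, 32)
losses = per_sample_mae_loss(model, x)                    # shape (8,): one loss per image
```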
By leveraging the self-supervised MAE training paradigm, we can now significantly scale up the training data size for DP SSL pre-training. Dataset scaling can effectively reduce the magnitude of noise in DP-SGD while maintaining the same (ϵ, δ_n)-DP guarantee, where δ_n=1/2n.
As shown in Figure <ref>, we investigate the impact of injected noise in training by keeping all training hyperparameters the same except for the number of training samples[We maintain the same batch size across various data size settings while modifying the noise multiplier σ.
Consequently, as the data size increases, the corresponding σ values decrease.]. With more training samples, the magnitude of the injected noise σ becomes smaller.
We find that when the noise magnitude is large, the training loss cannot be further optimized after certain number of training steps.
In contrast, smaller magnitude of noise (as a result of larger training dataset) facilitates faster optimization of the training loss in comparison to larger noise scenarios. Importantly, the optimization trajectory is stable despite the presence of noise, allowing the MAE model to learn useful features.
§.§ Synthetic Pre-training Enables Faster DP Training for
Non-private training of SSL models often require a significant number of training epochs, much larger than what is required in supervised learning <cit.>. This creates an additional challenge for DP training since the number of training iterations T directly impacts the privacy guarantee. Indeed, as mentioned in Section <ref>, DP-SGD with T iterations is (α, T ϵ_α)-RDP. Consequently, naively applying DP-SGD to MAE training results in an unfavorable privacy-utility trade-off.
Fortunately, <cit.> demonstrated that using pre-trained initialization enables much faster model convergence compared to random initialization. However, in light of our discussion in Section <ref>, it is critical that the pre-training data does not contain any private information, even if the data is deemed “public”. One promising alternative is pre-training on programmatically-generated synthetic images <cit.>, which was shown to achieve competitive downstream performance compared to pre-training on natural images.
Doing so allows the MAE to learn spatial structure in the transformer modules <cit.> without expending any privacy budget for the natural image data. More importantly, synthetic pre-training does not carry any privacy risk, and legal risk is limited to obtaining proper license for the synthetic image generation code.
Thus, to accelerate training, we pre-train the model on synthetic images generated using the Shaders21k tool developed in <cit.>. Figure <ref> shows samples of synthetic images generated by the tool.
In Figure <ref>, we compare the training with and without synthetic pre-trained initialization. Notably, training with synthetic pre-trained weights converges significantly faster than those with random initialized weights. Increasing the synthetic pre-training from 20 to 900 epochs further improves convergence for training. Interestingly, as shown in Figure <ref>, MAE trained on the synthetic dataset already outperforms existing state-of-the-art DP-trained models <cit.> under our transfer learning evaluation, which shows that DP training on datasets even as large as ImageNet does not learn sufficiently expressive features (see Table <ref>).
§.§ Our Proposed Approach
We now summarize our approach for DP foundation vision model training (also see Figure <ref>):
[colback=myblue!2!white, colframe=myblue!60!gray]
DP-MAES – DP Masked Autoencoder with Synthetic Pre-training
* Step 1: Synthetic pre-training for initialization. Pre-train mask autoencoder on the synthetic dataset with non-private optimizers.
* Step 2: DP training with synthetic initialization. Apply the synthetic pre-trained model as initialization and train mask autoencoder on a large-scale natural image dataset (e.g., LAION400M) with DP-SGD. The DP guarantee then applies to the natural image dataset.
It is worth mentioning that our proposed approach offers flexibility in the selection of both SSL training methods and synthetic datasets.
For example, developing better synthetic datasets or more effective SSL learning method can further push the performance of the final DP foundation model.
§ EVALUATION
We evaluate the effectiveness of our training recipe by applying it to the LAION400M dataset to train our private foundation vision model: ViP. We consider various downstream tasks in order to demonstrate the quality and transferability of its learned representation.
Furthermore, we compare ViP to previous state-of-the-art DP-trained models as well as widely adopted non-privately trained models, and find that ViP significantly improves SOTA for DP training on downstream transfer tasks (Section <ref>) and even outperforms non-private models on several challenging datasets.
In addition to assessing the performance of on non-private downstream tasks, in Section <ref>, we also evaluate the model via DP fine-tuning on ImageNet-1K, which shows a notable improvement of 10%+ absolute top-1 accuracy compared to previous SOTA <cit.>.
For additional experimental results on , see Appendix <ref>.
§.§ Evaluation Setup
Our implementation uses PyTorch, along with the package <cit.> for computation of per-sample gradients and the package <cit.> for privacy accounting.
See Appendix <ref> for additional implementation details.
Datasets. We use 1.05 million samples generated using the Shader21k <cit.> tool as our synthetic pre-training dataset, and the LAION400M <cit.> as our private pre-training dataset for the ViP model[Some of the links in LAION400M are now broken since its initial release, and the version we use contains ∼233 million real images.
We use LAION233M to denote this subsampled version of LAION400M.].
We evaluate ViP and baseline models via non-private linear probing and fine-tuning on the following downstream classification datasets: ImageNet-1K <cit.>, Places-365 and Places-205 <cit.>, iNaturalist-2021 and iNaturalist-2018 <cit.>, CIFAR-100 <cit.>, Caltech101 <cit.>, and Aircraft <cit.>. The input images are resized and center-cropped to 224×224 resolution.
We also evaluate using MS-COCO instance segmentation and object detection <cit.>, and semantic segmentation with the ADE20K dataset <cit.> (in Appendix <ref>).
Model architecture. Following <cit.>, we use vision transformer (ViT) <cit.> to instantiate the masked autoencoder models. The default MAE-encoder has 12 transformer blocks and width 768, and the default MAE-decoder has 4 transformer blocks and width 512. We denote this MAE model as MAE-base. We also consider MAE models with different model sizes, including MAE-Nano, MAE-Tiny, MAE-Small and MAE-Large in Section <ref>.
Optimization and hyperparameters for (DP-)MAE training. We use AdamW <cit.> for training MAE – both for synthetic pre-training and differentially private MAE pre-training.
When evaluating pre-trained models in downstream tasks, we apply LARS <cit.> for linear probing and AdamW for fine-tuning.
For MAE training, we set the masking ratio to 75%. In terms of DP training, we set ϵ=8.0 and δ=1/2n by default for training (ϵ, δ)-DP model.
We set the clipping parameter C=0.1, sampling ratio q=81920/n, and noise parameter σ=0.5.
Existing methods for comparison.
We compare with existing state-of-the-art DP-trained models: DP-NFNet <cit.> and TAN <cit.>, both of which are trained differentially privately on ImageNet-1K using supervised learning. In addition, we present the results of several widely used non-private models that are pre-trained on ImageNet-1K, including AlexNet <cit.> (supervised learning-based) and SimCLR <cit.> (SSL-based), for reference.
To measure the effectiveness of DP pre-training compared to synthetic pre-training, we also evaluate the model pre-trained on synthetically generated Shader21k data, denoted (Syn)-ViP.
§.§ Transfer Learning Evaluation
To show that ViP learns high-quality representations from its training data, we evaluate its transfer learning performance on a suite of image classification tasks using both linear probing and few-shot fine-tuning.
For linear probing, we use all the training samples in the downstream task training set to learn the linear classifier, while freezing all layers except for the final linear layer.
For few-shot fine-tuning, we randomly select K training samples from each class and fine-tune the entire model.
It is worth noting that both linear probing and fine-tuning evaluations are done using non-private training; our pre-trained ViP model only satisfies (ϵ, δ)-DP on the LAION233M dataset.
Linear probing.
Table <ref> shows the linear probing results on four large-scale image classification datasets: ImageNet-1K, Places-365/205 and iNat-2021.
The most suitable baselines in this setting are DP-NFNet and TAN, both of which are DP-trained on ImageNet-1K with ϵ=8 and represent previous state-of-the-art in large-scale DP pre-training.
First of all, we find that MAE pre-training only on synthetic images (i.e., (Syn)-ViP) is already comparable or even outperforms SOTA DP pre-trained models.
After differentially privately pre-training on LAION233M, ViP effectively improves the performance of (Syn)-ViP on all datasets by a large margin.
Importantly, ViP even outperforms non-private SimCLR pre-trained on ImageNet-1K on all datasets (except ImageNet-1k itself because SimCLR does not need to transfer), and achieves similar performance as end-to-end non-privately trained AlexNet.
To the best of our knowledge, this is the first time a DP-trained model can achieve similar performance on vision benchmark datasets as that of a mainstream (albeit older) model, which demonstrates the potential of our training recipe.
Few-shot fine-tuning. Table <ref> shows the few-shot fine-tuning results on Aircraft, Caltech-101 and CIFAR-100.
Similar to the linear probing result, (Syn)-ViP already outperforms TAN—the previous SOTA DP-trained model—across all evaluation settings except for 10-shot classification on Aircraft.
Next, we find that ViP can largely improve upon (Syn)-ViP when the number of samples per class is small, attaining SOTA performance in all evaluation settings. ViP also achieves better performance than non-privately pre-trained AlexNet by a large margin, but falls short against non-private SimCLR despite having access to more than 100× training data. Thus, our result can be viewed as both a positive and a negative result, showing that there is still a long way to go for private learning before matching the performance of mainstream vision models across the board.
§.§ Scaling Properties
We now study scaling properties of our training recipe, including scaling up (1) the model size, (2) the training set size, and (3) the previously known successful recipe of scaling up batch size.
Scaling up model size.
DP-SGD training is generally unfavorable to large models because the noise magnitude increases with model size. Interestingly, we show that model performance in fact improves by scaling up model size using our training recipe.
Specifically, we change the MAE-encoder size while fixing the MAE-decoder size, resulting in five different model sizes from MAE-Nano to MAE-Large; Table <ref> in Appendix <ref> gives architecture details, including the number of parameters.
All models are trained to satisfy the same (ϵ, δ)-DP guarantee with ϵ=8.
Figure <ref> plots the training curve for the different-sized models. At the beginning of DP training, due to synthetic pre-training, a larger MAE model can learn more expressive features and hence the MAE training loss on LAION233M decreases as model size increases.
Intriguingly, the training losses of MAE-Small/Base/Large are similar at the beginning, but larger ViT models achieve faster convergence despite the large amount of DP noise.
Although similar observations that larger models converge faster have also been described in the context of non-private learning <cit.>, the fact that we observe the same phenomenon in Figure <ref> suggests that model scaling can be effective even for private learning under our training recipe.
Figure <ref> shows the effect of model scaling on downstream linear probing and fine-tuning performance.
In particular, the effective reduction in training loss shown in Figure <ref> indeed translates to better downstream performance, with larger ViP model consistently achieving better accuracy without modifications to the training process.
Moreover, comparing ViP with synthetic pre-training (blue line) vs. random initialization (gray line) shows that synthetic pre-training is crucial for unlocking this scaling behavior: the difference in performance between MAE-Large and MAE-Nano is much smaller when the model is randomly initialized.
Scaling up dataset size.
Next, we investigate the effect of scaling up the number of training samples in training.
We vary the training dataset size from 2M to 23M to 233M while choosing the magnitude of injected noise σ so that models trained on different dataset sizes satisfy (ϵ, δ_n)-DP guarantee with ϵ=8 and δ_n=1/2n, where n is the number of training samples. Table <ref> shows downstream evaluation results. The first row corresponds to the synthetically pre-trained ViP model and rows 2-4 correspond to DP-trained ViP models with different dataset sizes. As expected, a larger pre-training dataset size results in a higher-utility ViP model.
For example, scaling from 2M to 233M gives 3.1% linear probing accuracy gain on ImageNet-1K (from 52.6% to 55.7%).
Given that the collection of large labeled datasets is very costly in practice, these results highlight the significance of self-supervised learning in DP training.
Scaling up batch size. Scaling up the training batch size is a known effective way to achieve strong performance in DP supervised learning <cit.>. We analyze the effect of batch size in training ViP models and show that the same observation holds for DP self-supervised learning. We consider three different batch size B ∈{8192, 32768, 98304}, and keep the computational budget—number of per-sample gradient computation—the same for all batch sizes.
We then select the noise σ such that models trained with different batch size satisfy the same (ϵ, δ)-DP.
As shown in Figure <ref>, we find that larger batch size leads to better stability in the training process as well as faster convergence under the same computational budget.
Rows 5-7 in Table <ref> demonstrate that larger batch size also translates to a substantial improvement in ViP's transfer learning performance.
§.§ DP Fine-tuning on ImageNet-1K
Thus far, our main emphasis has been on evaluating DP pre-trained through non-private linear probing or fine-tuning on downstream tasks.
For certain use cases, the downstream task training set may be privacy-sensitive as well and DP fine-tuning is required. We simulate such a scenario by fine-tuning the privately pre-trained ViP model[-Base pre-trained on LAION233 shown in the last row of Table <ref>.] on ImageNet-1K with DP-SGD.
As a result, the fine-tuned model satisfies (8, 8· 10^-7)-DP on the ImageNet-1K dataset in addition to the LAION233M dataset.
We compare against prior works on training DP ImageNet models without pre-training <cit.>; results are summarized in Table <ref>.
By utilizing our pre-trained as an initialization, we observe an improvement in top-1 accuracy of more than 10% compared to the previous SOTA <cit.>, demonstrating the efficacy of our DP pre-training recipe.
§ DISCUSSION AND FUTURE WORK
We developed a recipe for DP self-supervised learning of foundation vision models, and showed that the resulting model—ViP—can achieve downstream performance matching or exceeding that of mainstream non-private models such as SimCLR (with ImageNet-1K pre-training). Our work shows the potential of scaling DP training to internet-scale unlabeled datasets and presents several opportunities for future work.
1. Our recipe adapted MAE to DP-SGD training with minimal modifications. It may be possible to design more specialized SSL training algorithms that conform to the requirements of DP-SGD and are more effective at learning useful representations.
2. Multi-modal SSL is generally more effective than single-modality pre-training due to the additional supervision from cross-modal alignment <cit.>. However, existing multi-modal SSL methods are mostly based on contrastive learning (e.g., CLIP <cit.>, SLIP <cit.> and FLIP <cit.>) and do not admit per-sample gradient computation. Additional work may be needed to adapt these methods to DP-SGD training.
Acknowledgements We thank Xinlei Chen for helpful discussions on masked autoencoders.
abbrvnat
§ IMPLEMENTATION AND EVALUATION DETAILS
In this section, we provide implementation details for training and evaluating , , as well as other existing methods.
§.§ Details for MAE model
In Table <ref>, we provide details for backbones of MAE model with different model sizes.
Both MAE-Large and MAE-Base encoders are constructed following the identical setup described in <cit.>.
§.§ Details for Pre-training
For pre-training, we follow the training setup outlined in <cit.>: we apply the training parameters specified in Table 8 of <cit.> and pre-train on the S21k dataset developed in <cit.>, which comprises 1,300,000 training samples, for a total of 1,000 epochs.
Our pre-training applies the self-supervised MAE training methodology and does not use the label information available in the S21k dataset.
We now present details for differentially private pre-training.
As mentioned in Section <ref>, we first initialize the model weights with pre-trained on S21k dataset.
Then we apply DP-AdamW[A variant of the standard DP-SGD — we first compute the noisy clipped stochastic gradient described in Eq. (<ref>), then apply one step update of AdamW <cit.> using the estimated gradient. ].
See the table below for training hyperparameters.
For masking in the MAE training, we follow the random masking strategy and masking ratio of 75% in <cit.> for both pre-training and pre-training.
The process of executing each iteration of DP-AdamW for training the -Base model takes approximately 25 seconds when utilizing 48 A100 (40GB) GPUs.
Each epoch of the -Base model's training process takes roughly 90 seconds to complete with 48 A100 (40GB) GPUs.
§.§ Details for Downstream Classification Task
Linear probing. We follow the training setup in <cit.>: we apply BatchNorm <cit.> before the last linear layer, and use the LARS <cit.> optimizer.
We choose the base learning rate from {0.1, 0.05, 0.01}, batch size B = 16,384, and weight decay λ = 0.0.
We set warmup epoch as 10, and total training epoch as 90.
We use the and augmentations.
Few-shot fine-tuning.
For vision transformer based architectures, we apply the AdamW optimizer with learning rate of lr∈{3· 10^-3, 3· 10^-4, 3· 10^-5} and set weight decay as 0.05.
For convolutional neural networks (AlexNet, ResNet used in SimCLR), we apply the SGD optimizer because it consistently outperforms AdamW. We select learning rate lr∈{1· 10^-2, 1· 10^-3, 1· 10^-4}, while setting the momentum as 0.9 and the weight decay as 0.0.
For all models we apply the cosine learning rate decay, and use 10 warm-up epochs and fine-tine with 200 total epochs.
We apply AutoAugment <cit.> for data augmentation.
§.§ Details for Downstream Segmentation and Detection Tasks
COCO object detection and segmentation.
We fine-tune the pre-trained and on COCO with the package <cit.>.
We apply the pre-trained -Base and -Base as the ViT initializations for the detection and segmentation tasks, and apply the default hyperparameter config in for ViTDet-Base.
ADE20K semantic segmentation.
We follow the setup described in <cit.> on evaluating pre-trained MAE models for semantic segmentation.
We apply the UPerNet <cit.> and perform fine-tuning for 100 epochs with a batch size of 16.
§.§ Details for Differentially Private Fine-tuning on ImageNet
We use the pre-trained encoders of and and apply DP-AdamW for DP end-to-end fine-tuning.
The details for parameters in DP-AdamW can found in the following table.
We use 50 iterations for learning rate warm-up, and then keep the learning rate constant afterwards.
For selecting parameters not presented in the aforementioned table, we adopt the default configuration of AdamW in <cit.>.
The fine-tuned model satisfies (8, 8· 10^-7)-DP on the ImageNet-1K dataset in addition to the LAION233M dataset.
§.§ Details for Figure <ref>
For the linear probing results, we present the performance of the -Large model, with the summarized results shown in the last row of Table <ref>.
Regarding the detection and segmentation results, we utilize the -Base model as the ViT backbone, and the corresponding outcomes can be found in Table <ref>.
§ ADDITIONAL EXPERIMENTAL RESULTS
In this section, we provide additional experimental results on evaluating , , as well as other existing methods.
§.§ Segmentation and Detection Evaluations of /
We summarize the results for object detection and segmentation in Table <ref>. Training details can be found in Appendix <ref>.
§.§ Additional Experiments on ViP Pre-training
In Figure <ref>, we plot the training loss vs. the number of training steps for training without synthetic pre-trained initialization.
Compared to the results in Figure <ref>, when pre-training from scratch with DP-AdamW, larger models do not converge faster than smaller ones.
These results further demonstrate the effectiveness of synthetic pre-training for unlocking DP-SGD training of larger vision models.
§.§ Additional Experiments on the Classification Task
Comparison with non-private MAE. To gain a better understanding of the gap between non-private training and private training, we use the same synthetic pre-trained model as initialization and perform DP-AdamW training on LAION233M with σ=0.0[In this case, the ϵ=+∞ for the (ϵ, δ)-DP.].
We keep most of the training parameters the same except for setting the sampling ratio to q=4096/n and the number of iterations T=60,000[While the trained model may not necessarily achieve optimal performance, our main purpose is to present a non-private model that follows a similar training setup, with the exception of setting the noise to zero. This allows us to compare its performance to the private model.].
We then evaluate the linear probing (few-shot fine-tuning) performance of the trained model and provide the results in Table <ref> (Table <ref>).
For linear probing, our ViP model closes more than half the gap between the (Syn)-ViP model and the non-private MAE model. With a more refined training recipe, it is plausible that the gap can be reduced even further, allowing DP-trained foundation vision models to rival non-privately trained ones on certain downstream tasks.
In the context of few-shot fine-tuning, a comparison between private learning and the non-private MAE model reveals considerable potential for improvement in the private learning approach.
Linear probing evaluation of ViP with different model sizes.
We study the scaling behavior of and through linear probing.
As shown in Table <ref>, we compare the performance of and with different model sizes.
The performance of consistently improves across all datasets as the model size increases.
In contrast, increasing the model size from MAE-Base to MAE-Large results in less than 1% improvement in top-1 accuracy for .
These findings further underscore the effectiveness of our proposed training recipe for scaling up model size in private pre-training.
§.§ Ablation Experiments
We study the effect of MAE-decoder depth and MAE-masking ratio in pre-training, and evaluate different models with linear probing on ImageNet-1K.
We consider the -Base setting and the results are summarized in Table <ref>.
|
http://arxiv.org/abs/2306.04392v1
|
20230607124701
|
On Galois groups of type-1 minimally rigid graphs
|
[
"Mehdi Makhul",
"Josef Schicho",
"Audie Warren"
] |
math.CO
|
[
"math.CO",
"math.MG"
] |
On Galois groups of type-1 minimally rigid graphs
Mehdi Makhul, Josef Schicho, Audie Warren
July 31, 2023
======================================================================================================================================
For every graph that is minimally rigid in the plane, its Galois group is defined as the Galois group generated by the coordinates of its planar realizations, assuming that the edge lengths are transcendental and algebraically independent. Here we compute the Galois group of all minimally rigid graphs that can be constructed from a single edge by repeated Henneberg 1-steps. It turns out that any such group is totally imprimitive, i.e., it is determined by all the partitions it preserves.
§ INTRODUCTION
A graph is called rigid in the plane if the number of its realizations – maps from vertices to the plane and from edges to straight line segments – with fixed lengths is generically finite. A graph is minimally rigid if the removal of any edge causes the graph to become non-rigid. Minimally rigid graphs have been characterised by Geiringer <cit.>, and independently by Laman <cit.>. It is known that all minimally rigid graphs can be constructed from a single edge by two basic operations on graphs, called the Henneberg 1-step and Henneberg 2-step.
This paper is devoted to the problem of calculating Galois groups for minimally rigid graphs of type-1 - that is, minimally rigid graphs that can be constructed from a single edge by repeated applications of the Henneberg 1-step. Galois groups of minimally rigid graphs were studied previously in <cit.>, where the authors conjectured that if a minimally rigid graph is 3-connected then its Galois group is not solvable. The minimally rigid graphs that can be constructed by repeated applications of the Henneberg 1-step fall into the subclass of those for which Owen <cit.> proved that the Galois group is solvable – actually, it is proven that the Galois group is even a 2-group in this case. As a consequence, the realizations of such a graph for given edge lengths can be constructed by compass-and-ruler constructions (see <cit.>). The main result of this paper is a complete description of the Galois group for type-1 graphs.
The question may be embedded in the wider class of problems of computing Galois groups for enumerative geometric problems, such as lines on cubic surfaces.
For any geometric problem that has only finitely many solutions for a given instance of parameters, such a Galois group can be defined, namely the Galois group
of the field extension generated by coordinates of all solutions in the algebraic closure of the field generated by the parameters. By elimination theory,
the geometric problem can be reduced to finding roots of a single univariate polynomial with coefficients depending on parameters, and the Galois group
is the Galois group of the splitting field of this polynomial.
Computing Galois groups of geometric problems is an active area in algebraic geometry, with far-reaching results <cit.>, <cit.>, <cit.> <cit.>. Known methods for computing these Galois groups include monodromy and the decomposition of the discriminant variety (see <cit.>). In this paper, we stay on a level that is more elementary: we just use the Galois correspondence, known properties of polynomials, and known results on rigid graphs.
The first sequence in <cit.> gives the number of isomorphy classes of groups of order n, for each n. This sequence has high peaks at a power of 2:
the number of isomorphy classes is particularly large if n is a power of 2. Therefore there are potentially many candidates for the Galois groups we
want to identify - recalling that it was already proven that the Galois group of a type-1 graph is a 2-group. We observed a common feature in all these groups, which can be described as follows. The Galois groups come with a transitive action on the
set of realizations. The group is called imprimitive if the set on which it acts can be partitioned into blocks, such that each permutation induces a permutation on blocks. We call a group totally imprimitive if there is a set of partitions into blocks, such that the group consists of all
permutations that take blocks into blocks, for each partition. Theorem <ref> says that the Galois group of any minimally rigid graph that can be
constructed by Henneberg 1-steps is totally imprimitive. Any block partition is related to a subset of vertices: two realizations are in the same block if and only if the lengths between two points in the subset are the same for both realizations.
The structure of the paper is as follows. In Section <ref> we give the necessary definitions and preliminaries. In Section <ref> we prove our main result, and in Section <ref> we give an example of how our result can be used to calculate Galois groups.
§ PRELIMINARIES
Let =(V,E) be a graph. A labelling of is a map λ E →. A realization of is a function ρ V →^2. We say that a realization of is compatible with a labelling λ if for each edge e∈ E, the Euclidean distance between its endpoints agrees with its label, that is,
λ(e)=⟨ρ(u)-ρ(v),ρ(u)-ρ(v)⟩, where e={u, v }.
Labels of the graph therefore correspond to squared edge lengths in the realisation. Realisations of a graph with a fixed labelling are considered equivalent, if the points given are the same up to rotation and translation.
There is a classification (see <cit.>, <cit.>) of minimally rigid graphs (also called Laman graphs), purely in combinatorial terms; a graph is minimally rigid iff |E|= 2|V|-3, and for every subgraph '=(V',E') with at least two vertices we have |E'|≤ 2|V'|-3. For a given minimally rigid graph , each of its realizations corresponds to a point in ^2|V|, and each of its labellings corresponds to a point in ^|E|. Since we wish to count realizations up to equivalence, we fix two vertices v_1,v_2 that are connected by an edge, and we assume that λ(v_1,v_2)=1 and that ρ(v_1)=(0,0), ρ(v_2)=(1,0).
The cardinality of the preimage of a generic point under h_ (the map sending a realization to its induced labelling) is equal to the number of compatible realizations. We call the set of these realisations ℱ.
There are two processes, called the Henneberg 1 and Henneberg 2 steps, which can generate all minimally rigid graphs from an edge.
By the following two rules we can construct all minimally rigid graphs, starting from a single edge.
* Henneberg 1-step: add a new vertex and connect it to two
existing vertices.
* Henneberg 2-step: select three vertices of the graph, at
least two of which are connected by an edge e; delete the edge e;
add a new vertex and connect it to the three chosen vertices.
A graph which can be constructed from a single edge by applications of only the Henneberg 1-step is called a type 1 graph.
§.§ The Galois group of a minimally rigid graph
Given a minimally rigid graph , from here on we shall always consider a generic labelling λ of . In particular, we assume that
λ_v_1,v_2=1 and that each other label λ_i,j:=λ({i,j}) is a transcendental element over ℚ,
so that the set of all labels, given by
Λ := {λ_i,j : {i,j}∈ E∖{v_1,v_2}}
generates a purely transcendental field extension of ℚ, with Λ algebraically independent over ℚ. Call this field extension 𝕂 = ℚ (Λ). Furthermore, for such a labelling λ, let ℱ be the set of realisations of compatible with λ. For each ρ∈ℱ, ρ(V) is the set of vertex coordinates given by ρ. Then we define
C:= ⋃_ρ∈ℱρ(V),
which is the set of all vertex coordinates given by some realisation of compatible with λ. We can now define the algebraic (in fact, Galois) field extension 𝔼:= 𝕂(C). We then have the tower
ℚ⊆𝕂⊆𝔼.
The Galois group of is then defined as
() := (𝔼/𝕂).
This Galois group of a graph does not depend on the generic choice of λ, nor on the choice of vertices embedded at (0,0) and (1,0). The Galois group of a minimally rigid graph acts on its set of realisations in the following way: for a field automorphism σ∈(G) and a realisation ρ∈ℱ, we define
σ(ρ) := ( σ(ρ(v_1)), σ(ρ(v_2)),...,σ(ρ(v_n)) ),
that is, σ is applied to the coordinates of each vertex. Note that since we always fix the vertices v_1 = (0,0) and v_2 = (1,0), and these coordinates lie in the fixed field, they are invariant under the group action. In this sense, an element of the Galois group of a graph corresponds to a permutation of the realisations ℱ, and we will often abuse notation by regarding () as both a group of field automorphisms and as a subgroup of the permutation group S_|ℱ|.
§.§ Signed area and the Cayley-Menger determinant
Given an (oriented) triangle P_1P_2P_3 in the real plane, the signed area of P_1P_2P_3 is the area of the triangle, multiplied by +1 if the vertices are listed in counter-clockwise order, and -1 if they are listed in clockwise order. We denote the signed area of P_1P_2P_3 by a_P_1P_2P_3. This is given as a polynomial relation by
a_P_1P_2P_3 = 1/2( (x_2-x_3)(y_1-y_3) - (x_1-x_3)(y_2 - y_3) )
where P_i=(x_i,y_i).
This formula is taken as the definition of a_P_1P_2P_3 for complex points. Note that the rule above gives the relation
a_P_1P_2P_3 = -a_P_2P_1P_3.
Given three vertices v_1,v_2,v_3 of a minimally rigid graph , and a realisation ρ of , we define
a_v_1v_2v_3(ρ):=a_ρ(v_1)ρ(v_2)ρ(v_3).
The squared area of a triangle with edge lengths l_1,l_2,l_3 ∈ℂ can be calculated using the Cayley-Menger determinant, which gives the equation
-16A^2 =
| 0      l_1^2   l_2^2   1 |
| l_1^2  0       l_3^2   1 |
| l_2^2  l_3^2   0       1 |
| 1      1       1       0 | .
The squared area of a triangle therefore fulfills a degree two polynomial in the squared edge lengths.
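As a quick symbolic sanity check of the two formulas above, the following SymPy sketch (added for illustration, not part of the original text) verifies the Cayley–Menger relation against the signed-area formula; the pairing l_1 = |P_1P_2|, l_2 = |P_1P_3|, l_3 = |P_2P_3| is one consistent labelling of the edges and is our assumption.

```python
import sympy as sp

x1, y1, x2, y2, x3, y3 = sp.symbols('x1 y1 x2 y2 x3 y3')

# signed area of the oriented triangle P1 P2 P3, as defined above
A = sp.Rational(1, 2) * ((x2 - x3) * (y1 - y3) - (x1 - x3) * (y2 - y3))

# squared edge lengths: l1^2 = |P1P2|^2, l2^2 = |P1P3|^2, l3^2 = |P2P3|^2
l1sq = (x1 - x2)**2 + (y1 - y2)**2
l2sq = (x1 - x3)**2 + (y1 - y3)**2
l3sq = (x2 - x3)**2 + (y2 - y3)**2

cayley_menger = sp.Matrix([[0,    l1sq, l2sq, 1],
                           [l1sq, 0,    l3sq, 1],
                           [l2sq, l3sq, 0,    1],
                           [1,    1,    1,    0]])

# the Cayley-Menger relation: -16 A^2 equals the determinant above
assert sp.expand(cayley_menger.det() + 16 * A**2) == 0
```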
§.§ Type-1 graphs
Suppose ' is a minimally rigid graph on n-1 vertices, and is constructed from ' by a single Henneberg 1-step, which adds an n'th vertex at fixed distances to two vertices of '. It is clear that each realisation of ' splits into two realisations ρ_1, ρ_2 ∈ℱ of , depending on the choice of position for the last vertex. We define an equivalence relation on realisations of in the following way
ρ∼ρ' ⟺ ρ|_n-1 = ρ'|_n-1.
Here ρ|_n-1 denotes ρ restricted to the n-1 vertices of '. Each equivalence class has size two, and the number of equivalence classes is equal to the number of realisations of the graph '; call the set of these realisations ℱ'. We define a subgroup H ⊆ S_|ℱ| as follows:
H:= { h ∈ S_|ℱ| : ∀ρ, ρ' ∈ℱ, ρ∼ρ' ⟹ h(ρ) ∼ h(ρ') }.
Note that we are viewing elements of S_|ℱ| as acting on ℱ = {ρ_1,ρ_2,...,ρ_|ℱ|} itself. We see that H are those permutations of ℱ which respect the equivalence relation ∼. We claim that () is a subgroup of H. Indeed, suppose we take σ∈(), and two realisations ρ,ρ' ∈ℱ with ρ∼ρ'. Since they are equivalent, their restriction to the first n-1 vertices coincides. Then by the definition of the group action (<ref>), we have
σ(ρ)|_n-1 = (σ(ρ(v_1)), σ(ρ(v_2)),...,σ(ρ(v_n-1)))
= (σ(ρ'(v_1)), σ(ρ'(v_2)),...,σ(ρ'(v_n-1))) = σ(ρ')|_n-1
and therefore σ(ρ)|_n-1 = σ(ρ')|_n-1, as needed. Furthermore, there is a group homomorphism ψ :H → S_|ℱ'| which maps each permutation h ∈ H to the permutation in S_|ℱ'| given by the action of h on each equivalence class. Note that the equivalence classes correspond precisely to realisations of the smaller graph ', which we write as ℱ'={c_1,c_2,...,c_|ℱ'|}, with c_i = {ρ_i,ρ_i'}, and define
ψ(h)(c_i) := {h(ρ_i),h(ρ_i')} = c_j, for some j.
Note that if the permutation h ∈ H is actually a Galois group element σ∈(), then the definition of ψ can be equivalently written as
ψ(σ)(v_1,v_2,...,v_n-1) = (σ(v_1),σ(v_2),...,σ(v_n-1)).
We can now state our main result, firstly in its inductive (and more general) form.
For each minimally rigid graph which is constructed from a minimally rigid subgraph ' by a single Henneberg 1-step, we have
() ≅{ h ∈ H : ψ(h) ∈('), ∀ρ, ρ' ∈ℱ, a_ijn(ρ) = a_ijn(ρ')
⟹ a_ijn(hρ) = a_ijn(hρ')}.
Alternatively, if the graph is itself a type-1 graph, our main result can be stated in a direct form. Since is a type-1 graph, there exists a sequence m_1,m_2,...,m_n-2 of type-1 moves which construct from a single edge. Each move m_l consists of an ordered triple of vertices of , that is, m_l = (i_l,j_l,k_l), with k_l the newly constructed vertex and i_l,j_l the two previously constructed vertices which are then connected to k_l. With this notation, Theorem <ref> implies the following.
Let be a minimally rigid type-1 graph with n vertices. Then we have
() ≅{ g ∈ S_|ℱ| : ∀ l=1,…, n-2, ∀ρ, ρ' ∈ℱ, a_i_lj_ln_l(ρ) = a_i_lj_ln_l(ρ')
⟹ a_i_lj_ln_l(g ρ) = a_i_lj_ln_l(g ρ')}.
Proof that Theorem <ref> implies Theorem <ref>
Let us denote by G_1 and G_2 the corresponding groups given in Theorem <ref> and <ref> respectively, for some type-1 graph on n vertices.
We begin by showing that G_2 ⊆ G_1, which is done by induction - we assume that for graphs with up to n-1 vertices, this inclusion is satisfied. We wish to show that g ∈ H, and furthermore that ψ(g) ∈'= |_n-1. Take ρ, ρ' ∈ℱ such that ρ∼ρ', that is, ρ|_n-1 = ρ'|_n-1. Since is a type-1 graph, each realisation ρ of corresponds to a choice of sign for the signed area of the triangle (i_l,j_l,k_l) constructed at each stage - informally speaking, this is a choice of whether to flip the triangle up or down. Since we have ρ|_n-1 = ρ'|_n-1, each choice up to the n-1'th vertex must have been the same, that is, we have a_i_lj_ln_l(ρ) = a_i_lj_ln_l(ρ') for each l=1,...,n-3. Since g ∈ G_2, each of these equalities also holds for gρ and g ρ' - meaning that the same sign choices have been made in the construction of these two realisations, again for l=1,...,n-3. But then we have (gρ)|_n-1 = (gρ')|_n-1, and therefore gρ∼ gρ', proving that g ∈ H.
We now wish to prove that ψ(g) ∈('). Note that since we have already proved g ∈ H, ψ can indeed be applied to g. It is in this step that we use induction; let G_2' be the corresponding group for the smaller graph ', which by induction is contained in ('). It is therefore enough to prove that ψ(g) ∈ G_2', meaning that we need to prove that for each pair μ,μ' ∈ℱ', we have a_i_lj_ln_l(μ) = a_i_lj_ln_l(μ') ⟹ a_i_lj_ln_l(ψ(g) μ) = a_i_lj_ln_l(ψ(g) μ').
Since μ∈ℱ', there exists some ρ∈ℱ such that ρ|_n-1 = μ. Similarly, there is some ρ' ∈ℱ such that ρ'|_n-1 = μ'. In fact there are two such choices - it does not matter which we pick. We now have that ψ(g)(μ) = (gρ)|_n-1, and ψ(g)(μ') = (gρ')|_n-1. We then have, for each 1 ≤ l ≤ n-3, the chain of implications
a_i_lj_ln_l(μ) = a_i_lj_ln_l(μ') ⟹ a_i_lj_ln_l(ρ) = a_i_lj_ln_l(ρ')
⟹ a_i_lj_ln_l(gρ) = a_i_lj_ln_l(gρ')
⟹ a_i_lj_ln_l((gρ)|_n-1) = a_i_lj_ln_l((gρ')|_n-1)
⟹ a_i_lj_ln_l(ψ(g)(μ)) = a_i_lj_ln_l(ψ(g)(μ')),
as needed.
To show that () ≅ G_1 ⊆ G_2, we simply note that the conditions defining G_2 are given by polynomial relations which are defined over the base field 𝕂, with vertex coordinates as variables. Since the Galois group () must preserve polynomial relations (meaning that if f is a polynomial defined over 𝕂 in variables from 𝔼, then the image of any zero of f under an element of the Galois group of 𝔼 / 𝕂 must also be a zero of f), we see that G_1 ⊆ G_2.
§ PROOF OF THEOREM <REF>
In this section we prove Theorem <ref>.
We begin by defining the set
J = { h ∈ H : ψ(h) ∈('), ∀ρ, ρ' ∈ℱ, a_ijn(ρ) = a_ijn(ρ')
⟹ a_ijn(hρ) = a_ijn(hρ')}.
Our aim is to show that () = J. Note that J is a subgroup of H.
We have () ⊆ J.
For σ∈(), we have already seen that () ⊆ H, so that σ∈ H. Secondly, we want to show that ψ(σ) ∈('). For this we note that ψ(σ) is a permutation of the realisations ℱ' of the smaller graph, and as we have seen above, it is given by applying the field automorphism σ to each vertex coordinate. This permutation then corresponds to the field automorphism σ restricted to the intermediate field ⊆⊆ containing the vertex entries for only the smaller graph ', and is therefore an element of ('). Lastly, each σ∈ fulfils the condition of preserving the relation of equal signed area. Indeed, signed area can be given by a polynomial equation, and elements of the Galois group preserve polynomial relations.
In the next step we prove that |()| = |J|. We do this by making use of the map ψ:H → S_|ℱ'|. Indeed, we prove the following.
We have the following two equalities.
* ψ(()) = ψ(J).
* |() ∩(ψ)| =|J ∩(ψ)|.
Given Claim <ref>, an application of the first isomorphism theorem to the following two group homomorphisms given by restrictions of ψ,
ψ_J:J →ψ(J), ψ_():() →ψ(()),
yields |J| = |()|, which together with Claim <ref> finishes the proof.
§.§ Proof that |ψ(())| = |ψ(J)|
We begin by proving the first assertion in Lemma <ref>, namely that |ψ(())| = |ψ(J)|. Note that we have
ψ(()) ⊆ψ(J) ⊆ψ({h ∈ H : ψ(h) ∈(')}) ⊆(').
We show that ψ(()) = ('), closing the chain of inclusions above. In order to do this, we use the fact that for any tower of Galois extensions
⊆⊆,
a field automorphism in (/) can be extended to a field automorphism in (/ ). Suppose there exists a field automorphism σ' ∈(') such that ∄σ∈() such that ψ(σ) = σ'. However, as argued above, the map ψ corresponds to a restriction of the field automorphism σ∈() to the intermediate field given by the vertices of the smaller graph '. We then have a tower of Galois extensions ⊆⊆, and σ' ∈(/ ) which does not extend to a field automorphism σ∈(/ ), giving a contradiction. Therefore we have ψ(()) = ('). We then conclude that ψ(()) ⊆ψ(J) ⊆ψ(()), and therefore the sets are equal.
§.§ Proof that |() ∩(ψ)| =|J ∩(ψ)|
We now prove that |() ∩(ψ)| =|J ∩(ψ)|. In order to do this, we define the natural number k as the number of distinct square distances which occur between the vertices v_1 and v_2 among all realisations of '. That is,
k := | { ||ρ(v_1) - ρ(v_2)||^2 : ρ∈ℱ' }|.
Recall that v_1,v_2 are the base points of the Henneberg 1-step used to construct . Furthermore, we define λ_1,2(ρ) to be the distance between the vertices v_1 and v_2 in the realisation ρ of '. The integer k is therefore the number of distinct values of λ_1,2 found among all realisations of '. We apply this definition equally to realisations of the larger graph - that is, λ_1,2 applied to a realisation of still gives the squared distances between v_1 and v_2 in that realisation. We aim to show that both |() ∩(ψ)| and |J ∩(ψ)| have cardinality equal to 2^k.
§.§.§ Proof that |J ∩(ψ)|=2^k
We first show that |J ∩(ψ)|=2^k. In order to do this, we partition the realisations ℱ of in terms of the squared distance between v_1 and v_2, that is, two realisations ρ and ρ' are in the same part if λ_1,2(ρ) = λ_1,2(ρ'). We call this partition 𝒫_1, and note that is has k parts. We further refine this partition by splitting each part into two subsequent part, in terms of the signed area. That is, we define a partition 𝒫_2 of ℱ such that ρ and ρ' are in the same part iff λ_1,2(ρ) = λ_1,2(ρ') and a_v_1v_2v_n(ρ) = a_v_1v_2v_n(ρ'). We note that for two realisations ρ and ρ' with λ_1,2(ρ) = λ_1,2(ρ'), we have a_v_1v_2v_n(ρ) = ± a_v_1v_2v_n(ρ'), and therefore each part S ∈𝒫_1 contains precisely two parts, call them S^+ and S^-, of 𝒫_2, depending on the sign of the signed area. We analyse how ψ interacts with these partitions of ℱ.
We claim that an element α of J ∩(ψ) satisfies
ρ∈ S ∈𝒫_1 ⟹ α(ρ) ∈ S,
that is, α preserves elements of 𝒫_1. Indeed, since α∈(ψ), ψ(α) acts as the identity on ℱ'. In particular, we must have α(ρ) ∈{ρ,ρ'} where ρ'|_n-1 = ρ|_n-1. Since α(ρ)|_n-1 = ρ_n-1, we have λ_1,2(ρ) = λ_1,2(α(ρ)).
Secondly, we claim that for a part S = S^+ ⊔ S^- of 𝒫_1, either α(ρ) = ρ for all ρ∈ S, or α(S^+)= S^- and α(S^-) = S^+. In words, this means that α can do one of two things to S; it can fix everything, or swap every realisation ρ with its equivalent realisation under ∼. We emphasize that each equivalence class {ρ,ρ'} is contained in a single S ∈𝒫_1, and precisely one element of the class is in S^+, the other being in S^-. This is because equivalent realisations have signed areas which are of opposite sign.
To prove this claim, we take WLOG an equivalence class {ρ, ρ'}, with ρ∈ S^+. Assume that α(ρ) = ρ. Note that therefore α(ρ') = ρ' ∈ S^-. Since α∈ J, we have that
a_v_1v_2v_n(ρ) = a_v_1v_2v_n(ρ') ⟹ a_v_1v_2v_n(α(ρ)) = a_v_1v_2v_n(α(ρ')),
and therefore for any other realisation ρ_0 ∈ S^+, we must have a_v_1v_2v_n(ρ_0)= a_v_1v_2v_n(α(ρ_0)), by applying the above implication to ρ and ρ_0. Therefore α(ρ_0) = ρ_0. Since we can give the same argument for S^-, we conclude that for all ρ_0 ∈ S, α(ρ_0) = ρ_0. Using a similar argument for the case α(ρ) = ρ' yields the second case, where each realisation is swapped with the other element in its equivalence class, and so α(S^+)= S^- and α(S^-) = S^+, as claimed.
Therefore, in order to define α, for each part S ∈𝒫_1 we need only decide whether α swaps every realisation in S with its equivalent realisation under ∼, or if α pointwise fixes all of S. Since we get two choices for each part S ∈𝒫_1, we have |J ∩(ψ)| ≤ 2^k. We now have to show that |J ∩(ψ)| ≥ 2^k. To do this, we define 2^k permutations β:ℱ→ℱ as follows. Let I ⊆{1,2,...,k}, and let I^c = {1,2,...,k}∖ I. Note that there are 2^k choices for I. We then define the permutation
β_I(ρ) = ρ if ρ∈ S_i, i ∈ I
ρ' if ρ∈ S_i, i ∈ I^c.
Here ρ is in the equivalence class {ρ,ρ'}. Note that different choices of I give different permutations, and therefore the set of all β_I is a set of 2^k distinct permutations. We prove that for all such β_I, we have β_I ∈ J ∩(ψ).
To do this, we first note that β_I ∈(ψ). Indeed, by definition, β_I(ρ) always remains within the class {ρ,ρ'}, and β_I is therefore in (ψ).
We now check whether β_I ∈ J. Note that since (ψ) ⊆ H, we immediately have β_I ∈ H. Furthermore, since ψ(β_I) is the identity permutation, we also have ψ(β_I) ∈('). The final property to check is preservation of equality of signed area. Suppose we take ρ_1 and ρ_2 with a_v_1v_2v_n(ρ_1) = a_v_1v_2v_n(ρ_2). Since they have the same signed area, there exists some i ∈{1,...,k} such that ρ_1,ρ_2 ∈ S_i. Note that here we have used the algebraic independence of the squared distances, which implies that two realisations with the same signed area a_v_1v_2v_n must have the same squared distance λ_1,2. Since these realisations lie in the same S_i, β acts on them in the same way, that is, either β(ρ_1) =ρ_1 and β(ρ_2) = ρ_2, or β(ρ_1) =ρ_1' and β(ρ_2) = ρ_2'. In either case the signed areas are the same after β; in the first case the signed areas do not change, and in the second they are negated. Therefore, β∈ J ∩(ψ), and so we conclude that |J ∩(ψ)| = 2^k.
§.§.§ Proof that |() ∩(ψ)| = 2^k
We now prove that |() ∩(ψ)| = 2^k. Let σ∈() be such that ψ(σ) is the identity permutation. We have already seen that ψ(()) = ('), so that considered as a field automorphism, ψ(σ) is the identity automorphism from 𝔽 to 𝔽, where 𝔽 denotes the intermediate field generated over 𝕂 by the vertex coordinates of all realisations of '. Therefore, σ is a field automorphism 𝔼→𝔼 which fixes the field 𝔽, that is, () ∩(ψ) = (𝔼 / 𝔽) (note that 𝔼 / 𝔽 is a Galois extension since 𝔼 / 𝕂 is a Galois extension). Since |(𝔼 / 𝔽)| = [𝔼 : 𝔽], it is enough to give the degree of this field extension.
The field 𝔼 is generated from 𝔽 by the vertex coordinates of the last vertex v_n, through all realisations of . For a realisation ρ∈ℱ = {ρ_1,...,ρ_|ℱ|}, let v_n(ρ) be the vertex coordinates of v_n in that realisation. We can write the tower of extensions
𝔽 =: 𝔽_0 ⊆𝔽_1 ⊆𝔽_2 ⊆ ... ⊆𝔽_|ℱ| = 𝔼,
where 𝔽_i+1 := 𝔽_i(v_n(ρ_i+1)). We claim that for all i=0,1,...,|ℱ|-1, we have [𝔽_i+1 : 𝔽_i] ≤ 2. To prove this, we first show that 𝔽_i+1⊆𝔽_i(a_v_1v_2v_n(ρ_i+1)), and that a_v_1v_2v_n(ρ_i+1) gives an extension of 𝔽_i of degree at most 2. To prove that 𝔽_i+1⊆𝔽_i(a_v_1v_2v_n(ρ_i+1)), we show that both vertex coordinates of v_n(ρ_i+1) can be expressed as a polynomial in terms of Δ := a_v_1v_2v_n(ρ_i+1). Indeed, we have the three equations
Δ = 1/2(x_1y_2 - x_1y_n + x_2y_n - x_2y_1 + x_ny_1 - x_ny_2)
λ_1,n = (x_1-x_n)^2 + (y_1-y_n)^2
λ_2,n = (x_2-x_n)^2 + (y_2-y_n)^2.
The last two equations allow us to express x_n linearly in y_n, and therefore from the first equation, both x_n and y_n can be expressed in the field 𝔽_i(Δ). Secondly, from the Cayley-Menger determinant, Δ^2 can be written as a polynomial in the squared edge lengths between the vertices v_1,v_2,v_n, which are all in the base field. Therefore, 𝔽_i(Δ) is an extension of degree at most 2.
With this in hand, we define a subset I ⊆{0,1,...,|ℱ|-1} of indices where we have [𝔽_i+1:𝔽_i] = 2. By the tower law, we then have [𝔼:𝔽] = 2^|I|. We claim that for distinct i,j ∈ I, we must have λ_1,2(ρ_i+1) ≠λ_1,2(ρ_j+1).
To prove this, we note that from the previous argument, we must have 𝔽_i+1 = 𝔽_i(Δ_i+1), where we define Δ_i+1 := a_v_1v_2v_n(ρ_i+1). Suppose that for some distinct i,j ∈ I, we have λ_1,2(ρ_i+1) = λ_1,2(ρ_j+1). Note that this implies Δ_i+1 = ±Δ_j+1, since all edge lengths of the triangle v_1v_2v_n are the same. But then the two field extensions 𝔽_i+1/𝔽_i and 𝔽_j+1/𝔽_j cannot both be degree two, since they are both generated by the same element, say Δ_i+1, and there is an inclusion of base fields. We therefore have that [𝔼 : 𝔽] ≤ 2^k. Note that there is a one-to-one correspondence between all distances λ_1,2 and all possible areas of the triangle v_1v_2v_n. Let us call the set of all such areas A = {α_1,α_2,...,α_k}. Since each extension above is generated by such an area, we have
𝔼 = 𝔽(A).
In order to prove that [𝔼 :𝔽] = 2^k, we give the following lemma.
Let Ł be a field not of characteristic two, and let a_1,…, a_k ∈Ł. Let α_i^2=a_i, where α_i lies in a fixed algebraic closure of Ł. If we have that for each subset I ⊆{1,…, k }, the product
∏_i∈ Ia_i,
is not a square in Ł, then we have
[ Ł(α_1,…,α_k): Ł]=2^k.
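Before turning to the proof, a concrete instance of the lemma over Ł = ℚ can be checked mechanically: with a_1, a_2, a_3 = 2, 3, 5 no subset product is a rational square, so the predicted degree is 2^3 = 8. The following SymPy sketch (added for illustration, not part of the original text) confirms this.

```python
from sympy import sqrt, symbols, degree, minimal_polynomial

x = symbols('x')
# gamma generates a subfield of Q(sqrt(2), sqrt(3), sqrt(5)); since its minimal
# polynomial has degree 8 and the full field has degree at most 8, the extension
# degree is exactly 8 = 2^3, as the lemma predicts.
gamma = sqrt(2) + sqrt(3) + sqrt(5)
assert degree(minimal_polynomial(gamma, x), x) == 8
```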
To prove this lemma, we require the following auxiliary lemma.
Let Ł be a field not of characteristic two, and let a_1,a_2,...,a_k ∈Ł, such that a_i = α_i^2 for some α_i in an algebraic closure of Ł. Suppose that [Ł(α_1,...,α_k):Ł] = 2^k, and that γ∈Ł(α_1,...,α_k)∖Ł is an element such that γ^2 ∈Ł. Then there exists a set I ⊆{1,...,k} such that
γ∏_i ∈ Iα_i ∈Ł.
We begin with the base case k=1, in which case we have a single element a ∈Ł such that a = α^2, and [Ł(α):Ł]=2. By assumption, we are given an element γ∈Ł(α) ∖Ł with γ^2 ∈Ł. We can then write
γ = c_1 + c_2αγ^2 = c_1^2 + 2c_1c_2α + c_2^2a_1
for c_1,c_2 ∈Ł. Since γ^2 ∈Ł, we have c_1c_2 = 0. We cannot have c_2=0 since this implies γ∈Ł. If c_1=0, then we have γ = c_2α, and therefore the product γα = c_2 a ∈Ł, as needed.
We now go to the inductive step. We assume the lemma is true for k ≤ n. We have the elements a_1,a_2,...,a_n+1∈Ł, and their roots α_i in the algebraic closure. We have that [Ł(α_1,...,α_n+1):Ł]=2^n+1, and we are given an element γ∈Ł(α_1,...,α_n+1) ∖Ł such that γ^2 ∈Ł. By considering the extension
Ł(α_1,...,α_n+1) / Ł(α_1,...,α_n),
there exist c_1,c_2 ∈Ł(α_1,...,α_n) such that γ = c_1 + c_2α_n+1. Since γ^2 ∈Ł, and γ^2 = c_1^2 + 2c_1c_2 α_n+1 + c_2^2a_n+1, we must have c_1c_2=0. First suppose c_1=0, in which case γ = c_2α_n+1, and so γ^2 = c_2^2 a_n+1∈Ł, we find that c_2^2 ∈Ł. If c_2 ∉Ł, then we apply the induction hypothesis to Ł(α_1,...,α_n) with the element c_2, implying the existence of a set I' ⊆{1,...,n} such that c_2 ∏_i ∈ I'α_i ∈Ł. Define I = I' ∪{n+1}. We then have
γ∏_i ∈ Iα_i = c_2α_n+1∏_i ∈ Iα_i = c_2 a_n+1∏_i ∈ I'α_i ∈Ł,
as needed. If, on the other hand, c_2 ∈Ł, then we have γ =c_2α_n+1, and so the product γα_n+1 = c_2 a_n+1∈Ł, and so the set I ={n+1} satisfies the lemma.
In the second case we have c_2=0, and therefore γ = c_1. In this case we are again done by the induction hypothesis applied to the element c_1 ∈Ł(α_1,...,α_n) ∖Ł. Note that c_1 ∉Ł, as we would then have γ = c_1 ∈Ł, which contradicts our assumptions.
We prove the contrapositive, that is, if
[Ł(α_1,…,α_k): Ł]<2^k,
then there exists a subset I ⊆{1,…, k } such that Π_i∈ Ia_i is a square in Ł.
The proof is by induction. The base case k=1 is trivial and corresponds to the statement that [Ł(α_1):Ł] =1, so that α_1∈Ł, and so a_1 = α_1^2 is a square in Ł. Suppose that the lemma holds for k≤ n. We show that it also holds for k=n+1. Consider the field extension
Ł(α_1,…,α_n,α_n+1) / Ł(α_1,…,α_n).
We consider the degree of this extension. We have
[Ł(α_1,…,α_n,α_n+1): Ł(α_1,…,α_n)][ Ł(α_1,…,α_n):Ł]<2^n+1.
If we were to have [ Ł(α_1,…,α_n):Ł] < 2^n, then the proof would be complete by the induction hypothesis, as this implies the existence of a subset I ⊆{1,2,...,n}⊆{1,2,...,n+1} with the property that Π_i ∈ I a_i is a square in Ł. Therefore, we can assume [ Ł(α_1,…,α_n):Ł] = 2^n, so that we must have
[Ł(α_1,…,α_n,α_n+1): Ł(α_1,…,α_n)] =1.
If this is the case, then clearly we have α_n+1∈Ł(α_1,…,α_n). If we have α_n+1∈Ł, then we would have that a_n+1 = α_n+1^2 is a square in Ł and we are done by taking I={n+1}. Therefore, we may apply Lemma <ref> to this field, with the choice γ= α_n+1. This implies the existence of a set I' ⊆{1,...,n} such that we have
α_n+1∏_i ∈ I'α_i ∈Ł.
Defining the set I := I' ∪{n+1}, we now have that
∏_i ∈ I a_i = ( α_n+1∏_i ∈ I'α_i)^2
showing that this product is a square in Ł, as needed.
We now wish to apply Lemma <ref> to the field 𝔽, with the elements a_i = α_i^2 being the squares of the areas α_i from the set A. In order to show that [𝔼: 𝔽] = 2^k, we need to show that for each subset I ⊆{1,...,k}, the product
∏_i ∈ I a_i
is not a square in 𝔽. Using the Cayley-Menger formula, each a_i can be written as
a_i = λ_1,2,i^2 - 2λ_1,2,i(λ_1,n + λ_2,n) + (λ_2,n - λ_1,n)^2,
where λ_1,2,i is the distance between v_1 and v_2 in a realisation with the square area of v_1v_2v_n equal to a_i. Note that λ_1,n and λ_2,n were fixed by the choice of Λ at the beginning of the proof. We consider the right hand side of (<ref>) as a polynomial in two variables X,Y, corresponding to λ_1,n and λ_2,n. We define the field 𝕂' := ℚ ( λ_i,j : {i,j}∈ E(')). Note that since Λ is algebraically independent over ℚ, the square lengths λ_1,n and λ_2,n are algebraically independent over 𝕂'. We further define a field 𝔼':= 𝕂'({ vertex coordinates of '}). As 𝔼' is an algebraic extension of 𝕂', λ_1,n and λ_2,n are algebraically independent over 𝔼'. We also note that 𝔽 = 𝔼'(λ_1,n,λ_2,n), so that in fact
𝔽≅Frac(𝔼'[X,Y]).
We denote by ϕ the isomorphism between 𝔽 and the fraction field of the polynomials 𝔼'[X,Y]. Recall that we aim to show that the product ∏_i ∈ I a_i
is not a square in 𝔽. Suppose that it were a square; that is, there exists some s ∈𝔽 such that
∏_i ∈ I a_i = s^2.
By (<ref>), this implies
∏_i ∈ I(λ_1,2,i^2 - 2λ_1,2,i(λ_1,n + λ_2,n) + (λ_2,n - λ_1,n)^2)= s^2.
Applying ϕ to each side then gives
∏_i ∈ I(λ_1,2,i^2 - 2λ_1,2,i(X + Y) + (Y - X)^2)= ϕ(s)^2.
Note that the left hand side of (<ref>) is a product of distinct polynomials. Since ϕ(s)^2 is itself a polynomial, we must have that ϕ(s) is a polynomial. Therefore, ∏_i ∈ I(λ_1,2,i^2 - 2λ_1,2,i(X + Y) + (Y - X)^2) must be a perfect square in the polynomial ring 𝔼'[X,Y]. For this to occur, at least one of the factors in (<ref>) must be reducible. We claim that this does not happen; that is, the polynomial λ_1,2,i^2 - 2λ_1,2,i(X + Y) + (Y - X)^2 is irreducible in 𝔼'[X,Y] for all i. We perform the simple change of variable X+Y → Z and X-Y → W, giving λ_1,2,i^2 - 2λ_1,2,iZ + W^2. If this polynomial factorises, it must factor into linear pieces. If we write
λ_1,2,i^2 - 2λ_1,2,iZ + W^2 = (W + c_1Z + c_2)(W + c_3Z + c_4),
we see that we must have c_1c_3=0, so without loss of generality assume that c_1 = 0. As no term WZ appears, we must then have c_3 =0. But now no term of Z is present, giving a contradiction. Therefore, the product ∏_i ∈ I a_i is not a square in 𝔽, as needed. Applying Lemma <ref> then gives
[:] = 2^k.
§ EXAMPLE
In this section, we will make use of Theorem <ref> to determine the Galois group of the following minimally rigid graph .
It is not hard to see that the Galois group of the minimally rigid subgraph ' ⊂ corresponding to vertices 1,2,3,4 is isomorphic to _2^2. We will show that Theorem <ref> implies that
() = D_4 ×_2.
The graph has eight realisations, which we label as ℱ={ρ_1, …, ρ_8}. Note that for any realisation of , we can find another realisation ρ' which is given by reflecting only vertex 5 in the line through 3 and 4, and that this pair of realisations form an equivalence class under the relation ∼. Without loss of generality, we assume that the equivalence classes are {ρ_i, ρ_i+1} for i=1,3,5,7, where ρ_i is chosen to be the realisation such that the signed area a_345(ρ_i) is positive. Theorem <ref> then tells us the following.
() ≅{ h ∈ H : ψ(h) ∈('), ∀ρ, ρ' ∈ℱ, a_345(ρ) = a_345(ρ')
⟹ a_345(hρ) = a_345(hρ')},
where we recall that
H:= { h ∈ S_8 : ∀ρ, ρ' ∈ℱ, ρ∼ρ' ⟹ h(ρ) ∼ h(ρ') },
and that ψ(h) is the induced permutation in S_4 given by h acting on the equivalence classes {ρ_i,ρ_i+1} for i=1,3,5,7. Each equivalence class corresponds to a certain realisation of the subgraph ', and we choose our labelling in the following way:
* ρ_1 and ρ_2 correspond to the realisation of ' where both triangles 123 and 124 have positive orientation. Call this class A.
* ρ_3 and ρ_4 correspond to the realisation of ' where the triangle 123 has positive orientation, and 124 has negative orientation. Call this class B.
* ρ_5 and ρ_6 correspond to the realisation of ' where the triangle 123 has negative orientation, and 124 has positive orientation. Call this class C.
* ρ_7 and ρ_8 correspond to the realisation of ' where both triangles 123 and 124 have negative orientation. Call this class D.
It is now simple to write down which elements of S_4 are in the Galois group ('); we get the four permutations (in cycle notation)
(') = {(A)(B)(C)(D), (AB)(CD), (AC)(BD),(AD)(BC)}.
In addition to the condition ψ(h) ∈('), we also need h to preserve the equivalence classes given by the signed area a_345. These equivalence classes are the pairs
{ρ_1, ρ_7}, {ρ_2, ρ_8}, {ρ_3, ρ_5}, {ρ_4, ρ_6}.
Using Theorem <ref>, one can verify that the following elements of S_8 belong to ().
h_1=(1423)(5768), h_2=(34)(56), h_3=(18)(27)(36)(45).
Moreover, they fulfil the following equalities
h_1^4=h_2^2=h_3^2=e, h_2h_1h_2=h_1^-1, h_3h_1=h_1h_3, h_2h_3=h_3h_2.
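These relations, and the order of the group they generate, can also be verified mechanically. The following SymPy sketch (added for illustration; the realisations ρ_1,…,ρ_8 are renumbered 0,…,7 because SymPy permutations are 0-indexed) checks the relations and confirms that the generated group has order 16.

```python
from sympy.combinatorics import Permutation, PermutationGroup

h1 = Permutation([[0, 3, 1, 2], [4, 6, 5, 7]])       # (1423)(5768)
h2 = Permutation([[2, 3], [4, 5]], size=8)           # (34)(56)
h3 = Permutation([[0, 7], [1, 6], [2, 5], [3, 4]])   # (18)(27)(36)(45)

assert h1.order() == 4 and h2.order() == 2 and h3.order() == 2
assert h2 * h1 * h2 == h1**3                         # dihedral relation: h2 h1 h2 = h1^(-1)
assert h1 * h3 == h3 * h1 and h2 * h3 == h3 * h2     # h3 commutes with h1 and h2

G = PermutationGroup([h1, h2, h3])
assert G.order() == 16                               # |D_4 x Z_2| = 16
```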
This shows that the direct product D_4 ×_2, which is isomorphic to the group ⟨ h_1, h_2,h_3⟩, is a subgroup of (). On the other hand, we have |()|=16. Indeed, consider the following tower of fields
𝕂⊆𝔽⊆𝔼,
where 𝕂 is ℚ extended by the squared edge lengths of , 𝔽 is 𝕂 extended by all vertex coordinates of realisations of the subgraph ', and 𝔼 extends 𝔽 by the vertex coordinates of all realisations of (in particular, by the coordinates of the final vertex 5). We therefore have (')=()/(𝔼/𝔽). As we have seen in Section <ref>, the degree of 𝔼 over 𝔽 is 2^k, where k is the number of distinct distances between two non-edge vertices of ; in our example, k=2. Thus, |()|=|(')||(𝔼/𝔽)| = 16, and hence
()≅ D_4 ×_2.
The Galois group gives restrictions on the possible number of real realizations for real labellings. In this case, the field 𝔼 is a
subfield of ℂ. Since 𝔼 is a subfield of ℂ closed under all automorphisms that fix 𝕂, the restriction of complex conjugation
to 𝔼 is a field automorphism of 𝔼 fixing 𝕂, that is, an element ϵ of the Galois group. Note that ϵ^2 is the
identity. A compatible realization is real if and only if it is fixed by ϵ. If k is the number of real realizations, then
there must exist an element in the Galois group of order two or one that fixes exactly k elements.
In the example above, any involution in () has either no fixed points at all or exactly four fixed points. Hence the number
of real solutions for a generic real labelling of is either zero, four, or eight.
§ ACKNOWLEDGMENT
The first listed author was supported by the Austrian Science Fund FWF Project P33003. We thank Niels Lubbes and Matteo Gallet for helpful conversations.
amsplain
|
http://arxiv.org/abs/2306.03533v1
|
20230606093120
|
Deciding minimal distinguishing DFAs is NP-complete
|
[
"Jan Martens"
] |
cs.FL
|
[
"cs.FL",
"F.4.3"
] |
Deciding minimal distinguishing DFAs is NP-complete
Jan Martens
July 31, 2023
====================================================================================================================================
In this paper, we present a proof of the NP-completeness of computing the
smallest Deterministic Finite Automaton (DFA) that distinguishes two given
regular languages as DFAs. A distinguishing DFA is an automaton that
recognizes a language which is a subset of exactly one of the given
languages. We establish the NP-hardness of this decision problem by
providing a reduction from the Boolean Satisfiability Problem (SAT) to
deciding the existence of a distinguishing automaton of a specific size.
§ INTRODUCTION
We consider the problem of automatically explaining the inequivalence of
Deterministic Finite Automata (DFAs). In particular, we are interested in short
witnesses for the inequivalence. A straightforward approach to explain the
inequivalence of two DFAs would be to provide a distinguishing word, i.e. a word
that is accepted by one of the automata but not the other.
This method of finding minimal distinguishing words is well understood and
decidable in polynomial time <cit.>. An efficient
implementation is given in <cit.> that has the same runtime
complexity as the best known algorithm that decides language equivalence, known
as Hopcroft's minimization <cit.>.
In this work we are motivated by smaller witnesses of inequivalence in the form
of regular languages. These languages might contain invariants that provide a
shorter and more intuitive explanation. For example, consider the DFAs
𝒜 and ℬ shown in Figure <ref>. The shortest
distinguishing word for these DFAs is a^7. Indeed, we confirm a^7∈(𝒜) but a^7∉(ℬ). A different explanation
for the inequivalence of 𝒜 and ℬ could be: every odd
length sequence of a's is accepted by 𝒜 and not by ℬ.
We call a DFA a distinguishing automaton for two DFAs if the language
recognized is a subset of exactly one of the two DFAs. In the example from
Figure <ref>, we see that our distinguishing witness with invariant is
equivalent to a distinguishing automaton with only two states, i.e. the DFA
A_odd such that (A_odd) = {a^2i + 1| i∈}. An
automaton recognizing only the minimal distinguishing word a^7 would contain
at least eight states.
In the setting of model based development it can be key to understand the
differences between state based systems. This led us to study the synthesis of
distinguishing DFAs, and leads naturally to following decision problem.
k-DFA-DIST: Let A_1 and A_2 be DFAs such that (A_1) ≠(A_2), and k∈ a number. Decide if there is a DFA A_dist with
at most k states such that:
(A_dist) ⊆(A_1) and (A_dist) ⊈(A_2).
The contribution of this work is that we prove the intractability of
k-DFA-DIST.
Deciding k-DFA-DIST is NP-complete.
The reduction from CNF-SAT that proves the NP-completeness is new to our
knowledge. We believe this reduction of CNF-SAT formulas to regular languages is
an intuitive method of showing DFA problems NP-complete.
There are some decision problems on DFAs that show some similarities, but are
different from the work here. For instance the early work of
Gold <cit.> and Pfleeger <cit.> in which it is shown
that learning minimal DFAs from (partial) observations is NP-complete. In the
line of this work by Gold, so-called separating languages are widely
studied in the literature <cit.>.
Here the separating problem is, given languages L_1 and L_2, to find a
separating language L_sep such that L_sep⊆ L_1 and L_sep∩ L_2 = ∅. Although this resembles our distinguishing problem, a
direct relation is not trivial.
Another influential work is due to Kozen <cit.>. This work
includes a proof of NP-hardness of deciding whether the intersection of a finite
number of DFAs is empty.
§ NOTATION & BACKGROUND
For two natural numbers i,j∈ we write [i,j] = i, i+1, … , j as
the closed interval from i to j. Given a finite alphabet Σ, a
sequence of elements of Σ is called a word. We define Σ^i as the
set of all words over Σ of length i, and Σ^* = ⋃_i∈Σ^i for all words over Σ. Given words u,v∈Σ^*, we write
u· v and uv for word concatenation. Additionally, given a number
i∈ and a word u∈Σ^* we write u^i for the concatenation of
i times the word u.
A Deterministic Finite Automata (DFA) A= (Q, Σ, δ, q_0, F) is a
five-tuple consisting of:
* Q a finite set of states,
* Σ a finite set of symbols called the alphabet,
* δ: Q ×Σ→ Q the transition function,
* q_0 ∈ Q the initial state, and
* F ⊆ Q the set of final states.
The transition function δ extends naturally to a transition function for
words δ^*: Q ×Σ^* → Q. This is done inductively as
follows:
δ^*(q,ϵ) = q
δ^*(q, aw) = δ^*(δ(q,a), w).
The language recognized by a DFA A = (Q, Σ, δ, q_0, F), is denoted
by (A), and consists of all words w∈Σ^* such that
δ^*(q_0, w) ∈ F.
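For illustration, the definitions above translate directly into code. The following small Python sketch (added by us; the class and attribute names are our own) implements the extended transition function and acceptance, and encodes the two-state automaton A_odd from the introduction.

```python
class DFA:
    """A DFA (Q, Sigma, delta, q0, F); delta is a dict mapping (state, symbol) -> state."""
    def __init__(self, states, alphabet, delta, q0, final):
        self.states, self.alphabet = states, alphabet
        self.delta, self.q0, self.final = delta, q0, final

    def run(self, word):
        """The extended transition function delta*(q0, word)."""
        q = self.q0
        for a in word:
            q = self.delta[(q, a)]
        return q

    def accepts(self, word):
        return self.run(word) in self.final

# the two-state automaton A_odd from the introduction: it accepts exactly the odd-length words a^(2i+1)
A_odd = DFA(states={0, 1}, alphabet={'a'},
            delta={(0, 'a'): 1, (1, 'a'): 0},
            q0=0, final={1})
assert A_odd.accepts('a' * 7) and not A_odd.accepts('a' * 4)
```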
The Myhill-Nerode theorem is a useful tool to establish the number of
states necessary to recognize a language. It is based on the equivalence
relation relating words that have the exact same accepting extensions.
Let x,y∈Σ^* be words and L⊆Σ^* a language, then x
≡_L y if and only if for all z∈Σ^* it holds that xz∈ L
⟺ yz ∈ L.
(Myhill-Nerode <cit.>) Let L
⊆Σ^* be a language, then L is regular if and only if the
relation ≡_L has a finite number of equivalence classes.
A more specific corollary of the theorem relates the number of equivalence
classes of ≡_L to the smallest number of states a DFA needs in order to
recognize L.
Let L be a regular language over an alphabet Σ, then the smallest
DFA A that recognizes L has k states where k is the number of
equivalence classes of the relation ≡_L.
§ REDUCTION
Before we introduce the reduction we define some notation in which we encode
truth values of propositions. In the reduction we represent truth assignments as
words over the Boolean alphabet = {, }. Given a set of
propositional variables = {p_1, …, p_k}, a truth assignment ρ:
→ is represented by the word a_1 … a_k∈^k, where a_i
= ρ(p_i) for every i∈ [1,k]. The set = ^k defines all words
that represent truth assignments.
Now we are ready to introduce our reduction from CNF-SAT in order to prove
Theorem <ref>. Let ϕ = C_1 ∧…∧ C_n be a CNF
formula over the propositional variables = {p_1, …, p_k}, we
define two regular languages over the alphabet Σ = ∪{♯}.
The first language L^-_ϕ⊆Σ^* is the finite set of at most
n concatenated truth assignments separated by a ♯, i.e.
L^-_ϕ = {w_1♯… w_j♯
| j ∈ [1,n] and w_1, … ,w_j∈}.
The second language L^+_ϕ⊆Σ^* is a superset of
L^-_ϕ. In addition to all the word of L^-_ϕ, the language
L^+_ϕ contains all words that have as prefix n truth assignments w_1,
… , w_n ∈ that consecutively satisfy all clauses C_1, …,
C_n, more precisely that is,
L^+_ϕ = L^-_ϕ∪{w_1 ♯⋯ w_n♯ w | w ∈Σ^* , w_i∈ and w_i satisfies C_i for all i∈[1,n]}.
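To make the encoding concrete, the following Python sketch (added for illustration) decides membership in L^-_ϕ and L^+_ϕ. Here the two Boolean symbols are encoded as the characters '1' and '0', the separator ♯ as '#', and a clause as a list of (variable index, required value) pairs; all of these encoding choices are ours and are not part of the original construction.

```python
def satisfies(assignment, clause):
    """assignment: a string over {'0','1'} of length k; clause: a list of
    (variable index, required value) pairs, e.g. (2, '1') for the literal p_3."""
    return any(assignment[i] == v for i, v in clause)

def in_L_minus(word, n, k):
    """Words of L^-: between 1 and n assignment blocks of length k, each followed by '#'."""
    parts = word.split('#')
    if parts[-1] != '':                       # the word must end with '#'
        return False
    blocks = parts[:-1]
    return 1 <= len(blocks) <= n and all(
        len(b) == k and set(b) <= {'0', '1'} for b in blocks)

def in_L_plus(word, clauses, k):
    """Words of L^+: either in L^-, or starting with n blocks satisfying C_1,...,C_n in order."""
    n = len(clauses)
    if in_L_minus(word, n, k):
        return True
    pos = 0
    for clause in clauses:                    # check the prefix w_1 # w_2 # ... w_n #
        block = word[pos:pos + k]
        if len(block) != k or not set(block) <= {'0', '1'}:
            return False
        if word[pos + k:pos + k + 1] != '#' or not satisfies(block, clause):
            return False
        pos += k + 1
    return True

# phi = (p1 or not p2) and (p2), so k = 2 and the assignment "11" satisfies both clauses
clauses = [[(0, '1'), (1, '0')], [(1, '1')]]
assert in_L_plus('11#11#11#', clauses, k=2) and not in_L_minus('11#11#11#', n=2, k=2)
assert in_L_minus('10#01#', n=2, k=2)
```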
The languages L^-_ϕ and L^+_ϕ are regular, and hence there are
automata that recognize these languages. In particular there are automata
recognizing these languages that are polynomial in size. One way of observing
this fact is by inspecting the number of Myhill-Nerode equivalence classes of
L^+_ϕ and L^-_ϕ.
Given a CNF formula ϕ, the languages L_ϕ^+ and L_ϕ^- are
recognizable by an automaton that is polynomial in the size of ϕ.
The next lemma proves the key fact of our reduction. A truth assignment that
satisfies a CNF formula ϕ, repeated as a recurring pattern, gives rise to a small distinguishing
automaton. Conversely, a distinguishing automaton smaller than a certain size
necessarily implies the existence of a satisfying truth assignment for ϕ.
Let ϕ = C_1 ∧…∧ C_n be
a CNF formula over k propositional letters = {p_1, …, p_k}.
Then ϕ is satisfiable if and only if there is a DFA A_dist with at
most k+2 states such that (A_dist) ⊆ L^+_ϕ and
(A_dist) ⊈L^-_ϕ.
We prove both directions of the implication separately.
(⇒) Assume ϕ is satisfiable, then there is a satisfying truth
assignment ρ that is mapped to the word w_ρ = ρ(p_1)…ρ(p_k)
∈. We define the language L_dist = {(w_ρ·♯)^i |
i ∈}, and show that L_dist witnesses this implication.
First we show that L_dist⊆ L^+_ϕ. Assume i∈, if
i≤ n then by definition (w_ρ·♯)^i ∈ L_ϕ^- and hence
also in (w_ρ·♯)^i ∈ L_ϕ^+. If i > n, since ρ is a
satisfying assignment, it holds for any w'∈Σ^* that (w_ρ·♯)^n w' ∈ L^+_ϕ, and thus also (w_ρ·♯)^n (w_ρ·♯)^i-n∈ L^+_ϕ. By covering both cases this means L_dist⊆ L^+_ϕ.
Next, we observe that (w_ρ·♯)^n+1∉L^-_ϕ, and thus
L_dist⊈L^-_ϕ. Hence, since L_dist⊆ L^+_ϕ
any DFA that recognizes L_dist is a distinguishing automaton.
The minimal DFA A_dist such that (A_dist) = L_dist contains one
loop with k+1 states containing all positions of the word w_ρ·♯
and a sink state to reject all other words. Thus, if ϕ is satisfiable we
can construct A_dist with k+2 states that distinguishes L^+_ϕ and
L^-_ϕ, which was to be shown.
(⇐) We assume A_dist is a DFA with at most k+2 states
such that for the language accepted L̂ = (A_dist) it holds that
L̂⊆ L_ϕ^+ and L̂⊈L_ϕ^-. We show
that this means ϕ is satisfiable.
Since L̂∖ L_ϕ^- ≠∅ and L̂⊆
L_ϕ^+ there is a word w∈ L^+_ϕ∖ L^-_ϕ accepted by
A_dist. By definition w is of shape w = w_1 ♯… w_n♯ w'
where w'∈Σ^+ and w_1,…, w_n∈ and for every i∈
[1,n] the word w_i represents a satisfying truth assignment for the clause
C_i. Next we show that w_1 represents a satisfying truth assignment for
ϕ by counting the number of equivalence classes of ≡_L̂ for
the prefixes of w_1·♯, together with the postfix w_post = w_2
♯… w_n♯ w' that witnesses an accepting postfix for w_1♯.
We define the set U as the set containing all prefixes of w_1=a_1… a_k,
i.e.
U = {ϵ}∪{a_1… a_j | j∈ [1,k]}.
If v,u ∈ U and v≠ u then v≢_L̂ u, since there is a
σ∈Σ^* such that vσ = w and w∈L̂ and uσ∉L̂. This means there are |U| = k+1 distinct classes of
≡_L̂. Lastly, since ♯ z ∉L̂ for any
z∈Σ^* we can also conclude that ♯≢_L̂ u for all
u∈ U.
Since we assumed that A_dist has at most k+2 states, by
Corollary <ref> there are at most k+2 equivalence classes of
≡_L̂. Since trivially w_1♯≢_L̂♯,
by the pigeonhole principle there is a prefix u ∈ U such that w_1
♯≡_L̂ u.
It can not be the case that u = a_1 … a_i for some i∈ [1,k], since
a_1…a_i ·a_i+1 …a_k ♯w_post ∈L̂
w_1♯ ·a_i+1…a_k ♯w_post ∉L̂.
By eliminating all alternatives we conclude u=ϵ. Using this equivalence
and since ϵ· w_1♯ w_post∈L̂ we derive that
w_1♯· w_1♯ w_post∈L̂. In particular, this means
that (w_1♯)^n · w_post∈L̂. By definition of L_ϕ^+
this means that the truth assignment w_1 satisfies all clauses C_1, …,
C_n and hence it is a satisfying assignment for ϕ. This witnesses that
ϕ is satisfiable.
This lemma allows us to prove Theorem <ref>.
Membership in NP follows naturally. For two DFAs A_1 and A_2 we can, in
polynomial time, check if (A_1) ⊆(A_2). This can be
done by checking the emptiness of the intersection of (A_1) with the complement of (A_2).
Moreover, either A_1 or A_2 itself necessarily already is a
distinguishing automaton, so the minimal distinguishing DFA is definitely
polynomial in size.
NP-hardness is a direct consequence of Lemma <ref> and of the
fact that L_ϕ^-⊆ L_ϕ^+, so the language of any
distinguishing automaton must be a subset of L_ϕ^+ but not a subset of L_ϕ^-.
*Acknowledgements: The author thanks Tim Willemse for raising the
question of distinguishing transition-systems with invariants. Thanks also to
Jan Friso Groote and Anna Stramaglia for providing helpful suggestions on this
document.
plain
|
http://arxiv.org/abs/2306.07641v1
|
20230613092427
|
Unitary transformations within density matrix embedding approaches: A novel perspective on the self-consistent scheme for electronic structure calculation
|
[
"Quentin Marécat",
"Benjamin Lasorne",
"Emmanuel Fromager",
"Matthieu Saubanère"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"physics.chem-ph"
] |
d
|
http://arxiv.org/abs/2306.05746v1
|
20230609082429
|
Martin's conjecture for regressive functions on the hyperarithmetic degrees
|
[
"Patrick Lutz"
] |
math.LO
|
[
"math.LO"
] |
Martin's conjecture for regressive functions on the hyperarithmetic degrees
Department of Mathematics, UCLA
[email protected]
We answer a question of Slaman and Steel by showing that a version of Martin's conjecture holds for all regressive functions on the hyperarithmetic degrees. A key step in our proof, which may have applications to other cases of Martin's conjecture, consists of showing that we can always reduce to the case of a continuous function.
Patrick Lutz
================
§ INTRODUCTION
Martin's conjecture is a proposed classification of the limit behavior of functions on the Turing degrees under strong set theoretic hypotheses (namely the Axiom of Determinacy). The full conjecture is still open, but several special cases have been proved. In particular, in <cit.>, Slaman and Steel proved that Martin's conjecture holds for all “regressive” functions on the Turing degrees.
If f 2^ω→ 2^ω is a Turing-invariant function such that f(x) ≤_T x for all x then either f is constant on a cone or f(x) ≡_T x on a cone.
They also asked whether the analogous theorem for hyperarithmetic reducibility holds. In other words, is it possible to prove a version of Martin's conjecture for regressive functions on the hyperarithmetic degrees? Their motivation was as follows. A regressive function on the Turing degrees can be written as a countable union of continuous functions. Their argument works by using this fact to reduce to the case where f is continuous and then showing that if such an f is not constant on any cone then for all x in some cone, it is possible to find y ≡_T x such that x is coded into f(y).
In their coding argument, they relied strongly on the properties of continuous functions. In contrast with regressive functions on the Turing degrees, regressive functions on the hyperarithmetic degrees can only be written as countable unions of Borel functions. Thus Martin's conjecture for regressive functions on the hyperarithmetic degrees forms a natural test case to see whether their coding argument can be extended to deal with functions which are not continuous.
The main result of this paper is to answer their question in the affirmative. Namely we will prove the following theorem.
Let f 2^ω→ 2^ω be a hyp-invariant function such that f(x) ≤_H x for all x. Then either f is constant on a cone of hyperarithmetic degrees or f(x) ≡_H x on a cone of hyperarithmetic degrees.
There are a few interesting things to note about our proof. First, instead of adapting Slaman and Steel's methods to work with non-continuous functions, we instead show that f—despite potentially being far from continuous—can be replaced by a hyp-equivalent function which is continuous. We still have to modify their coding argument to work with hyperarithmetic reducibility rather than Turing reducibility, but in doing so we make heavy use of the fact that we can assume we are dealing with a continuous function.
This suggests that in some cases of Martin's conjecture where the functions being considered are not continuous, it may still be possible to replace them with related functions which are continuous. This idea has already borne fruit in the form of <cit.>, where it is combined with a refined version of the coding arguments introduced in this paper to prove part 1 of Martin's conjecture for order-preserving functions.
Second, our results cast at least a little doubt on the idea that any use of determinacy in proving Martin's conjecture will be “local” (that is, the idea that only Borel determinacy is needed when dealing with Borel functions, and so on). Our proof seems to use more than Borel determinacy, even when the functions being considered are assumed to be Borel (specifically, our proof uses analytic determinacy). In section <ref>, we show that Borel determinacy is sufficient, but this requires a more careful analysis that was not needed for the proof.
Third, our reduction to the case of a continuous function is quite flexible and seems to work in many different degree structures, including the arithmetic degrees. Somewhat surprisingly, it seems much harder to adapt the coding argument used by Slaman and Steel, even once we are allowed to assume we are dealing with a continuous function. In this paper, we have to use a somewhat different coding argument than the one used by Slaman and Steel, and in doing so we have to rely on the Σ^1_1-bounding theorem. Also, we have so far not been able to modify either our coding argument or Slaman and Steel's to work for arithmetic reducibility (in our opinion, the regressive case of Martin's conjecture on the arithmetic degrees is an interesting open question).
§ PRELIMINARIES
In this section we will provide some background on hyperarithmetic reducibility and on Martin's conjecture and then state some lemmas that we will use in the proof of Theorem <ref>. All the lemmas are standard, with the exception of Lemma <ref>, which we will see has a simple proof using standard techniques. For the reader intimidated by the axiom of determinacy, we note that the only way we will use determinacy in this paper is in the form of Lemma <ref>.
§.§ Background on Hyperarithmetic Reducibility
The easiest definition of hyperarithmetic reducibility is that y ≤_H x if y is Δ^1_1(x) definable (in which case we will often say that y is hyperarithmetic in x). It is not very hard to see that this relation is transitive and thus deserves the title “reducibility.” As usual, we can then define hyperarithmetic equivalence and the structure of the hyperarithmetic degrees.
But there is another characterization of hyperarithmetic reducibility which is often useful and which we will now explain. Let ω_1^x denote the least countable ordinal with no presentation computable from x. Work of Davis, Kleene and Spector shows that for any α < ω_1^x, there is a notion of the α^th iterate of the jump of x which is well-defined up to Turing equivalence <cit.>. We denote this α^th jump of x by x^(α). Kleene proved in <cit.> that y is hyperarithmetic in x if and only if y ≤_T x^(α) for some α < ω_1^x.
It will be helpful later in the paper if we make some of this more precise. Suppose r is a real which codes a linear order ≤_r on which has a minimum element, 0_r. If x is any real, then a jump hierarchy on r which starts with x is a set H ⊂^2 such that the 0_r^th column of H is x and for each n ≠ 0_r, the n^th column of H is equal to the jump of the smaller columns of H (smaller according to the ordering given by ≤_r). In other words, if we define
H_n = {i |⟨ n, i⟩∈ H}
H_< n = {⟨ m, i⟩| m <_r n and ⟨ m, i⟩∈ H}
then we have H_0_r = x and H_n = (H_< n)' for all n ≠ 0_r.
If ≤_r happens to be a presentation of a well-order then there is always a unique H satisfying the conditions above. Moreover, if α < ω_1^x and r codes a presentation of α which is computable from x then the Turing degree of the unique jump hierarchy on r starting from x is independent of the specific choice of r. Such a jump hierarchy is considered to be the α^th jump of x (which is only well-defined up to Turing degree). This makes precise the alternative characterization of hyperarithmetic reducibility mentioned above.
It is also worth mentioning here that hyperarithmetic reducibility is closely connected to Borel measurability. Just as every continuous function is computable relative to some oracle, every Borel function is hyperarithmetic relative to some oracle. More precisely, if f is Borel then there is some countable ordinal α, some r which codes a presentation of α, some real y and some Turing functional Φ such that for all x, f(x) = Φ((x ⊕ y)^(α)), where (x⊕ y)^(α) is taken to mean the unique jump hierarchy on r starting from x ⊕ y.
§.§ Background on Martin's Conjecture
As mentioned in the introduction, Martin's conjecture is a proposed classification of the limit behavior of functions on the Turing degrees under strong set theoretic hypotheses. It is traditionally divided into two parts. We will only discuss the first part here, since that is all that is relevant for this paper.
Very roughly, part 1 of Martin's conjecture states that if f is a function from the Turing degrees to the Turing degrees then either f(x) is constant for all large enough x or f(x) ≥_T x for all large enough x. There are three things to explain here. First, a caveat: the conjecture is actually stated not in terms of functions on the Turing degrees, but in terms of Turing invariant functions on the reals. Second, we need to state precisely what “for all large enough x” really means. Third, the conjecture is false in ZFC and is instead stated as a conjecture in the theory ZF + AD (or sometimes ZF + AD + DC_ℝ, though we will not need to use DC_ℝ in this paper). We will now explain each of these points in more detail.
First, let's define precisely what we mean by a Turing invariant function on the reals. A function f : 2^ω → 2^ω is called Turing invariant if for all x and y in 2^ω,
x ≡_T y ⟹ f(x) ≡_T f(y).
The point is that a Turing invariant function f induces a function on the Turing degrees. Using the Axiom of Choice, it is clear that every function on the Turing degrees arises from a Turing invariant function on the reals, but this may fail in ZF + AD (though it is true again if we assume AD_ℝ, a strengthening of the Axiom of Determinacy). So Martin's conjecture is actually only classifying the behavior of functions on the Turing degrees which come from Turing invariant functions on the reals.
Since it will be useful to us, we will also mention here the definition of a Turing invariant set of reals. A subset A ⊆ 2^ω is called Turing invariant if for all x and y in 2^ω,
x ≡_T y ⟹ (x ∈ A ↔ y ∈ A).
Next, let's explain what we mean by “all large enough x.” The key concept is that of a cone of Turing degrees (which is actually a Turing invariant subset of 2^ω rather than a subset of the Turing degrees): a cone of Turing degrees is a set of the form {x ∈ 2^ω| x ≥_T y} for some fixed y. This y is called the base of the cone and the cone is sometimes referred to as the cone above y. What we mean by “all large enough x” is simply “for all x in some cone.”
Third, we will mention a few things about the Axiom of Determinacy. The Axiom of Determinacy (often written AD) is an axiom of set theory which is inconsistent with the Axiom of Choice and equiconsistent with the existence of infinitely many Woodin cardinals <cit.>. We will not give a definition of the Axiom of Determinacy here, but simply mention the following fact, which is one of the main consequences of AD for computability theory.
If A is a Turing invariant subset of 2^ω then either A contains a cone or A is disjoint from a cone.
There is also a weak form of determinacy called “Borel determinacy” which is provable in ZFC and which is enough to prove Theorem <ref> if the set A is assumed to be Borel.
We can now give a formal statement of part 1 of Martin's conjecture.
Assuming ZF + AD, if f : 2^ω → 2^ω is a Turing invariant function then either f(x) ≥_T x for all x in some cone or there is some y such that f(x) ≡_T y for all x in some cone.
In a slight abuse of terminology, the latter possibility in the conjecture is often written as “f is constant on a cone” (even though it is the function that f induces on the Turing degrees that is constant, not f itself).
Finally, we mention that for many degree structures besides the Turing degrees (and in particular for the hyperarithmetic degrees), it is possible to state a sensible version of Martin's conjecture by just swapping out Turing reducibility for the appropriate alternative notion of reducibility in the definitions of “Turing invariant function” and “cone of Turing degrees.” This is reasonable to do in part because Theorem <ref> works for pretty much any notion of reducibility stronger than Turing reducibility (and also for many which are weaker).
§.§ Determinacy Lemmas
We now state a few lemmas that will help us apply determinacy even in situations where we have to deal with non-Turing invariant sets of reals. The key notion is that of a “pointed perfect tree.”
A perfect tree is a tree, T, such that every node in T has a pair of incompatible extensions which are both in T.
A pointed perfect tree is a perfect tree, T, such that every path through T computes T.
If T is a tree, we will use [T] to refer to the set of paths through T.
The reason pointed perfect trees are useful to work with is that if T is a pointed perfect tree then [T] contains a representative of every Turing degree which is above the Turing degree of T. Next, we will see that determinacy can be used to get pointed perfect trees. For a proof of Lemma <ref>, see <cit.>, Lemma 3.5.
A set A ⊆ 2^ω is cofinal in the Turing degrees if for all x there is some y ≥_T x such that y ∈ A (note that A is not required to be Turing invariant).
Suppose A ⊆ 2^ω is cofinal in the Turing degrees and h is a function on A with countable range. Then there is a pointed perfect tree on which h is constant.
The following lemma will be our only use of determinacy in the proofs in the rest of this paper. Essentially it is a kind of computable uniformization principle provable from .
Suppose R is a binary relation on 2^ω such that
* The domain of R is cofinal in the Turing degrees: for all z there is some x ≥_T z and some y such that (x, y) ∈ R
* and R is a subset of Turing reducibility: for every (x, y) ∈ R, x ≥_T y.
Then there is a pointed perfect tree T and a Turing functional Φ such that for every x ∈ [T], Φ(x) is total and (x, Φ(x)) ∈ R. In other words, Φ is a computable choice function for R on [T].
For each x in the domain of R there is some e such that Φ_e(x) is total and R(x, Φ_e(x)) holds. Let e_x denote the smallest such e. By determinacy (in the form of Lemma <ref>), there is a pointed perfect tree T on which e_x is constant. Let e be this constant value. Then T and Φ_e satisfy the conclusion of the lemma.
It will be useful below to note that if A and h in Lemma <ref> are Borel then the result is provable in ZFC, and similarly that if R in the above lemma is assumed to be Borel, then that result, too, is provable in ZFC.
§.§ Pointed Perfect Tree Lemmas
Now we will state a couple of lemmas that are helpful when working with pointed perfect trees. These lemmas do not require the Axiom of Determinacy. The first lemma can be proved using the same kind of arguments as in Spector's construction of a minimal degree and the second is a routine application of compactness (see <cit.> for proofs).
Suppose T is a pointed perfect tree and Φ is a Turing functional such that Φ(x) is total for every x ∈ [T]. Then either Φ is constant on a pointed perfect subtree of T or Φ is injective on a pointed perfect subtree of T.
Suppose T is a perfect tree and Φ is a Turing functional such that Φ(x) is total for every x ∈ [T] and Φ is injective on [T]. Then for each x ∈ [T],
Φ(x) ⊕ T ≥_T x.
In fact, this reduction is even uniform in x (though we won't need to use that fact in this paper).
§.§ Computable Linear Orders
To work with hyperarithmetic reducibility, we will need to make use of a few facts about computable linear orders and computable well-orders. Proofs can be found in <cit.>.
One of the most important facts about computable well-orders is the Σ^1_1-bounding theorem. Essentially it says that every Σ^1_1-definable collection of well-orders is bounded below a computable ordinal. The theorem comes in multiple flavors, depending on whether we are talking about sets of programs which compute presentations of well-orders, or real numbers which are presentations of well-orders and depending on whether the Σ^1_1 definition is boldface, lightface, or lightface relative to some fixed real. Below, we just state the two versions that we will need in this paper.
Suppose that x is a real and A is a Σ^1_1(x) definable set of codes for programs such that for every e in A, Φ_e(x) is a presentation of a well-order. Then there is some α < ω_1^x which is greater than every ordinal with a presentation coded by an element of A.
If A is a Σ^1_1 definable set of presentations of well-orders then there is some α < ω_1 which is greater than every ordinal with a presentation in A.
We will also need some ideas originally introduced by Harrison in <cit.>.
If x is a real and r is a real computable from x that codes a presentation of a linear order, then r is a pseudo-well-order relative to x if it is ill-founded but contains no infinite descending sequence which is hyperarithmetic in x.
If r is a presentation of a linear order that is computable from a real x then the assertion “r has no infinite descending sequence which is hyperarithmetic in x” is equivalent to a Σ^1_1(x) formula.
If r is a pseudo-well-order relative to x and H is a jump hierarchy on r that starts with x then H computes every real which is hyperarithmetic in x.
§ PROOF OF THE MAIN THEOREM
In this section, we will prove Theorem <ref>. Before we launch into the details of the proof, we will give an outline of the general strategy. And before we do that, we will recall the general strategy followed by Slaman and Steel in their proof of Theorem <ref>. The steps of their proof are essentially as follows.
* First, use determinacy to show that there is a pointed perfect tree on which f is computable. Then use Lemma <ref> to show that we can also assume f is injective.
* Next, show that if x is in the pointed perfect tree, then every function computable from x is dominated by a function computable from f(x). The idea is that if x computes a function which is not dominated by any function computable from f(x) then x can diagonalize against f(x) by using this function to guess convergence times for f(x) programs. The diagonalization produces a real y in the same Turing degree as x such that f(x) cannot compute f(y), thereby contradicting the Turing invariance of f.
* Once you can assume that f is computable and injective on a pointed perfect tree and that if x is in this tree then every function computed by x is dominated by a function computed by f(x), use a coding argument to show that f(x) ≥_T x. The coding argument works by coding bits of x into the relative growth rates of two fast growing functions computed by f(x).
Our proof makes three main modifications to this outline. First, instead of showing that f is computable on some pointed perfect tree, we show that f is hyp-equivalent to some computable function on a pointed perfect tree. Thus we may work with that function instead of f. Second, instead of showing that every fast growing function computed by x is dominated by a function computed by f(x), we show that every well-order computed by x embeds into a well-order computed by f(x)—in other words that ω_1^x = ω_1^f(x). Third, instead of coding bits of x into the relative growth rates of fast growing functions computed by f(x), we code the bits of x into the Kolmogorov complexities of initial segments of reals computed by f(x) (though it is not necessary to know anything about Kolmogorov complexity to follow our argument). Also, to be able to carry out the coding argument, we will first have to use a trick involving Σ^1_1-bounding. To sum up, here's an outline of our proof.
* First, we will use determinacy to replace f with a hyp-equivalent function which is computable on a pointed perfect tree. By using Lemma <ref>, we can also assume that f is injective.
* Next we show that ω_1^f(x) = ω_1^x for all x in the pointed perfect tree. The idea is that if ω_1^f(x) were less than ω_1^x then x would be able to diagonalize against f(x) by using ω_1^f(x) jumps.
* Once we are able to assume that f is computable and injective and that ω_1^f(x) = ω_1^x, we will use a coding argument to show that f(x) ≥_H x. In our coding argument, it will be important to know that there is a single ordinal α < ω_1^x such that for every real y in the same Turing degree as x, f(x)^(α) computes f(y). We will prove this fact using Σ^1_1-bounding.
And now it's time to present the actual proof.
§.§ Replacing f with an injective, computable function
First we will show that f can be replaced by a computable function. This is the only part of the proof that uses determinacy.
Suppose f : 2^ω → 2^ω is hyp-invariant and hyp-regressive. Then there is a Turing functional Φ and a pointed perfect tree T such that for all x ∈ [T], Φ(x) is total and Φ(x) ≡_H f(x).
Consider the following binary relation, R:
R(x, y) ⟺ x ≥_T y and f(x) ≡_H y.
The idea is that a computable function which is hyp-equivalent to f is exactly a computable function which uniformizes R. To show that such a function exists, it suffices to check that we can apply Lemma <ref>.
To check that we can apply Lemma <ref>, we need to check that R is cofinal in the Turing degrees and that R is a subset of Turing reducibility. The latter is clear from the definition of R. For the former, fix any real x and we will show that some real which computes x is in the domain of R. Since f(x) ≤_H x, there is some α < ω_1^x such that x^(α)≥_T f(x). Since x^(α)≡_H x and f is hyp-invariant, f(x^(α)) ≡_H f(x). Thus R(x^(α), f(x)) holds and so x^(α) is an element of the domain of R which computes x.
Thus we may apply Lemma <ref> to get a pointed perfect tree T and a Turing functional Φ such that for all x ∈ [T], Φ(x) is total and Φ(x) ≡_H f(x).
For the rest of the proof we will simply assume that f is computable on a pointed perfect tree. It will also be convenient to assume that f is injective on a pointed perfect tree, which we show next.
Suppose T is a pointed perfect tree and f : 2^ω → 2^ω is a hyp-invariant function which is computable on [T]. Then either f is constant on a cone of hyperdegrees or f is injective on a pointed perfect subtree of T.
By Lemma <ref>, either f is constant on a pointed perfect subtree of T or f is injective on a pointed perfect subtree of T. In the former case, f is constant on a cone of hyperdegrees and in the latter case, we are done.
For the rest of the proof, we will deal with the case of a hyp-invariant function, f, which is computable and injective on a pointed perfect tree, T. We will show that for any x in [T], f(x) ≥_H x. There are two cases: when ω_1^f(x) < ω_1^x and when ω_1^f(x) = ω_1^x. We will show that the first case is impossible and that if we are in the second case then we can use the coding argument mentioned above.
§.§ Proving that f preserves ω_1^x
We will now show that for any x ∈ [T], ω_1^f(x) = ω_1^x. We will do this by deriving a contradiction from the assumption that ω_1^f(x) < ω_1^x (note that since f(x) ≤_H x we cannot have ω_1^f(x) > ω_1^x). The basic idea is that in this case we can diagonalize against f(x). Namely, we can use ω_1^f(x) jumps of x to compute a real y so that f(x) cannot compute f(y) with fewer than ω_1^f(x) jumps (and hence f(x) cannot be hyp-equivalent to f(y)). Since ω_1^x > ω_1^f(x), this y can be made hyp-equivalent to x, which violates the hyp-invariance of f. We now give the formal proof.
Suppose T is a pointed perfect tree and f is a hyp-invariant function which is computable and injective on [T]. Then for every x ∈ [T], ω_1^f(x) = ω_1^x.
Suppose for contradiction that for some x ∈ [T], ω_1^f(x) < ω_1^x. Let α = ω_1^f(x). The key point is that for every y ∈ [T] which is hyp-equivalent to x, x^(α) computes y.
Why is that? Well, if y is in the same hyperdegree as x then f(y) is in the same hyperdegree as f(x). So by definition of α, there is some β < α such that f(x)^(β)≥_T f(y). We then have the following calculation.
x^(α) ≥_T x^(β) because β < α
≥_T x^(β)⊕ T because T is pointed
≥_T f(x)^(β)⊕ T because f(x) ≤_T x
≥_T f(y)⊕ T by definition of β
≥_T y by Lemma <ref>.
We can now finish the proof easily. Since T is pointed, we can pick some y ∈ [T] which is Turing equivalent to x^(α + 1). Since α < ω_1^x, this y is hyp-equivalent to x. But it obviously is not computable from x^(α), so we have reached a contradiction.
§.§ Coding argument
In this part of the proof, we will explain how to code x into some real of the same hyperarithmetic degree as f(x). The argument has some similarity to the proof of a basis theorem for perfect sets given by Groszek and Slaman in <cit.> (which itself has some similarity to the coding argument used in <cit.>). Before giving the coding argument, however, we will first show that for every x ∈ [T] there is a uniform bound on the number of jumps that f(x) takes to compute f(y) for any y ∈ [T] which is Turing equivalent to x.
Suppose T is a pointed perfect tree, f is a hyp-invariant function which is computable on [T], and x ∈ [T]. Then there is some α < ω_1^x such that if y ∈ [T] is Turing equivalent to x then f(y) ≤_T f(x)^(α).
The main idea is just to use Σ^1_1-bounding. Let A be the set of programs e such that Φ_e(f(x)) computes a linear order r for which
* r has no infinite descending sequence which is hyperarithmetic in f(x)
* and there is some y ≡_T x in [T] and some jump hierarchy H on r starting from f(x) such that H does not compute f(y).
By Lemma <ref> (and since f(x) is computable from x), A is Σ^1_1(x). I claim that every program in A computes a well-order.
Suppose instead that A contains a program e computing an ill-founded order, r. Thus r is a pseudo-well-order relative to f(x). Since e is in A, there must be some y≡_T x in [T] and some jump hierarchy on r starting with f(x) which does not compute f(y). And since f is hyp-invariant, we must have f(x) ≡_H f(y). But by Lemma <ref>, any jump hierarchy on r which starts with f(x) computes everything in the hyperdegree of f(x), and in particular f(y). This is a contradiction, so all programs in A must compute well-orders.
Since A is Σ^1_1(x) and contains only programs computing well-orders, Σ^1_1-bounding implies that there is some α < ω_1^x which bounds every well-order in A. This implies that for every y ≡_T x in [T], f(y) is computable from f(x)^(α + 1).
We now come to the coding argument. As we have discussed, it replaces a different coding argument used by Slaman and Steel, and while their argument codes information into the relative growth rates of two fast-growing functions, ours codes information into the relative Kolmogorov complexities of initial segments of three reals (though the reader does not need to be familiar with Kolmogorov complexity to understand the proof below).
Suppose T is a pointed perfect tree and f is a hyp-invariant function which is computable and injective on [T]. Then f(x) ≥_H x for all x ∈ [T].
Let x ∈ [T]. Our goal is to show that f(x) ≥_H x. By Lemma <ref>, we know that ω_1^x = ω_1^f(x). By thinning T, we may assume that x is the base of T (i.e. T is a pointed perfect tree such that x ≡_T T), and hence that any element of T can compute x. We will use this fact below without further comment.
By Lemma <ref>, there is some α < ω_1^x such that for all y ∈ [T] in the same Turing degree as x, we have f(x)^(α)≥_T f(y). For the remainder of the proof, we will explain how to find reals a, b, c ∈ [T] which are hyp-equivalent to x such that x ≤_T f(x)^(α + 2)⊕ f(a) ⊕ f(b) ⊕ f(c).
To see why this is sufficient to complete the proof, first note that since f is hyp-invariant, f(a), f(b), and f(c) are all hyp-equivalent to f(x). Next, note that since ω_1^f(x) = ω_1^x, α is less than ω_1^f(x) and thus f(x)^(α + 2) is also hyp-equivalent to f(x). Therefore f(x)^(α + 2)⊕ f(a)⊕ f(b)⊕ f(c) is hyp-equivalent to f(x) and so if x is Turing below the former then it is hyp below the latter.
We will build a, b, and c in stages. At each stage we will keep track of the following data (supposing that the current stage is n):
* Initial segments A_n, B_n, and C_n of a, b, and c.
* Reals a_n, b_n, and c_n in [T] and Turing equivalent to x, which A_n, B_n, and C_n, respectively, are initial segments of. Think of a_n, b_n, c_n as the current “targets” for a, b, c.
* Initial segments Ã_n, B̃_n, and C̃_n of f(a), f(b), and f(c). These are the longest initial segments of f(a), f(b), and f(c) that can be determined from knowing the initial segments A_n, B_n, and C_n of a, b, and c (recall that f is continuous on T).
* Indices for programs e_a, n, e_b, n, and e_c, n. Think of these as “guesses” as to which programs compute f(a_n), f(b_n), and f(c_n) from f(x)^(α).
At the same time, f(x) will be using f(a), f(b), and f(c) to try to follow along with this construction by keeping track of the initial segments Ã_n, B̃_n, and C̃_n and the “guesses” e_a, n, e_b, n, and e_c, n. On each step of the construction we will update the data to code the next bit of x.
On each step, two of a, b, and c will be used to code the next bit of x and the third will play a “helper” role of coding some information to help f(x) follow along with the construction. Which of a, b, and c is playing this “helper” role will simply rotate between them on each step. So, for instance, a will play the helper role every third step.
We will make sure that at the beginning of step n, the “guess” corresponding to whichever real is playing the helper role on step n is correct. E.g. if a is in the helper role on step n then we will need that e_a, n is really the index of a program computing f(a_n) from f(x)^(α). We will see that the construction ensures this.
We will code the next bit of x into the relative sizes of the guesses for the two reals which are not playing the helper role. E.g. if a is playing the helper role on step n then we will code the next bit of x into which of e_b, n + 1 and e_c, n+ 1 is larger—if x(n) = 0 then we will make sure e_b, n + 1 > e_c, n + 1 and if x(n) = 1 then we will make sure e_b, n + 1 < e_c , n + 1.
To make things more concrete, let's suppose that we are on step n, a is in the helper role, and the next bit of x is a 0 (so we need to make sure e_b, n + 1 > e_c, n + 1). We can assume that e_a, n is correct—i.e. that Φ_e_a, n(f(x)^(α)) = f(a_n)—and we need to make sure that the analogous statement holds of e_b, n + 1 at the end of this step. Here's what we do.
* The target for c will stay the same—i.e. set c_n + 1 = c_n.
* Let e_c, n + 1 be the true guess for c_n = c_n + 1—i.e. the least e such that Φ_e(f(x)^(α)) = f(c_n) (we know that such an e must exist because we are assuming c_n is in [T] and Turing equivalent to x and thus f(c_n) is computable from f(x)^(α)).
* Choose some new target b_n + 1 in [T] of the same Turing degree as x so that b_n + 1 extends B_n and so that for the least e for which Φ_e(f(x)^(α)) = f(b_n + 1), we have e > e_c , n + 1. We can do this because f is injective on T and there are infinitely many reals in [T] extending B_n which are Turing equivalent to x.
* Let m be a number large enough that e_c, n + 1 is the least e such that Φ_e(f(x)^(α)) is total and agrees with the first m bits of f(c_n + 1) and likewise for e_b, n + 1.
* Choose some new target a_n + 1 in [T] of the same Turing degree as x which also agrees with the old initial segment A_n of a but which disagrees with a_n and for which the first place such that f(a_n + 1) disagrees with f(a_n) is greater than m, say m'.
* Let Ã_n + 1 = f(a_n + 1) ↾ m'.
* Let A_n + 1 be a long enough initial segment of a_n + 1 to ensure that f(a) agrees with f(a_n + 1) on the first m' + 1 bits, and thus that the first place at which f(a) and f(a_n) disagree is m'.
* Set B̃_n + 1 and C̃_n + 1 to be f(b_n + 1) ↾ m' and f(c_n + 1) ↾ m'.
* Let B_n + 1 and C_n + 1 be long enough initial segments of b_n + 1 and c_n + 1 to ensure that f(b) and f(c) agree with the first m' bits of f(b_n + 1) and f(c_n + 1) (recall that f is continuous on [T]).
Note that by construction, the guesses e_b, n + 1 and e_c, n + 1 are correct. Now let's describe what's happening from f(x)'s perspective.
* First we look at f(a). Since it's a's turn to be the helper, we (as f(x)) know we should look for the first place where f(a) disagrees with Φ_e_a, n(f(x)^(α)) (which, recall, agrees with f(a_n)). So this allows us to retrieve m'.
* Now look at f(b) ↾ m' and f(c) ↾ m'. These are the new initial segments B̃_n + 1 and C̃_n + 1. Calculate the least e such that Φ_e(f(x)^(α)) is total and agrees with f(b) up to m'. This is e_b, n + 1. Do the same thing for c.
* Now we check which of e_b, n + 1 and e_c, n + 1 is bigger. That tells us the next bit of x.
* At this point we have the correct guesses for b and c. We may not have a correct guess for a (or even a guess at all) but that doesn't really matter. The only one for which it is vital we have a correct guess at the beginning of the next step is the one that is going to be in helper mode, and that will not be a (since the helper role always rotates, a will not be the helper twice in a row).
To carry out the entire construction to build a, b, and c, we just need to know x and f(x)^(α + 2) (the +2 is needed to figure out which programs are total). Since x computes f(x) and α < ω_1^x, this means that a, b, and c are hyperarithmetic in x. And since x ≡_T T and T is pointed, x is computable from each of a, b, and c, and thus they are all in the same hyperdegree as x. At the same time, all that is required to do the parts “from f(x)'s perspective” is f(x)^(α + 2) (again, the +2 is needed to check which programs are total) along with f(a), f(b), and f(c). Hence x ≤_T f(x)^(α + 2) ⊕ f(a) ⊕ f(b) ⊕ f(c).
§ THE CASE OF BOREL FUNCTIONS
It is popular to suppose that any proof of Martin's conjecture will only use determinacy in a “local” way—that is, the proof will still work in when restricted to Borel functions, just by replacing the original uses of with analogous uses of Borel determinacy.
In this section, we will see that the main result of this paper does hold in ZFC when restricted to Borel functions, but that proving this requires using a trick not present in the proof presented above. The trouble is that even if we only consider Borel functions, the proof of Lemma <ref> appears to require analytic determinacy rather than Borel determinacy. However, this can be avoided by a more careful analysis and an appeal to Σ^1_1-bounding.
Here's the key idea. If f is hyp-regressive then we know that for each x there is some α < ω_1^x such that x^(α) computes f(x). We will use Σ^1_1-bounding to find a single α which works for all x. After this, it will be straightforward to modify the proof of Lemma <ref> to only use Borel determinacy.
In the next lemma we will prove this key point. Note that since we are restricting ourselves to Borel functions, we can drop the “hyp-regressive” requirement—every Borel function f is automatically hyp-regressive on a cone of hyperarithmetic degrees.
Let f : 2^ω → 2^ω be a Borel function. Then there is some α < ω_1 such that for all x on a cone of hyperdegrees, α < ω_1^x and x^(α) ≥_T f(x).
As noted above, since f is Borel, f(x) ≤_H x on a cone of hyperdegrees. For the rest of the proof, we will implicitly work on this cone and thus we may assume f(x) ≤_H x for all x.
We start by simply writing down the definition of hyperarithmetic reducibility: for each x, we know that f(x) ≤_H x and hence that there is some α < ω_1^x such that x^(α) computes f(x). Our goal is to show that there is some α < ω_1 which is large enough to work for all x. We will do so by using Σ^1_1-bounding.
Let A be the set of reals r which code presentations of linear orders such that for some x,
* x computes r
* r has no infinite descending sequences which are hyperarithmetic in x
* and there is a jump hierarchy H on r starting from x such that H does not compute f(x).
By Lemma <ref> plus the fact that f is Borel, the set A is Σ^1_1 definable (note that this is boldface rather than lightface because f is Borel but not necessarily lightface Δ^1_1).
Next, I claim that A only contains well-orders. Suppose not and that A contains an ill-founded order, r. Let x witness that r is in A. Then r is a pseudo-well-order relative to x. But by Lemma <ref>, this means that any jump hierarchy on r starting with x computes everything hyperarithmetic in x, and in particular, computes f(x). This contradicts the definition of A.
Since A is Σ^1_1 and contains only well-orders, Σ^1_1-bounding implies that there is some α < ω_1 which bounds everything in A. By the definition of A this means that for every x either ω_1^x ≤α or x^(α + 1)≥_T f(x). So if we go to a cone on which everything computes a presentation of α then we obtain the conclusion of the lemma.
We can now prove the Borel version of Theorem <ref>.
Let f : 2^ω → 2^ω be a hyp-invariant Borel function. Then either f is constant on a cone of hyperdegrees or f(x) ≥_H x on a cone of hyperdegrees.
By the previous lemma, we can assume there is some α < ω_1 such that for all x on a cone of hyperdegrees, α < ω_1^x and x^(α)≥_T f(x). Let a be the base of such a cone and let r be a presentation of α computable from a. For the rest of the proof, we will work on the cone above a and we will interpret x^(α) to mean the unique jump hierarchy on r that starts with x.
The main idea of the proof is to go through the proof of Theorem <ref> and make sure that every time that proof used determinacy, we can actually get by with just Borel determinacy. The only part of that proof in which we used determinacy was in the proof of Lemma <ref>. In particular, we used determinacy by applying Lemma <ref> to the binary relation R defined by
R(x, y) ⟺ x ≥_T y and f(x) ≡_H y.
The problem is that even if f is Borel, this relation is not Δ^1_1, but only Π^1_1 (since, in general, the formula x ≡_H y is only Π^1_1). We will remedy this problem by showing that the relation R can be replaced by the relation S defined by
S(x, z) ⟺ x ≥_T z and ∃ y ≤_T x (y ≥_T a ∧ x ≤_T y^(α) ∧ f(y) = z).
In particular, we will show that the domain of S is cofinal in the Turing degrees. The requirement that y must compute a is necessary to ensure that y^(α) is well-defined (and note that it implies that x must also compute a).
Why is this sufficient? Let's first assume that we can show that the domain of S is cofinal and see why that is enough to complete the proof. Since the definition of S is Δ^1_1, and satisfies the conditions of Lemma <ref>, there is a pointed perfect tree T and a Turing functional Φ such that for all x ∈ [T], S(x, Φ(x)) holds.
We now claim that Φ(x) ≡_H f(x). To see why, let y be a witness to the truth of S(x, Φ(x)). Then y ≥_T a and so α < ω_1^y. Also y ≤_T x and x ≤_T y^(α), hence x and y are hyp-equivalent. Since f is hyp-invariant, this implies that Φ(x) = f(y) ≡_H f(x).
Thus we have recovered the conclusion of Lemma <ref> and the rest of the proof works unchanged.
Why is this true? Now we will show that S has cofinal domain. The proof is very similar to the proof of lemma <ref>. Let x be any real. By joining with a if necessary, we may assume that x is in the cone above a. Since x is in the cone above a, we know that f(x) ≤_T x^(α) and so S(x^(α), f(x)) holds (as witnessed by x itself). Since x ≤_T x^(α), we have succeeded in finding something in the domain of S which is above x.
|
http://arxiv.org/abs/2306.02864v1
|
20230605133501
|
Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs
|
[
"Alejandro Peña",
"Aythami Morales",
"Julian Fierrez",
"Ignacio Serna",
"Javier Ortega-Garcia",
"Iñigo Puente",
"Jorge Cordova",
"Gonzalo Cordova"
] |
cs.AI
|
[
"cs.AI",
"cs.CL"
] |
BiDA - Lab, Universidad Autónoma de Madrid (UAM), Madrid 28049, Spain
VINCES Consulting, Madrid 28010, Spain
Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs
Alejandro Peña^1 [0000-0001-6907-5826], Aythami Morales^1 [0000-0002-7268-4785], Julian Fierrez^1 [0000-0002-6343-5656], Ignacio Serna^1 [0000-0003-3527-4071], Javier Ortega-Garcia^1 [0000-0003-0557-1948],
Íñigo Puente^2, Jorge Córdova^2, Gonzalo Córdova^2
July 31, 2023
=============================================================================================================================================================================================================================================
The analysis of public affairs documents is crucial for citizens as it promotes transparency, accountability, and informed decision-making. It allows citizens to understand government policies, participate in public discourse, and hold representatives accountable. This is crucial, and sometimes a matter of life or death, for companies whose operations depend on certain regulations. Large Language Models (LLMs) have the potential to greatly enhance the analysis of public affairs documents by effectively processing and understanding the complex language used in such documents. In this work, we analyze the performance of LLMs in classifying public affairs documents. As a natural multi-label task, the classification of these documents presents important challenges. To this end, we use a regex-powered tool to collect a database of public affairs documents with more than 33K samples and 22.5M tokens. Our experiments assess the performance of 4 different Spanish LLMs when classifying up to 30 different topics in the data under different configurations. The results show that LLMs can be of great use to process domain-specific documents, such as those in the domain of public affairs.
§ INTRODUCTION
The introduction of the Transformer model <cit.> in early 2017 marked a revolution in the Natural Language Processing domain. In that work, Vaswani et al. demonstrated that an Encoder-Decoder architecture combined with an Attention Mechanism can increase the performance of Language Models in several tasks, compared to recurrent models such as LSTM <cit.>. Over the past few years, there has been a significant development of transformer-based language model architectures, which are commonly known as Large Language Models (LLMs). Their deployment has sparked tremendous interest and exploration in numerous domains, including chatbots (e.g., ChatGPT,[https://openai.com/blog/chatgpt] Bard,[https://blog.google/technology/ai/bard-google-ai-search-updates/] or Claude[https://www.anthropic.com/index/introducing-claude]), content generation <cit.>, virtual AI assistants (e.g., JARVIS <cit.>, or GitHub's Copilot[https://github.com/features/preview/copilot-x]), and other language-based tasks <cit.><cit.><cit.>. These models address scalability challenges while providing significant language understanding and generation abilities. The deployment of large language models has propelled advancements in conversational AI, automated content creation, and improved language understanding across various applications, shaping a new landscape of NLP research and development. There are even voices raising the possibility that the most recent foundational models <cit.><cit.><cit.><cit.> may be a first step toward artificial general intelligence <cit.>.
Large language models have the potential to greatly enhance the analysis of public affairs documents. These models can effectively process and understand the complex language used in such documents. By leveraging their vast knowledge and contextual understanding, large language models can help to extract key information, identify relevant topics, and perform sentiment analysis within these documents. They can assist in summarizing lengthy texts, categorizing them into specific themes or subject areas, and identifying relationships and patterns between different documents. Additionally, these models can aid in identifying influential stakeholders, tracking changes in public sentiment over time, and detecting emerging trends or issues within the domain of public affairs. By leveraging the power of large language models, organizations and policymakers can gain valuable insights from public affairs documents, enabling informed decision-making, policy formulation, and effective communication strategies. The analysis of public affairs documents is also important for citizens as it promotes transparency, accountability, and informed decision-making.
Public affairs documents often cover a wide range of topics, including policy issues, legislative updates, government initiatives, social programs, and public opinion. These documents can address various aspects of public administration, governance, and societal concerns. The automatic analysis of public affairs text can be considered a multi-label classification problem. Multi-label classification enables the categorization of these documents into multiple relevant topics, allowing for a more nuanced understanding of their content. By employing multi-label classification techniques, such as text categorization algorithms, public affairs documents can be accurately labeled with multiple attributes, facilitating efficient information retrieval, analysis, and decision-making processes in the field of public affairs.
This work focuses on NLP-related developments in an ongoing research project. The project aims to improve the automatic analysis of public affairs documents using recent advancements in Document Layout Analysis (DLA) and Language Technologies. The objective of the project is to develop new tools that allow citizens and businesses to quickly access regulatory changes that affect their present and future operations. With this objective in mind, a system is being developed to monitor the publication of new regulations by public organizations. The block diagram of the system is depicted in Figure <ref>. The system is composed of three main modules: i) a Harvester module based on web scrapers; ii) a Document Layout Analysis (DLA) module; and iii) a Text Processing module. The Harvester monitors a set of pre-defined information sources, and automatically downloads new documents from them. Then, the DLA module conducts a layout extraction process, where text blocks are characterized and automatically classified, using Random Forest models, into different semantic categories. Finally, the Text Processing module processes the text blocks using LLM technology to perform multi-label topic classification, aggregating individual text predictions to infer the main topics of the document.
The full system proposed in Figure <ref> allows us to adapt LLMs to analyze documents in the domain of public affairs. This adaptation is based on the dataset used in our experiments, generated in collaboration with experts in public affairs regulation. They annotated over 92K texts using a semi-supervised process that included a regex-based tool. The database comprises texts related to more than 385 different public affairs topics defined by experts.
Of all the analysis tools that can be envisioned in the general framework depicted in Figure <ref>, in the present paper we focus on topic classification, together with the necessary details of the Harvester needed to explain our datasets and interpret our topic classification results. Other modules such as the Layout Extractor are left for description elsewhere.
Specifically, the main contributions of this work are:
* Within the general system for analyzing public affairs documents depicted in Figure <ref>, we propose, develop, and evaluate a novel functionality for multi-label topic classification.
* We present a new dataset of public affairs documents annotated by topic with more than 33K text samples and 22.5M tokens representing the main Spanish legislative activity between 2019 and 2022.
* We provide experimental evidence of the proposed multi-label topic classification functionality over that new dataset using four different LLMs (including RoBERTa <cit.> and GPT2 <cit.>) followed by multiple classifiers.
Our results show that using an LLM backbone in combination with SVM classifiers is a useful strategy for the multi-label topic classification task in the domain of public affairs, with accuracies over 85%. The SVM classification improves accuracies consistently, even for classes that have a lower number of samples (e.g., less than 500 samples).
The rest of the paper is structured as follows: In Section <ref> we describe the data collected for this work, including data preprocessing details. Section <ref> describes the development of the proposed topic classification functionality. Section <ref> presents the experiments and results of this work. Finally, Section <ref> summarizes the main conclusions.
§ DATA COLLECTION AND ANALYSIS
The major decisions and events resulting from the legislative, judicial, and administrative activity of public administrations are public data. It is common practice, and even a legal requirement, for these administrations to publish this information in different formats, such as governmental websites or official gazettes[https://op.europa.eu/en/web/forum]. Here, we use a regex-powered tool to track parliamentary initiatives from the Spanish Parliament, resulting in a Spanish text corpus of legislative activities. Parliamentary initiatives involve a diverse variety of parliamentary interactions, such as questions to government members, legislative proposals, etc.
Raw data were collected and processed with this tool, and comprise initiatives ranging from November 2019 to October 2022. The data is composed of short texts, which may be annotated with multiple labels. Each label includes, among others, topic annotations based on the content of the text. These annotations were generated using regex logic based on class-specific predefined keywords. Both topic classes and their corresponding keywords were defined by a group of experts in public affairs regulations. It is important to note that the same topic (e.g., “Health Policy”) can be categorized differently depending on the user's perspective (e.g., citizens, companies, governmental agencies). We have simplified the annotation, adding an ID number depending on the perspective used (e.g., “Health Policy_1” or “Health Policy_2”). Our raw data is composed of 450K initiatives grouped into 155 weekly sessions, with a total number of topic classes of up to 385. Of these 450K samples, only 92.5K were labeled, which represents roughly 20.5% of the samples. However, almost half of these are annotated with more than one label (i.e., 45.5K, 10.06% of samples), with a total number of labels of 240K. Figure <ref> presents the distribution of the 30 most frequent topics in the data, where we can clearly observe the significant imbalance between classes. The most frequent topic in the raw data is “Healthcare Situation”, appearing in more than 25K data samples. Other topics, such as “Health Policy”, have an important presence in the data as well. However, only 8 out of these 30 topics reach 5K samples, and only 5 of them are present in at least 10K. This imbalance, along with the bias towards health-related subjects in the most frequent topics, is inherent to the temporal framework of the database, as the Covid-19 pandemic situation has dominated significant public affairs over the past 3 years. Note that Figure <ref> depicts the thirty most frequent topics, whereas 385 topics are present in the data. To prevent the effects of major class imbalances, we will now focus on the 30 topics of Figure <ref>.
§.§ Data Curation
We applied a data cleaning process to the raw corpus to generate a clean version of the labeled data. We started by removing duplicated texts, along with data samples with fewer than 100 characters. Some works addressing Spanish models applied a similar filtering strategy with a threshold of 200 characters <cit.> with the aim of obtaining a clean corpus to pre-train transformer models. Here we set the threshold to 100, as our problem does not require us to be that strict (i.e., we do not want to train a transformer from scratch). Instead, we wanted to remove extremely short texts, which we qualitatively assessed to be mainly half sentences, while retaining as much data as possible. In this sense, we filter out text samples of any length that start with a lowercase letter, to prevent half sentences from leaking in. We also identified that bad-quality/noisy text samples tend to start with “CSV” or “núm”, so we remove samples based on this rule. Finally, given the existence of co-official languages other than Spanish in Spain (e.g., Basque, Galician or Catalan), which are used by a significant percentage of Spanish citizens, we filter out data samples written in these languages. Due to the lack of reliable language detectors for these co-official languages, and the use of some linguistic, domain-specific patterns in the parliamentary initiatives, we identified a set of words in these languages and use it to detect and filter out potential samples not written in Spanish. We applied this process several times to refine the set of words.
At the data sample level, we clean texts by removing excessive white spaces and initiative identifiers. We then filter out URLs and non-alphanumeric characters, retaining punctuation characters commonly used in Spanish written text (i.e., ()-.¿?¡!_;). After applying the full data curation process, we obtain a multi-label corpus of 33,147 data samples, with annotations for the 30 topics commented above. Table <ref> presents the number of samples per topic category. Note that the number of samples of each topic has significantly decreased compared to the proportions observed in the raw data (see Figure <ref>). The impact of the data curation process differs between topics, leading to some changes in the frequency-based order of the topics. The topic with the most data samples in the curated corpus is still “Healthcare Situation”, but the number of samples annotated with this topic has been reduced by half. On the other hand, several topics have fewer than 1K samples, with a lower limit of 518.
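To make the curation pipeline concrete, the following is a minimal Python sketch of the filtering and cleaning rules described above. The function names, thresholds, keyword lists, and exact regular expressions are our own illustrative choices, not the project's actual implementation.

import re

MIN_CHARS = 100                                  # length threshold discussed above
NOISY_PREFIXES = ("CSV", "núm")                  # known noisy starting patterns
NON_SPANISH_HINTS = {"eta", "amb", "xunta"}      # illustrative co-official-language cues
DISALLOWED = re.compile(r"[^0-9A-Za-zÁÉÍÓÚáéíóúÑñÜü\s()\-.¿?¡!_;,]")

def keep_sample(text: str) -> bool:
    """Corpus-level filters: drop short, lowercase-starting, noisy, or likely non-Spanish samples."""
    text = text.strip()
    if len(text) < MIN_CHARS or text[0].islower():
        return False
    if text.startswith(NOISY_PREFIXES):
        return False
    return not (set(text.lower().split()) & NON_SPANISH_HINTS)

def clean_sample(text: str) -> str:
    """Sample-level cleaning: drop URLs, strip disallowed characters, collapse whitespace."""
    text = re.sub(r"https?://\S+", " ", text)
    text = DISALLOWED.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()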
§ METHODOLOGY AND MODELS
As we previously mentioned in Section <ref>, the samples in our dataset may present more than one topic label. Hence, the topic classification task on this dataset is a multi-label classification problem, where we have a significant number of classes that are highly imbalanced. This scenario (i.e., a high number of classes, some of them with few data samples, and overlapping subjects between classes) leads us to discard a single classifier for this task. Instead of addressing the problem as a monolithic multi-label task, we break it into small, binary detection tasks, where an individual topic detector is trained for each of the 30 classes in a one-vs-all setup. This methodology, illustrated in Figure <ref>, represents a big advantage, as it provides a high degree of versatility to select the best model configuration for each topic when deploying a real system. During inference, new data samples can be classified by aggregating the predictions of the individual classifiers <cit.>.
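The following minimal sketch illustrates how the individual one-vs-all detectors could be aggregated at inference time. It assumes each detector exposes a scikit-learn-style predict_proba; the function and parameter names are ours, not the project's actual code.

def predict_topics(embedding, detectors, threshold=0.5, top_k=None):
    """Score one text embedding with every per-topic detector and aggregate the results."""
    scores = {topic: float(clf.predict_proba(embedding.reshape(1, -1))[0, 1])
              for topic, clf in detectors.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        return ranked[:top_k]                                  # rank-statistics style output
    return [(t, s) for t, s in ranked if s >= threshold]       # thresholded output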
The architecture of the binary topic models is depicted in Figure <ref>. We use a transformer-based model as backbone, followed by a Neural Network, Random Forest, or SVM classifier. In this work, we explore different transformer models, pretrained from scratch in Spanish by the Barcelona Supercomputing Center in the context of the MarIA project <cit.>. We included both encoder and decoder architectures. These model architectures are the following:
* RoBERTa-base. An encoder-based model architecture with 12 layers, 768 hidden size, 12 attention heads, and 125M parameters.
* RoBERTa-large. An encoder-based model architecture with 24 layers, 1,024 hidden size, 16 attention heads, and 334M parameters.
* RoBERTalex. A version <cit.> of RoBERTa-base, fine-tuned for the Spanish legal domain.
* GPT2-base. A decoder-based model architecture with 12 layers, 768 hidden size, 12 attention heads, and 117M parameters.
We listed above the configurations reported in <cit.> for the open-source models available in the HuggingFace repository of the models.[https://huggingface.co/PlanTL-GOB-ES] The RoBERTa models <cit.> are versions of BERT models <cit.>, in which an optimized pre-training strategy and hyperparameter selection were applied compared to the original BERT pre-training. The Spanish versions of these models were pre-trained following the original RoBERTa configuration, with a corpus of 570 GB of clean Spanish written text. The RoBERTalex model is a fine-tuned version of Spanish RoBERTa-base, trained with a corpus of 8.9 GB of legal text data. On the other hand, GPT2 <cit.> is a decoder-based model of the GPT family <cit.><cit.><cit.><cit.>. As such, the model is aimed at generative tasks (note that modern versions of GPT models, such as InstructGPT <cit.> or GPT4 <cit.>, are fine-tuned to follow human instructions, so they cannot be considered generative models in the same way as earlier GPT models), unlike the RoBERTa family, which is specialized in text understanding. The version of GPT2 used was trained on the same corpus as the RoBERTa models. All the models use a byte-level BPE tokenizer <cit.> with a vocabulary size of 50,265 tokens, and have the same context window length, i.e., 512 tokens. While left padding is used in the RoBERTa models, right padding is advisable for the GPT2 model.
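As an illustration, a binary topic detector can be instantiated with the HuggingFace transformers library roughly as follows. The checkpoint identifier shown is only indicative of the MarIA RoBERTa-base model and may need to be replaced by the identifier actually used; freezing the backbone follows the training setup described later in the experiments.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "PlanTL-GOB-ES/roberta-base-bne"   # indicative identifier, see the repository above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

for param in model.roberta.parameters():       # freeze the backbone; only the head is trained
    param.requires_grad = False

batch = tokenizer(["Texto de una iniciativa parlamentaria..."],
                  truncation=True, max_length=512, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits             # shape: (batch_size, 2)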
§ EXPERIMENTS
As explained in Section <ref>, due to the nature of the dataset collected for this work, we address multi-label topic classification by training a binary topic classifier for each class (one vs all), and then aggregating the individual predictions in a versatile way (e.g., providing rank statistics, topics over a fixed threshold, etc.). Hence, our experiments focus on assessing the performance of different topic classifier configurations, and the potential of the newly available Spanish language models in unconstrained scenarios (i.e., multi-label political data, with subjective annotations based on private-market interest). Section <ref> first evaluates the performance of different transformer-based models on our dataset, and then explores the combination of the best-performing model with SVM and Random Forest classifiers.
We conduct all the experiments using a K-fold cross-validation setup with 5 folds, and report mean and standard deviation results across folds. We select True Positive Rate (TPR) and True Negative Rate (TNR) as our performance measures, due to the class imbalances in the parliamentary dataset. In our experiments we use the models available in the HuggingFace transformers library[https://huggingface.co/docs/transformers/index], along with several sklearn tools. Regarding the hardware, we conducted the experiments on a PC with 2 NVIDIA RTX 4090 GPUs (24 GB each), an Intel Core i9 CPU, and 32 GB of RAM.
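A minimal sketch of this evaluation protocol is shown below; whether the splits are stratified is our assumption, and clf_factory stands for any constructor returning a fresh topic detector.

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

def tpr_tnr(y_true, y_pred):
    """TPR and TNR from the binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

def cross_validate(clf_factory, X, y, n_splits=5, seed=0):
    """5-fold cross-validation reporting mean and deviation of (TPR, TNR)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = [tpr_tnr(y[te], clf_factory().fit(X[tr], y[tr]).predict(X[te]))
              for tr, te in skf.split(X, y)]
    return np.mean(scores, axis=0), np.std(scores, axis=0)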
§.§ Topic Classification in the Domain of Public Affairs
Recalling from Figure <ref>, our topic detector architecture is mainly composed of i) a transformer backbone, and ii) a classifier. We train the transformer models with a binary neural network classification output layer. For each topic, we train the detector using a Weighted Cross Entropy Loss to address the class imbalance in a “One vs All” setup. Topic classifiers are trained for 5 epochs using a batch size of 32 samples, with the transformer layers frozen. Table <ref> presents the results of the topic classifiers using the four transformer models explored in this work (i.e., RoBERTa-base <cit.>, RoBERTa-large <cit.>, RoBERTalex <cit.>, and GPT2-base <cit.>). We can observe a general behavior across the RoBERTa models. The classifiers trained for the topics with more samples obtain higher TPR means, close to the TNR mean values. In these cases, the classifiers are able to distinguish reasonably well text samples in which the trained topic is present. These results are, in general, consistent across folds, exhibiting moderate deviation values. This behavior degrades from Topic 9 onwards, where the low number of samples (i.e., less than 2K) leads to an increase of the TNR to values over 90%, with a decay in TPR. However, we can observe some exceptions in the classifiers using RoBERTa-base as backbone (topics 11, 12, 24), where TNR scales to values close to 100% while preserving TPR performance over 80%. Furthermore, RoBERTa-base classifiers exhibit better results than the RoBERTa-large classifiers (probably due to the constrained number of samples), and even than the RoBERTalex models. Remember that RoBERTa-base and RoBERTalex share the same architecture, the latter being RoBERTa-base fine-tuned to the legal domain, which, a priori, should make it more appropriate for the problem at hand. Regarding GPT2-based classifiers, we observe similar trends to those of the RoBERTa models, but with lower performance. This is not surprising, as the GPT model was trained for generative purposes, rather than text understanding like RoBERTa.
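As a concrete illustration of the class-weighted loss mentioned at the start of this subsection, the sketch below builds a weighted cross-entropy objective from the label counts of one topic; the inverse-frequency weighting is our assumption about how the imbalance was compensated, not necessarily the exact scheme used.

import torch
from torch import nn

def weighted_ce(labels: torch.Tensor) -> nn.CrossEntropyLoss:
    """Two-class cross-entropy weighted by inverse class frequency (one-vs-all labels)."""
    n_neg = max(int((labels == 0).sum()), 1)
    n_pos = max(int((labels == 1).sum()), 1)
    weights = torch.tensor([1.0 / n_neg, 1.0 / n_pos])
    return nn.CrossEntropyLoss(weight=weights / weights.sum())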
It's worth noting here the case of Topic 1, which obtains the lowest TNR mean value in all models, with deviation values over 0.15, despite being the topic with the most data samples (i.e., a third of the data). We hypothesize that the low performance when detecting negative samples is mostly due to the overlap with the rest of the topics, as this topic focuses on general healthcare-related aspects (remember from Table <ref> that half of the topics are related to healthcare).
From the results presented in Table <ref>, we can conclude that RoBERTa-base is the best model backbone for our task. Now, we want to assess whether a specialized classifier, such as Support Vector Machines (SVM) or Random Forests (RF), can be used to fine-tune the performance to the specific domain. For these classifiers, we used RoBERTa-base as a feature extractor to compute 768-dimensional text embeddings from each of the text samples. We explored two approaches for these embeddings: i) using the embedding computed for the [CLS] token, and ii) averaging all the token embeddings (i.e., mean pooling). In the original BERT model <cit.>, and hence the RoBERTa model, [CLS] is a special token prepended to the input, which the model uses during training for the Next Sentence Prediction objective. Thus, the output for this token is used for classification purposes, with the [CLS] embedding serving as a text representation. We repeated the experiment using both types of representations, and ended up selecting the first approach as it exhibited better results. Table <ref> presents the results of the topic models using RoBERTa-base text embeddings together with an SVM and a Random Forest classifier. In all cases, we use a complexity parameter of 1 and an RBF kernel for the SVM, and a max depth of 1,000 for the Random Forest. We note that these parameters can be tuned for each topic to improve the results.

The first thing we notice in Table <ref> is the poor performance of the RF-based classifiers, which are the worst among all the configurations. For almost all the topics under 2K samples, the TNR saturates to 1, and the TPR tends to extremely low values. From this, we can interpret that the classifier is not learning, and just predicts the negative, overrepresented class. Moreover, the performance on the topics over 2K samples is far from the one observed for the RoBERTa models of Table <ref>. This could be expected, as the RF classifier is not the best approach to work with input data representing a structured vector subspace with semantic meaning, such as text/word embedding subspaces, especially when the number of data samples is low. On the other hand, the SVM performance clearly surpasses all previous configurations in terms of TPR. While the results are comparable with those of RoBERTa-base with the NN classifier for the first 5 topics, this behavior is maintained for all topics, regardless of the number of data samples. Almost all classifiers achieve a TPR over 80%, except for topics 15, 17, and 18. Nevertheless, the results in these topics increase with the SVM (e.g., for topic 15, where RoBERTa-base with the NN classifier achieved a TPR mean of 61%, here we obtain 70%). TNR values are, in general, slightly lower, but this could be because, in the previous configurations, topic classifiers tend to exhibit a bias towards the negative class as the number of samples falls (i.e., similar to the behavior of the RF classifier). Interestingly, the high deviation observed in the Topic 1 TNR also appears in both SVM and RF classifiers, which could support our previous hypothesis. As we commented before, we suspect that hyperparameter tuning could further improve the SVM results on our data.
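For completeness, the embedding-plus-SVM pipeline discussed above can be sketched as follows; the checkpoint identifier is again only indicative, and the first output token of RoBERTa is taken as the [CLS] representation.

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

MODEL_ID = "PlanTL-GOB-ES/roberta-base-bne"   # indicative identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID).eval()

def cls_embeddings(texts):
    """768-d [CLS] embeddings from the frozen RoBERTa-base encoder."""
    batch = tokenizer(texts, truncation=True, max_length=512,
                      padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, 768)
    return hidden[:, 0, :].numpy()                    # first token acts as [CLS]

def fit_topic_svm(X, y):
    """One-vs-all SVM per topic: RBF kernel, complexity parameter C = 1."""
    return SVC(C=1.0, kernel="rbf").fit(X, y)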
§ CONCLUSIONS
This work applies and evaluates Large Language Models (LLMs) for topic classification in public affairs documents. These documents are of special relevance for both citizens and companies, as they contain the basis of all legislative updates, social programs, public announcements, etc. Thus, enhancing the analysis of public documents using the recent advances of the NLP community is desirable.
To this aim, we collected a Spanish text corpus of public affairs documents, using a regex-powered tool to process and annotate legislative initiatives from the Spanish Parliament over a capture period of more than 2 years. The raw text corpus is composed of more than 450K initiatives, with 92K of them annotated in a multi-label scenario with up to 385 different topics. Topic classes were defined by experts in public affairs regulations. We preprocessed this corpus and generated a clean version of more than 33K multi-label texts, including annotations for the 30 most frequent topics in the data.
We use this dataset to assess the performance of recent Spanish LLMs <cit.><cit.> on multi-label topic classification in the domain of public affairs. Our experiments include text understanding models (three different RoBERTa-based models <cit.>) and generative models <cit.>, in combination with three different classifiers (i.e., Neural Networks, Random Forests, and SVMs). The results show that text understanding models combined with SVM classifiers constitute an effective strategy for the topic classification task in this domain, even in situations where the number of data samples is limited.
As future work, we plan to study in more depth biases and imbalances <cit.> like the ones mentioned before presenting Figure <ref>, and to compensate for them with imbalance-aware machine learning procedures <cit.>. More recent LLMs can also be tested for this task, including multilingual and instruction-based models, which have shown great capabilities in multiple NLP tasks, even in zero-shot scenarios. We will also continue our research by exploring the incorporation of other NLP tasks (e.g., text summarization, named entity recognition) and multimodal methods <cit.> into our framework, with the objective of enhancing the automatic analysis of public affairs documents.
§ ACKNOWLEDGMENTS
This work was supported by VINCES Consulting under the project VINCESAI-ARGOS and BBforTAI (PID2021-127641OB-I00 MICINN/FEDER). The work of A. Peña is supported by a FPU Fellowship (FPU21/00535) by the Spanish MIU. Also, I. Serna is supported by a FPI Fellowship from the UAM.
|
http://arxiv.org/abs/2306.10520v1
|
20230618105352
|
RetinexFlow for CT metal artifact reduction
|
[
"Jiandong Su",
"Ce Wang",
"Yinsheng Li",
"Kun Shang",
"Dong Liang"
] |
eess.IV
|
[
"eess.IV",
"cs.CV"
] |
[email protected]
[1]
Shenzhen Insititute of Advanced Technology, CAS
P.O. Box 1212
Shenzhen
Guangdong
China
518034
[email protected]
[1]
Institute of Computing Technology, CAS
P.O. Box 1212
Beijing
Beijing
China
[email protected]
Shenzhen Insititute of Advanced Technology, CAS
P.O. Box 1212
Shenzhen
Guangdong
China
518034
[email protected]
Shenzhen Insititute of Advanced Technology, CAS
P.O. Box 1212
Shenzhen
Guangdong
China
518034
[email protected]
Shenzhen Insititute of Advanced Technology, CAS
P.O. Box 1212
Shenzhen
Guangdong
China
518034
Metal artifacts are a major challenge in computed tomography (CT) imaging, significantly degrading image quality and making accurate diagnosis difficult. However, previous methods either require prior knowledge of the location of metal implants, or deviate from the actual mechanism of artifact formation, which limits their ability to obtain high-quality CT images. In this work, we formulate the metal artifact reduction problem as a combination of decomposition and completion tasks. We propose RetinexFlow, a novel end-to-end image-domain model based on Retinex theory and conditional normalizing flow, to solve it. Specifically, we first design a feature decomposition encoder for decomposing the metal implant component and the inherent component, and for extracting the inherent feature. Then, a feature-to-image flow module completes the metal artifact-free CT image step by step through a series of invertible transformations. These designs are incorporated into our model with a coarse-to-fine strategy, enabling it to achieve superior performance. Experimental results on simulated and clinical datasets show that our method achieves better quantitative and qualitative results, exhibiting better visual performance in artifact removal and image fidelity.
[1000]Computing Methodologies Network
[500]Computing Methodologies Reconstruction
RetinexFlow for CT metal artifact reduction
Dong Liang
July 31, 2023
===========================================
§ INTRODUCTION
Computed tomography (CT) is an indispensable imaging technology that assists clinical decision-making for medical diagnosis and treatment with high-quality anatomical representations of the human body. However, metallic implants inserted into the patient's body, such as dental fillings and hip prostheses, lead to corrupted information in X-ray projections (sinograms) and cause undesirable star-shaped or streak artifacts in the reconstructed CT images <cit.>. These artifacts not only obscure anatomical details and affect clinical diagnosis, but also make dose calculation problematic in radiation therapy, limiting the diagnostic value of CT scans <cit.>. With the widespread use of metallic implants, how to reduce metal artifacts has become an important problem, which gains increasing attention from the CT community.
Numerous metal artifact reduction (MAR) methods have been proposed in the past decades. Since the metal artifacts in CT images have non-local, structured characteristics, previous MAR methods <cit.> mainly focus on the sinogram domain by modeling the physical effects of the presence of high atomic number metals. However, the metal trace regions in the sinogram domain are often so severely corrupted that these methods are limited in achieving satisfactory results. The other perspective regards the metal trace regions as missing areas and fills them by linearly interpolating with the adjacent unaffected projection views <cit.>. As these methods cannot accurately recover the metal trace information, the inconsistency between interpolated values and the unaffected values often causes secondary artifacts in CT images. In addition, several works <cit.> recover the affected sinogram by estimating the prior information of various tissues from another uncorrupted image. Recently, some deep learning methods <cit.> have been proposed to directly learn a mapping function from the sinogram domain to the artifact-reduced image. We refer to these methods working in the sinogram domain as sinogram-domain enhancement (SE) methods. Despite the success achieved by the above SE methods, there are still significant limitations due to the requirement of metal trajectories and the newly generated artifacts in reconstructed CT images. In practice, it is difficult to obtain the location of metal implants, and the additional secondary artifacts make MAR ineffective.
Meanwhile, some researchers <cit.> consider MAR as an image-domain restoration (IR) problem, and reduce the metal artifacts with image-to-image translation networks, which no longer rely on the position information of the metals. For example, Huang et al. <cit.> introduce deep residual learning to reduce metal artifacts in cervical CT images. Wang et al. <cit.> propose to use the conditional generative adversarial network (cGAN) <cit.> to reduce metal artifacts in CT images. Then, Liao et al. <cit.> introduce an artifact disentanglement network that disentangles the metal artifacts from CT images in the latent space by unsupervised learning. Recently, Lin et al. <cit.> develop a dual-domain learning method to improve the performance of MAR by involving sinogram enhancement as a procedure. Wang et al. <cit.> propose a deep interpretable convolutional dictionary network for MAR, which uses LI <cit.> to enhance the sinogram by considering the non-local repetitive streaking priors of metal artifacts. As there are no physical priors to regularize the models, unreasonable patterns of anatomical structures and image contrast appear in the recovery, which limits the usage of image-domain methods in real clinical scenarios.
In this work, we propose a novel image-domain method, named RetinexFlow, for reducing metal artifacts in reconstructed CT images. Different from the previous SE/IR methods, we formulate MAR as a combination of decomposition and completion tasks. For the decomposition task, we design a feature decomposition encoder based on Retinex theory <cit.>, which bridges the physical prior modeling missing in previous works. It decomposes the variant component and the inherent component in a CT image, and then extracts the inherent feature. For the completion task, we convert it into a distribution transformation task, and design a conditional feature-to-image flow module to complete the metal artifact-free CT image step by step through a series of invertible transformations. Since the transformation works at the distribution level, it does not depend on the information of the metal implants in every image to assist the artifact removal, greatly reducing the interaction complexity.
To sum up, our contributions are as follows:
* We formulate MAR as a combination of decomposition and completion tasks. To avoid two-stage training for decomposition and completion separately, we propose an end-to-end conditional learning framework in a coarse-to-fine way.
* Inspired by the Retinex theory, the CT image is decomposed into an inherent component and a variant component. We design the feature decomposition encoder to coarsely extract the inherent feature of the object itself in the CT image.
* We further use a normalizing flow to refine the result in feature space, rather than directly processing the image or sinogram domains. It progressively narrows down the solution space, resulting in the cleanest solution.
* The quantitative and qualitative results on the simulated DeepLesion dataset demonstrate that RetinexFlow is capable of removing artifacts while preserving anatomical details. Moreover, when tested on a clinical CT pelvic dataset from different anatomies, our method shows better generalization performance and is effective in removing artifacts.
§ BACKGROUND AND MOTIVATION
§.§ Problem formulation
Different human body tissues have different X-ray attenuation coefficients μ. Considering the 2D CT image, we use X = μ(x, y) to represent the anatomy structure. According to the Lambert-Beer law <cit.>, with a polychromatic X-ray source, sinograms Y of anatomy structures are determined by the following model with energy distribution η(E):
Y = -log ∫η(E)exp{-PX(E)}dE,
where P denotes the forward projection operator. In practical CT imaging, we recover the 2D image X(E) from the measured sinograms Y. Normally, without metal implants, X(E) is approximately constant with respect to the X-ray energy E, and therefore X = X(E). The reconstruction X^† can then be inferred with various imaging algorithms P^† <cit.>. When metal implants M(E) exist, X(E) suffers from large variations and X = X(E) + M(E). Thus, equation (<ref>) becomes:
Y = - log ∫η(E)exp{-PX(E) - PM(E)}dE.
In this case, if we still back-project the corrupted sinograms and reconstruct with P^†, we obtain
X_M^† = P^† Y = X^† - P^†log∫η(E) exp{ -PM(E) }dE.
Thus, the reconstruction error between X_M^† and X is formulated as
e_M = X_M^† - X.
If the projection data is consistent, then e_M should be close to 0 over the entire object area. However, when the projection data is corrupted, we will not be able to obtain accurate reconstruction results. Clearly, in this case, e_M will show a significant deviation over the entire object area. As shown by the mean value of each line in the obtained sinograms with and without metals in Fig. <ref>, when there are metal implants in the human body, the number of photons reaching the detector is greatly attenuated. Similarly, the numerical values of the sinograms undergo significant attenuation, manifested as streak artifacts or black shadows in the image.
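As a toy illustration of the polychromatic model above, the following sketch discretizes the energy integral with a generic forward projector (the energy grid, the normalized spectral weights η(E_k), and the use of scikit-image's radon transform as a stand-in for P are our assumptions, not the simulation pipeline used in the experiments):

import numpy as np
from skimage.transform import radon   # generic forward projector used as P here

def polychromatic_sinogram(x_of_E, eta, angles):
    """x_of_E: list of attenuation images X(E_k), one per energy bin.
    eta: spectral weights eta(E_k), assumed normalized so that sum(eta) == 1.
    Returns Y = -log sum_k eta_k * exp(-P X(E_k)), the discretized model above."""
    acc = 0.0
    for x_k, w_k in zip(x_of_E, eta):
        p_k = radon(x_k, theta=angles, circle=False)   # P X(E_k)
        acc = acc + w_k * np.exp(-p_k)
    return -np.log(acc)

# With metal implants, each X(E_k) would be replaced by X(E_k) + M(E_k).
angles = np.linspace(0.0, 360.0, 640, endpoint=False)  # 640 views, as in the simulation setup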
§.§ Motivation
To minimize e_M, SE methods fill the metal-corrupted areas with estimated values, but the required prior knowledge of the metal trace makes them practically ineffective. Besides, directly correcting the sinograms, especially the extra peak shown in Fig. <ref>, across metals of different sizes and shapes is difficult. The IR methods formulate the MAR problem as an image restoration problem. Such methods define a linear relationship between image content and metal artifacts, and use a disentanglement procedure to separate the two components to obtain a corrected image. However, unreasonable patterns of anatomical structures and image contrast appear in the restored image when no physical priors are used. Some researchers have attempted to combine both domains by introducing sinogram enhancement into the image domain to improve network performance. However, these methods still directly use the image-domain improved output as the final reconstructed image, which may result in anatomical structure changes in the output image.
In addition, notice that the anatomical structure of CT image content is much less varied than natural image content, making it easier to overfit to a particular size or shape of metal. Moreover, due to the presence of black shadow areas in the image, especially when facing large or irregular metal implants (some examples are shown in Fig. <ref>), it is difficult to solve the missing-value filling problem.
All of the above discussions motivate us to find a new solution for MAR. We regard MAR as a combination of decomposition and completion tasks, and further model it in the image domain without depending on the sinogram.
§ METHOD
Consider the MAR problem in the image domain. As shown in Fig. <ref>, different metal implants present different white spots with different metal artifacts in the reconstructed CT image. Inspired by Retinex theory <cit.>, which assumes that images of the same scene under different lighting conditions can have different illumination components while sharing the same reflectance component, we regard the metal implant as a "light source" in a CT image. A metal-artifact CT image X is then formulated as:
X = L ⊙ R,
where L represents the illumination component decided by metal implants, R represents the reflection component of the inherent properties of the object itself, and ⊙ means the element-wise product.
If such a decomposition of the CT image is accurately estimated, the corresponding metal artifact-free image can be obtained. However, the metal artifact-free component R_est estimated by existing methods <cit.> may contain unexpected degradations, such as noise and contrast biases. To further improve the estimation R_est, we propose the RetinexFlow model, shown in Fig. <ref>, which obtains the cleanest solution with the greatest conditional probability in a coarse-to-fine way. It contains two main modules: the first decomposes L and R and gives a coarse estimation R_coarse; then, a conditional normalizing flow refines R_coarse to obtain the cleanest metal artifact-free image Y_est.
§.§ Feature decomposition encoder
We design a preliminary feature decomposition encoder (FDE), which reduces the search over the feasible artifact-free solution space. First, we apply a normalization operator to the input (metal-artifact) image X to suppress the influence of metal artifacts and other noise:
N = dX/∑_k=1^d [vec(X)]_k,
where vec(·) denotes the vectorization operator and d is the dimension of vec(X). Second, in order to preserve more edge and structure details, we further compute the vertical and horizontal gradients of N as the feature S:
S= concat(∇_c N, ∇_r N),
where concat(·) is a concatenation operator. Finally, concat(X, N, S) is used as the input of an RRDB <cit.>. Notice that the dimension of concat(X, N, S) is larger than that of the original input; we remove the upsampling layer in the RRDB <cit.> to keep the same output dimension, and denote this variant as the modified RRDB (mRRDB). Thus, the first estimate of the metal artifact-free component, R_coarse = g_θ(X), is obtained, and the structure of the FDE is shown in Fig. <ref> (a).
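A minimal PyTorch sketch of this preprocessing step (tensor shapes, the finite-difference gradient operator, and the padding choices are our assumptions; the mRRDB backbone itself is omitted):

import torch
import torch.nn.functional as F

def fde_input(x):
    """x: metal-artifact CT image, shape (B, 1, H, W).
    Returns concat(X, N, S) with 4 channels, to be fed to the mRRDB encoder."""
    b, c, h, w = x.shape
    d = c * h * w
    # N = d * X / sum(vec(X)), computed per image
    denom = x.reshape(b, -1).sum(dim=1).view(b, 1, 1, 1)
    n = d * x / (denom + 1e-8)
    # vertical / horizontal finite-difference gradients of N (zero-padded to keep H x W)
    grad_r = F.pad(n[:, :, 1:, :] - n[:, :, :-1, :], (0, 0, 0, 1))
    grad_c = F.pad(n[:, :, :, 1:] - n[:, :, :, :-1], (0, 1, 0, 0))
    s = torch.cat([grad_c, grad_r], dim=1)
    return torch.cat([x, n, s], dim=1)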
§.§ Completion flow
Given the coarse estimate R_coarse = g_θ(X) of the metal artifact-free component, the metal artifact-free image can be recovered from it. To obtain the cleanest solution, the next task is the completion task. We design a feature-to-image multi-scale conditional flow module, named completion flow (CF), based on Glow <cit.> and Real-NVP <cit.>, to restore the metal artifact-free CT image Y_est.
Based on the change-of-variable formula, flow-based methods <cit.> map the distribution of images to a simple prior distribution, which realizes an exact conversion between the latent feature space and the image space through well-designed reversible network structures. Our CF module has a multi-scale structure consisting of L levels for modeling a one-to-many mapping between a feature and its feasible solution space (image), which is shown in Fig. <ref> (b).
Concretely, the distribution p_Y|R_coarse(Y|R_coarse,θ) is modeled with an invertible network f_θ that maps data pairs (R_coarse,Y) to latent variables Z=f_θ(Y;R_coarse). Since the network is reversible, we can always accurately reconstruct Y_est=f_θ^-1(Z;R_coarse) from the latent variable Z.
At each scale, to facilitate information exchange along the channel dimension, a squeezing layer is used to compress the input by trading spatial dimensions for channels. Each level performs its operations in series, and a single-scale operation consists of K reversible flow steps, which perform more refined reasoning. Concretely, a flow step consists of three components: actnorm, a 1 × 1 invertible convolution, and a coupling layer.
* Actnorm normalizes the input;
* The 1 × 1 convolution acts as a learned permutation, mixing information along the channel dimension;
* The coupling layer introduces non-linearity, and composing multiple coupling layers gives the model stronger representational power (a minimal sketch of the coupling layer is given after this list).
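The following is a minimal sketch of the conditional affine coupling used inside a flow step (actnorm and the frozen 1 × 1 convolution are omitted; the way the conditioning feature is injected, the layer sizes, the tanh-bounded scale, and the assumption of an even channel count are illustrative choices, not the exact architecture):

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Split channels in half; predict (log-scale, shift) for the second half
    from the first half concatenated with the conditioning feature R_coarse."""
    def __init__(self, channels, cond_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1))   # outputs (log_s, t)

    def forward(self, x, cond):
        x_a, x_b = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x_a, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                        # keep scales bounded for stability
        y_b = x_b * torch.exp(log_s) + t
        logdet = log_s.flatten(1).sum(dim=1)             # per-sample log|det J| contribution
        return torch.cat([x_a, y_b], dim=1), logdet

    def inverse(self, y, cond):
        y_a, y_b = y.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([y_a, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x_b = (y_b - t) * torch.exp(-log_s)
        return torch.cat([y_a, x_b], dim=1)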
§.§ Loss function
Given a large number of training pairs (X_i, Y_i) (where i is the sample index) of images with and without metal artifacts, we define the MAR problem as the conditional probability distribution problem of learning to recover metal-free images Y from metal-artifact images X by minimizing the negative log-likelihood (NLL) loss:
ℒ(θ; X_i, Y_i) = -log p_Y_i|g_θ(X_i),θ(Y_i)
= -log p_z(f_θ(Y_i; g_θ(X_i))) - log| det ∂ f_θ/∂ Y_i(Y_i; g_θ(X_i)) |,
where g_θ(·) is the FDE, and |det ∂ f_θ/∂ Y_i| is the Jacobian determinant, which accounts for the density change induced by the reversible network f_θ.
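Assuming the completion flow returns the latent variable together with the accumulated log-determinant, the per-sample NLL above can be evaluated as follows (a sketch with a standard-normal prior p_z; variable names are ours):

import math
import torch

def nll_loss(z, logdet):
    """z: latent tensor (B, C, H, W); logdet: (B,) accumulated log|det df/dY|."""
    d = z[0].numel()
    log_pz = -0.5 * (z.flatten(1) ** 2).sum(dim=1) - 0.5 * d * math.log(2 * math.pi)
    return (-(log_pz + logdet)).mean()   # minimize -log p(Y | R_coarse)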
§ EXPERIMENT
To verify the effectiveness and generalization of our method, we evaluate it on both synthetic and clinical data. In addition, we also provide a thorough discussion of RetinexFlow itself in this section.
§.§ Dataset and Experimental Setup
Dataset. We use the DeepLesion <cit.> dataset for training and validation. Specifically, we randomly select 4200 clean CT images from DeepLesion <cit.> to synthesize metal-corrupted images, where 4000 images are used for training and the other 200 images for testing. Then, following the procedures of Zhang et al. <cit.>, 100 metal masks of different sizes and shapes are generated for corruption synthesis, of which 90 masks are used in training and 10 masks in testing. The simulation is conducted in a fan-beam geometry with 640 projections uniformly spaced between 0 and 360 degrees. All these CT images are of size 416×416.
To further demonstrate the generalization and clinical value of our method, we choose the CT pelvic1K dataset <cit.> for testing, which contains many real metal-artifact images. We extracted 230 2D CT images with metal implants from a 3D sequence, and each image has a size of 512 × 512.
Compared methods. We compare our method with several state-of-the-art CT MAR methods, including traditional iterative SE methods (LI <cit.> and NMAR <cit.>), an IR method based on a deep generative model (ADN <cit.>), and IR methods based on dual-domain learning (DuDoNet++ <cit.>, DICDNet <cit.>). Among these methods, LI <cit.>, NMAR <cit.>, and DICDNet <cit.> require prior information about the metal implant as a constraint, while ADN <cit.>, DuDoNet++ <cit.>, and our RetinexFlow only require a single CT image as input.
Implementations.
We implement all the experiments with the PyTorch framework. During training, models are trained for 50 epochs on a single NVIDIA A6000 GPU with a learning rate of 2×10^-4 and a batch size of 6. We use the Adam optimizer <cit.> with parameters (β1, β2) = (0.5, 0.999). For both the synthetic and clinical experiments, we set the flow level number L=3, the flow-step number K=6, and the hidden channel number c=64 in the CF module, and freeze the 1 × 1 reversible convolution for stable training.
Evaluation Metrics. Images are quantitatively evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) <cit.>.
§.§ Results on simulation data
Quantitative performance
The quantitative comparison results of our proposed method and other methods on the simulation data are shown in Table <ref>. It is obvious that the deep-learning-based methods outperform traditional MAR methods in terms of PSNR and SSIM, indicating the superiority of data-driven methods for the MAR problem. In particular, the dual-domain-learning-based DICDNet performs better than the others by incorporating LI correction results as reference values. With the introduction of Retinex theory to model the physical prior, our method further improves the quantitative performance, even though we only require image-domain input and do not need the metal trace as input. Specifically, we achieve a 4.21 dB improvement in terms of PSNR and a slight improvement in terms of SSIM over the second-best DICDNet, demonstrating the effectiveness of such Retinex-inspired modeling for the MAR problem.
Qualitative performance
We further show the visual comparisons between our method and the comparative methods on the simulation data in Fig. <ref>. To enhance the display effect, the simulated metal implants are masked in red (all subsequent experiments are presented using this approach). Traditional methods like LI <cit.> and NMAR <cit.> cannot accurately reconstruct the anatomical structure of the metal implants and the surrounding bone and soft tissues,
while our method can achieve this. When the metal implants are relatively large, both ADN <cit.> and DuDoNet++ <cit.> fail to preserve the details of the original image well. Although DICDNet <cit.> takes LI <cit.> results as inputs, the method introduces secondary artifacts, which is clinically undesirable. In contrast, our method can remove most of the streaking artifacts while effectively filling the black shadow regions. Although RetinexFlow is an image-domain method that does not utilize the LI <cit.> results as references, the Retinex-inspired learning helps correct the contrast between anatomies, avoiding secondary artifacts while retaining the original structure details.
§.§ Ablation studies
Next, we discuss the effectiveness of several designs of our proposed method. All experiments are conducted on the simulation data to allow for quantitative and qualitative analysis.
§.§.§ Why we use the FDE?
To illustrate why we use the designed FDE as the head of the flow module, we first give a toy example. For comparison, a modified network without the FDE, which still includes the mRRDB encoder, is used to demonstrate the effectiveness of the FDE. The quantitative results of the two methods are shown in Table <ref>. It can be observed that the network trained with the FDE module performs better, with a 1.98 dB improvement in terms of PSNR, indicating the effectiveness of the proposed FDE module.
The corresponding visual result is shown in Fig. <ref>. Some artifacts remain in the CT image processed by the flow network with only the convolutional encoder mRRDB (without the FDE), and the completion of the black shadow region is not satisfactory. Both the quantitative and qualitative results indicate the effectiveness of the FDE in reducing metal artifacts.
§.§.§ Influence of Flow-step Number
The CF is an important module for refining the inherent feature and transforming it into the metal artifact-free image, and it consists of multiple non-linear flow steps. Here, we compare the impact of different numbers of flow steps in the RetinexFlow model, and show the results in Table <ref>. We observe that the performance of RetinexFlow consistently increases as we enlarge the number of flow steps from 1 to 6. Within a certain range, the larger the number of flow steps, the better the quantitative performance. The same phenomenon is also shown in Fig. <ref>: as the CF gets deeper, the processing of features gets more refined.
§.§.§ Freeze 1 × 1 invertible convolution or not
Based on Glow <cit.>, some previous works <cit.> employ a 1 × 1 invertible convolution layer before each affine coupling layer to mix information. However, such an operation introduces instability during model training. To better use the 1 × 1 invertible convolution layer, we further explore whether to freeze it. As shown in Table <ref>, when we freeze the 1 × 1 invertible convolution layer, the model achieves a 0.8 dB improvement over the non-frozen one in terms of PSNR. Thus, we freeze the 1 × 1 invertible convolution layer, since learning it seems not suitable for our task.
§.§.§ Influence of hidden channels
To ablate the model width, we train our network with different numbers of hidden channels in the two conditional layers, as shown in Table <ref>. Decreasing the number of hidden channels leads to more artifacts in complex structures, while a larger number of channels leads to better image quality for CT MAR. Therefore, we set the number of hidden channels to 64 in our model.
§.§ Results on clinical data
In order to further verify the generalization and clinical value of the proposed RetinexFlow model, we next evaluate its performance on the clinical CT pelvic1K dataset <cit.>, which contains many real metal-artifact images. As there are no available ground truths, we only perform qualitative comparisons. Notice that all the deep-learning-based methods are trained on the DeepLesion <cit.> dataset, not on the clinical CT pelvic1K dataset <cit.>.
As shown in Fig. <ref>, we find that traditional methods such as LI <cit.> and NMAR <cit.> remove certain metal artifacts, but they introduce secondary artifacts when completing the projection domain. Among the deep-learning-based methods, ADN <cit.> removes most of the streaks and dark shadows, but changes the image sharpness and still leaves residual streak-like artifacts. Although DICDNet <cit.> can preserve the image structures well, it also introduces secondary artifacts in the final results because of the utilization of LI. In contrast, with the Retinex-inspired learning, the image contrast prior is physically modeled, and the coarse-to-fine processing of RetinexFlow ensures that no new artifacts are introduced. Therefore, RetinexFlow removes most of the streaks and dark shadows while preserving most details of the original image.
§ CONCLUSION
In this work, we formulate the metal artifact reduction problem as a combination of decomposition and completion tasks. We propose a novel end-to-end image-domain model based on Retinex theory and conditional normalizing flow, named RetinexFlow, to solve it. To obtain the cleanest metal artifact-free image, the coarse-to-fine RetinexFlow first decomposes the metal implant component and the inherent structure component, and then refines the extracted inherent feature into the cleanest metal artifact-free image. Experimental results on simulated data indicate that the proposed method achieves the best performance, both quantitatively and qualitatively. Although only simulated data is used for training, our method shows superior generalization ability on clinical pelvic data. Our future work will focus on more complex scenarios in real situations, as well as further reducing the workload in actual production in an unsupervised/self-supervised manner.
|
http://arxiv.org/abs/2306.05347v1
|
20230608164952
|
First constraints on the strength of the extragalactic magnetic field from $γ$-ray observations of GRB 221009A
|
[
"Timur A. Dzhatdoev",
"Egor I. Podlesnyi",
"Grigory I. Rubtsov"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
First constraints on the strength of the extragalactic magnetic field from γ-ray observations of GRB 221009A
Timur A. Dzhatdoev, Egor I. Podlesnyi, Grigory I. Rubtsov
July 31, 2023
====================================================================================================
The extragalactic magnetic field (EGMF) could be probed with γ-ray observations of distant sources. Primary very high energy (VHE) γ-rays from these sources are absorbed on extragalactic background light photons, and secondary electrons/positrons from the pair production acts create cascade γ-rays. These cascade γ-rays could be detected with space γ-ray telescopes such as Fermi-LAT. The γ-ray burst GRB 221009A was an exceptionally bright transient well suited for intergalactic γ-ray propagation studies. Using publicly available Fermi-LAT data, we obtain upper limits on the spectrum of delayed emission from GRB 221009A during the time window of 30 days after the burst, and compare these with model spectra calculated for various EGMF strengths B, obtaining lower limits on B. We show that the values of B ≤ 10^-18 G are excluded. For some optimistic models of the VHE spectrum of GRB 221009A, the values of B ≤ 10^-17 G are excluded.
gamma-ray burst: individual: GRB 221009A — magnetic fields — gamma-rays: general — methods: data analysis — methods: numerical
§ INTRODUCTION
GRB 221009A, an exceptionally bright <cit.> and relatively nearby (redshift z = 0.1505 <cit.>) γ-ray burst, has been detected with the WCDA and KM2A arrays of the LHAASO experiment in the energy range E > 500 GeV <cit.>. In particular, the detection of γ-rays above the energy of 10 TeV from GRB 221009A was reported.
TeV γ-rays from GRB 221009A are strongly absorbed on extragalactic background light (EBL) photons by means of the pair production (PP) process (γγ→ e^+ e^-). The secondary electrons and positrons[hereafter collectively called "electrons"] produced in the PP acts are deflected by the extragalactic magnetic field (EGMF); these secondary electrons then produce cascade γ-rays by means of inverse Compton (IC) scattering (e^-γ→ e^-'γ^' or e^+γ→ e^+'γ^'). The energy, angular, and temporal characteristics of this cascade γ-ray echo are sensitive to the EGMF strength and structure, thus allowing the EGMF to be probed with observations of extragalactic sources <cit.>.
Several GRBs were detected in the very high energy (VHE) domain before GRB 221009A <cit.>. For one of them, GRB 190114C, it was shown that the intensity of the cascade γ-ray echo is below the sensitivity of the operating telescopes even for the EGMF strength B = 0 <cit.> (hereafter D20), and this conclusion was confirmed by <cit.>. <cit.> show that for GRB 130427A the cascade echo is detectable with the existing γ-ray telescopes for B > 10^-18 G under certain optimistic assumptions on the high intensity of this GRB in the VHE domain. Unfortunately, GRB 130427A was not detected at TeV energies and thus the latter constraints on B remain conjectural.
In this Letter, we report on the constraints on the EGMF strength from γ-ray observations of GRB 221009A with LHAASO <cit.> and Fermi-LAT <cit.>. We describe our analysis of Fermi-LAT data in Section <ref>. The constraints on B are reported for two different shapes of the primary γ-ray transient spectrum: 1) smoothly broken power law (SBPL) based exclusively on Fermi-LAT measurements and 2) log-parabolic (LP) spectrum combining information from both Fermi-LAT and LHAASO (see Section <ref>). The intergalactic cascade pair echo calculation procedure is outlined in Section <ref>. The main results are presented in Section <ref>; then follows a brief discussion (Section <ref>) and conclusions (Section <ref>). Appendix <ref> and Appendix <ref> contain additional information on the data analysis, simulations and results.
§ FERMI-LAT DATA ANALYSIS
We select Fermi-LAT data within 90 days of observation, starting at the Fermi-GBM trigger time T_0 <cit.>. We reconstruct the spectral energy distribution (SED=E^2dN/dE) of GRB 221009A in the time window from T_0 to T_0 + δ T_L, where δ T_L = 2 × 10^3 s is the duration of the LHAASO observation of the source according to <cit.>. This SED is shown in Fig. <ref> as red circles with statistical uncertainties. Some details of this analysis are presented in Appendix A.
We derive upper limits (95 % C.L.) on the SED of the cascade γ-ray echo from GRB 221009A, starting at T_0 + δ T_A, where δ T_A = 2×10^5 s is an approximate duration of the γ-ray afterglow of GRB 221009A visible with Fermi-LAT <cit.>, and ending at T_0 + δ T_A + δ T_E, with δ T_E = 30 days. The results for the upper limits are shown in Fig. <ref> (red horizontal bars with downward arrows). Some details of this analysis are presented in Appendix A as well. The comparison of the upper limits for δ T_E = 10, 30 and 90 days is shown in Fig. <ref> (see Appendix A).
§ THE PRIMARY γ-RAY SPECTRUM OF GRB 221009A
While we eagerly await a refereed publication of the spectrum of GRB 221009A over the time window from T_0 to T_0 + δ T_L following the announcement made by <cit.>, we already have some means of constraining the primary γ-ray spectrum of this transient. We fit the Fermi-LAT spectrum shown in Fig. <ref> with a power-law function, obtaining the best-fit index γ_1 = 1.56. The broadband γ-ray spectrum of GRB 221009A could be characterised with the following SBPL function:
dN/dE = K_s( E/E_s)^-γ_1[1+ ( E/E_b)^ϵ]^-(γ_2-γ_1)/ϵ,
where K_s = 3.36 × 10^-2 TeV^-1cm^-2s^-1 is the normalization factor, γ_2 = 2, E_b= 10 GeV, ϵ = 1, and E_s = 422 MeV is the reference energy. This option of the primary spectrum serves to represent the minimal VHE γ-ray intensity case. The corresponding SED is shown in Fig. <ref> as black curve.
<cit.> reported the observation of more than N_γ = 5×10^3 γ-rays from GRB 221009A in the energy range between 500 GeV and 18 TeV. Assuming N_γ = 5×10^3, β≥ 0 and taking the effective areas of the WCDA and KM2A arrays according to the Supplementary Information for <cit.>, we estimate the parameters of the LP spectrum as follows:
dN/dE = K_l( E/E_l)^-α-βln(E/E_l),
where K_l = 3.12 × 10^-2 TeV^-1cm^-2s^-1, α = 1.57, β = 0, and the reference energy E_l = 1.33 GeV. In this case the best fit is a pure power-law function[a particular case of the LP function] shown in Fig. <ref> as the green line. Finally, we consider another option of the primary spectrum with K_l = 3.34 × 10^-2 TeV^-1cm^-2s^-1, α = 1.30, β = 4 × 10^-2, and E_l = 1.33 GeV (shown in Fig. <ref> as the blue curve). While performing the fitting for the latter two cases we added the following additional constraint on the fit: N_γ - δ N_γ < N_γ-fit < N_γ + δ N_γ with δ N_γ = √(N_γ), where N_γ-fit is the estimated number of γ-ray events with energy E > 500 GeV that would be registered with the LHAASO detector.
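For reference, both intrinsic-spectrum options can be evaluated directly from the expressions above; a short sketch with the quoted parameter values (the choice of TeV as the energy unit is our convention):

import numpy as np

def sbpl(E, K_s=3.36e-2, E_s=4.22e-4, g1=1.56, g2=2.0, E_b=0.01, eps=1.0):
    """Smoothly broken power law, dN/dE in TeV^-1 cm^-2 s^-1; E in TeV (E_s = 422 MeV, E_b = 10 GeV)."""
    return K_s * (E / E_s) ** (-g1) * (1.0 + (E / E_b) ** eps) ** (-(g2 - g1) / eps)

def logparabola(E, K_l=3.34e-2, E_l=1.33e-3, alpha=1.30, beta=4e-2):
    """Log-parabolic spectrum, dN/dE in TeV^-1 cm^-2 s^-1; E in TeV (E_l = 1.33 GeV).
    The pure power-law option corresponds to K_l=3.12e-2, alpha=1.57, beta=0."""
    return K_l * (E / E_l) ** (-alpha - beta * np.log(E / E_l))

E = np.logspace(-4, 2, 200)          # 100 MeV .. 100 TeV
sed_sbpl = E ** 2 * sbpl(E)          # SED = E^2 dN/dE
sed_lp = E ** 2 * logparabola(E)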
§ SIMULATION OF THE PAIR ECHO FROM GRB 221009A
We calculate the observable SED of the intergalactic cascade pair echo using the publicly available code ELMAG3.03 <cit.> in the time window from T_0 + δ T_A to T_0 + δ T_A + δ T_E. The maximal energy of the primary γ-rays is set to 100 TeV. The general scheme of calculations follows D20.
We assume the EBL model of <cit.>. As in D20, the EGMF was modeled as isotropic random nonhelical turbulent field with a Kolmogorov spectrum and Gaussian variance B_RMS (hereafter simply B) following the approach of <cit.>. The minimal and maximal EGMF spatial scales were set as L_min = 5 × 10^-4 Mpc and L_max = 5 Mpc[this corresponds to the coherence length of L_c≈ 1 Mpc], respectively, with 200 field modes in total. Full three-dimensional simulation was employed. We neglect collective (plasma) energy losses for cascade electrons <cit.>. For sufficiently large values of B, the width of the pair echo's observable angular distribution θ_obs is comparable to or larger than the width of the point spread function (PSF) of Fermi-LAT θ_PSF. This could affect the reconstructed point-like spectrum of the source. In what follows we neglect the latter effect since θ_obs≪θ_PSF for the values of B ≲ 10^-18 G <cit.>.
§ RESULTS
§.§ The SBPL primary spectrum option
The simulated γ-ray spectra of the cascade echo for δ T_E = 30 days are shown in Fig. <ref> for B = 10^-19 G (black curves), B = 10^-18 G (green curves) and B = 10^-17 G (blue curves). An additional sharp cutoff in the primary γ-ray spectrum was introduced; solid curves correspond to the cutoff energy E_c = 20 TeV, short-dashed curves to E_c = 10 TeV, and long-dashed curves to E_c = 100 TeV (see Section <ref>). We conclude that the case of B = 10^-18 G is excluded even for E_c = 10 TeV. A plot including additional EGMF strength values in the range from B = 10^-21 G to B = 3 × 10^-17 G for the case of E_c = 20 TeV is presented in Appendix <ref> (see Fig. <ref>), leading to the exclusion of the range of B values from B = 2 × 10^-21 G to B = 2 × 10^-18 G. The option of B < 2 × 10^-21 G is already excluded, as follows from the negative results of the search for the cascade echo from blazars <cit.>.
§.§ The LP primary spectrum option
Results similar to those presented in Fig. <ref> are shown in Fig. <ref> and Fig. <ref> for the case of the power-law and log-parabolic primary spectra (green line and blue curve in Fig. <ref>, respectively). In these cases, the values of (approximately) B < 10^-17 G could be excluded.
§ DISCUSSION
The obtained results are directly relevant for a large-scale EGMF with the coherence length λ >10-100 kpc. In this case the typical electron energy loss length L_E-e < λ <cit.>. In the opposite case of a “turbulent” EGMF (L_E-e > λ) the resulting limits on B become even stronger <cit.>.
The obtained constraints depend on the assumed EBL model. Similar to D20, we performed calculations of the pair echo spectrum for a modified EBL model with the intensity normalization factor K_EBL= 0.7. The resulting borderline values of B typically change only slightly, within ≈ 20 %.
Finally, we note that the advent of the next-generation space γ-ray telescopes such as MAST <cit.> could dramatically improve the pair echo detectability prospects. The improved sensitivity of the Cherenkov Telescope Array (CTA) <cit.> in the energy range of 100 GeV – 10 TeV could significantly facilitate the measurement of the intrinsic spectrum, reducing the uncertainty of the pair echo characteristics.
§ CONCLUSIONS
Using the γ-ray observations of GRB 221009A with LHAASO and Fermi-LAT, we were able, for the first time, to obtain constraints on the EGMF strength from GRB emission. We show that the values of B ≤ 10^-18 G are excluded for all values of the EGMF coherence length and for all considered models of the primary γ-ray spectrum. The resulting constraints reveal moderate dependence on the cutoff energy and weak dependence on the EBL normalization. For some options of the primary γ-ray spectrum model, the values of B ≤ 10^-17 G are excluded.
§ ACKNOWLEDGEMENTS
The work of TD and GR was supported by the Russian Science Foundation, grant no. 22-12-00253.
§ DATA AVAILABILITY
The datasets used for the Fermi-LAT data analysis presented in this work are publicly available at the Fermi-LAT data server[<https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi>]. The ELMAG 3.03 code is provided by its authors[<https://elmag.sourceforge.net>].
§ FERMI-LAT DATA ANALYSIS DETAILS
The region of interest (ROI) is a circle with the radius of 20^∘, centred at the position of the GRB (α_J2000 = 288.264 ^∘, δ_J2000 = 19.773 ^∘). We have applied the energy selection from 100 MeV to 300 GeV. For other selection parameters, we use standard recommendations for point-like sources[<https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data_Exploration/Data_preparation.html>].
We then perform unbinned likelihood analysis of the selected data with FermiTools version 2.20[<https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/>] assuming instrument response functions. We construct a model of the observed emission including the following sources: 1) GRB 221009A itself, modeled as a pointlike source with a power-law spectrum at the center of the ROI, 2) all sources from the Fermi 8-Year Point Source Catalog (4FGL) <cit.> located within 17^∘ of the center of the ROI, and 3) the galactic and isotropic diffuse γ-ray backgrounds, using the models provided by the Fermi-LAT Collaboration. For GRB 221009A, we set both the spectral index and the normalization as free parameters; the normalizations of the diffuse backgrounds are left free.
We first perform the fit in 100 MeV – 300 GeV energy range keeping free all parameters for the pointlike and extended sources from the 4FGL catalog within 5^∘ from the center of the ROI. For the sources between 5^∘ to 17^∘ from the center of the ROI we fix all the parameters at their values from the 4FGL catalog. For the case of the SED measurement over the first 2 × 10^3 s after the trigger, this procedure is performed over the time window of 10^5 s after the trigger to better constrain the parameters of the steady sources; for the case of the delayed emission search, the relevant time window is from T_0 + δ T_A to T_0 + δ T_A + δ T_E. In the latter case the energy range is 100 MeV – 500 GeV instead of 100 MeV – 300 GeV. Then we repeat the fit in the energy bin of the interest with all parameters of pointlike and extended sources fixed.
Using this model of the observed emission, we obtain the SED of GRB 221009A over the first 2 × 10^3 s after the trigger with the maximum likelihood method. No significant γ-ray flux was detected from this GRB after T_0 + δ T_A. Therefore, we place upper limits on the SED of the delayed emission. We follow a procedure similar to the one implemented in the user-contributed PYTHON script SED.py[<https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/SED_scripts_v13.1.tgz>] to calculate these. The upper limits for different values of δ T_E = 10, 30 and 90 days are shown in Fig. <ref>.
§ ADDITIONAL PLOT FOR THE CASE OF THE SBPL PRIMARY SPECTRUM
In Fig. <ref> we show model curves for observable cascade SEDs for ten different values of B. Solid curves denote B = 10^-21 G (black), B = 3×10^-21 G (red), B = 10^-20 G (green), B = 3×10^-20 G (blue), B = 10^-19 G (magenta). Dashed curves denote B = 3×10^-19 G (black), B = 10^-18 G (red), B = 3×10^-18 G (green), B = 10^-17 G (blue), B = 3×10^-17 G (magenta).
|
http://arxiv.org/abs/2306.01880v1
|
20230602192208
|
The effect of heavy ions on the dispersion properties of kinetic Alfvén waves in astrophysical plasmas
|
[
"Nicolás Villarroel-Sepúlveda",
"Rodrigo A. López",
"Pablo S. Moya"
] |
physics.plasm-ph
|
[
"physics.plasm-ph",
"physics.space-ph"
] |
Departamento de Física, Facultad de Ciencias,
Universidad de Chile, Las Palmeras 3425, 7800003, Ñuñoa, Santiago, Chile
[email protected], [email protected]
Departamento de Física, Universidad de Santiago de Chile, Usach, 9170124, Santiago, Chile
[email protected]
Spacecraft measurements have shown kinetic Alfvén waves propagating in the terrestrial magnetosphere at lower wave-normal angles than predicted by the linear Vlasov theory of electron-proton plasmas. To explain these observations, it has been suggested that the abundant heavy ion populations in this region may have strong, non-trivial effects that allow Alfvénic waves to acquire right-handed polarization at lower angles with respect to the background magnetic field than in the case of a typical electron-proton plasma.
We study the dispersion properties of Alfvénic waves in plasmas with stationary phase-space distribution functions with different heavy ion populations. Our extensive numerical analysis has allowed us to quantify the role of the heavy ion components on the transition from the left-hand polarized electromagnetic ion-cyclotron (EMIC) mode to the right-hand polarized kinetic Alfvén wave (KAW) mode.
We used linear Vlasov-Maxwell theory to obtain the dispersion relation for oblique electromagnetic waves. The dispersion relation of Alfvén waves was obtained numerically by considering four different oxygen ion concentrations ranging between 0.0 and 0.2 for all propagation angles, as a function of both the wavenumber and the plasma beta parameter.
The inclusion of the heavy O^+ ions is found to considerably reduce the transition angle from EMIC to KAW both as a function of the wave number and plasma beta. With increasing O^+ concentrations, waves become more damped in specific wavenumber regions. However, the inclusion of oxygen ions may allow weakly damped KAW to effectively propagate at smaller wave-normal angles than in the electron-proton case, as suggested by observations.
The effect of heavy ions on the dispersion properties of kinetic Alfvén waves in astrophysical plasmas
N. Villarroel-Sepúlveda
1
R. A. López 2
P. S. Moya 1
July 31, 2023
====================================================================================================================
§ INTRODUCTION
Kinetic Alfvén waves (KAWs) are Alfvénic waves that propagate at large wave-normal angles, when the mode turns compressive and develops a large parallel electric field due to kinetic effects, which give this mode its name. These characteristic kinetic effects become relevant under two conditions in typical electron-proton plasma: 1) in the hot electron case (β>m_e/m_p) as the perpendicular wavelength reaches the order of the scale of the protons’ gyroradius <cit.> and 2) in the cold electron case (β<m_e/m_p) as it reaches the electron's inertial length <cit.>. In this latter case, the waves can also be referred to as inertial Alfvén waves <cit.>. Kinetic Alfvén waves are dispersive in the subproton scale, meaning they can effectively interact with the plasma particles in this domain <cit.>. The KAW is particularly relevant for the study of space and astrophysical plasmas, as many authors have suggested that it plays a crucial role in several kinetic processes, such as the energy transfer from larger scales toward smaller electron scales through a turbulence cascade <cit.>, magnetic reconnection <cit.>, and plasma particle energization in the magnetosphere <cit.> and solar atmosphere <cit.>.
The large-amplitude parallel electric perturbations enable KAWs to accelerate charged particles along geomagnetic field lines, which allows for the strong acceleration and subsequent energization of electrons, in particular <cit.>. Among other observations, satellite measurements have provided direct evidence of electron acceleration by KAWs in the plasma sheet boundary layer <cit.> and the equatorial inner magnetosphere <cit.>. The spectral properties of KAW also provide mechanisms for the anomalous transport <cit.> and heating of ions <cit.>, all phenomena that have also been observed in the inner magnetosphere associated with magnetic shock impacts <cit.>, substorms <cit.>, and geomagnetic storms <cit.>. With respect to this central role in regulating large-scale to electron-scale physical processes in space plasmas, the characterization of KAWs and their properties becomes a task of great relevance for our understanding of magnetospheric plasma and the role of wave-particle interactions in plasma phenomena.
For small propagation angles with respect to the mean magnetic field, the Alfvénic mode in the same frequency range as KAWs corresponds to the non-compressive electromagnetic ion-cyclotron (EMIC) mode. As the name suggests, the EMIC mode allows strong cyclotron resonance with the ions in the plasma, but not with the electrons <cit.>. This feature is a consequence of the left-handed polarization of EMIC waves in the plasma frame, although the polarization of these waves can also be linear as a limit to the left-hand elliptical polarization <cit.>. However, when the wave vector develops a large component perpendicular to the background magnetic field, these waves become compressive as they acquire strong electric field fluctuations parallel to the mean magnetic field, as well as shifting from left-hand to right-handed polarization (or linear, this time as a limit to the right-hand elliptical polarization) in the plasma frame <cit.> as the plasma's diamagnetic current becomes larger than its Hall current <cit.>. So, for fixed values of the wavenumber, plasma beta, and other parameters, the left-hand polarized EMIC mode transitions to the right-handed KAW mode as the propagation angle increases. As shown by <cit.>, this transition angle depends on the wavenumber and the plasma beta parameter (the ratio between thermal and magnetic pressure in the system). These waves have become the object of study in a wide range of plasma environments, with a particular interest in the heliospheric and magnetospheric environments, since spacecraft observations have consistently shown that the aforementioned Alfvénic modes propagate in the solar wind and different regions of Earth's magnetosphere, such as the magnetopause, plasma sheet, magnetosheath, and the inner magnetosphere <cit.>. Furthermore, the excitation and propagation of KAW and other Alfvénic waves have been theoretically proposed in different space plasma environments, such as planetary magnetospheres resembling those of Mars and Saturn <cit.>; dusty plasmas, as in planetary disks and cometary tails <cit.>; plasmas surrounding neutron stars or black holes <cit.>; and many other astrophysical plasma environments <cit.>. The excitation of KAW and ion-acoustic waves by hot ion beams and velocity shear, through both resonant and non-resonant instabilities, has also been studied extensively and in depth in plasmas composed of hot electrons and cold ions <cit.>. This mechanism for the excitation of ULF waves has been linked to the study of the polar cusp region of Earth's magnetosphere, but the results are more general and can be applied to any plasma of these characteristics.
Most studies on KAWs have focused on the typical case of an ideal electron-proton plasma (see, for example, <cit.>). Space plasmas are, however, constituted by different plasma species, not only electrons and protons. Since the dispersion properties of oblique waves in a collisionless plasma depend on the relative densities of the different ions <cit.>, these populations are often significant enough that they cannot be neglected. For the particular case of the inner magnetosphere, spacecraft observations have shown KAWs propagating at lower angles with respect to the background magnetic field than predicted by the electron-proton theory applied to this environment. Previous studies have proposed that this puzzling behavior may be explained by the role played by the presence of heavy ions in this region of space <cit.>, where the concentration of He^+ ions ranges approximately from 5% up to 20% and that of O^+ ions from 20% up to 80% of the total ion population depending on the level of geomagnetic activity <cit.>, with higher O^+ ion concentrations tightly linked to strong geomagnetic activity <cit.>. According to <cit.>, the heavy ions introduce non-trivial changes to the susceptibility of the plasma, subsequently allowing for the presence of KAWs at lower angles than those predicted using an electron-proton plasma approximation.
The relevance of heavy ions in the different physical processes that occur in space and astrophysical plasmas is not restricted to the case of the terrestrial magnetosphere. Populations of heavy ions such as oxygen and nitrogen are present, in varying concentrations, in the atmospheres of Venus <cit.>, Mars <cit.>, Saturn <cit.> and its moon Titan <cit.>, and Jupiter <cit.> and its moon Io, where sulfur ions are also an important component <cit.>, and are likely components of the magnetospheres of close-in exoplanets <cit.>. The solar wind and other regions of the heliosphere are characterized by relatively high abundances of alpha particles, as well as smaller populations of heavier ions such as oxygen, nitrogen, neon, sulfur, silicon, and heavy metals <cit.>. Oxygen, nitrogen, and heavy metal ions have also been found to be present in the accretion disks of black holes and active galactic nuclei <cit.>, while all of the previously mentioned atoms have also been observed in planetary nebulae at high abundances and various stages of ionization, along with carbon, sulfur, silicon, and other ion species <cit.>. Some of the solar wind ions, such as oxygen and sulfur, are also important components of cometary plasmas in conjunction with molecular ions, which are formed through the ionization of neutral gas <cit.>. Plasma simulation studies that consider some of the previously mentioned environments show that the presence of heavy ions effectively affects the plasma dynamics and the excitation of electromagnetic fluctuations <cit.>.
This relevance of heavy ions on the properties of electromagnetic waves in plasmas lays the groundwork for an in-depth analysis of the effect of these ions on the transition from left-hand polarized to right-hand polarized Alfvén waves in a collisionless plasma as the propagation angle progresses from parallel to perpendicular. Since the cyclotron motion of charged particles in a background magnetic field depends only on the strength of the field and the charge-to-mass ratio of the particles, the latter quantity acquires particular importance in the properties of oblique waves, determining the resonance frequencies, among other aspects of the plasma wave dynamics <cit.>. Although different astrophysical environments are intrinsically dissimilar when it comes to their composition, it is interesting to note that many of the most abundant heavy ions present in astrophysical plasmas (see references in the previous paragraph for deeper insights) have similar charge-to-mass ratios, N^+ <cit.>, O^+ <cit.>, S^2+ <cit.>, Cl^2+ <cit.>, and even Fe^3+ <cit.> and Fe^4+ <cit.> have charge/mass ratios between r_H/19 and r_H/14, where r_H=e/m_H is the charge/mass ratio of a H^+ ion. Thus, the plasma wave dynamics of a multi-ion plasma composed of H^+ and any of the aforementioned heavy ions may be somewhat similar.
In this manuscript, we aim to elucidate how the inclusion of heavy ions affects the transition angle from the left-hand polarized electromagnetic ion-cyclotron (EMIC) waves to the right-hand polarized KAW. As O^+ ions happen to be one of the most common ion species in many of the astrophysical plasmas mentioned above, and because they are the most abundant heavy ion species in Earth's inner magnetosphere, where their concentration ranges from 20% to 50% of the total ion population <cit.>, we consider the case study of an e^--H^+-O^+ plasma with magnetospheric parameters. Our goal is to examine the potential role of heavy ions on the dispersion properties of Alfvénic waves in the kinetic regime in a collisionless magnetized plasma. We use linear Vlasov-Maxwell theory considering different concentrations of oxygen ions to analyze the dispersion relation and polarization extensively, as a function of the wave number and plasma beta parameter, in hopes of providing a theoretical explanation for the observation of KAWs propagating at oblique angles in the terrestrial magnetosphere. We use this information to make predictions about KAW propagation in other astrophysical plasma environments where heavy ions are ubiquitous.
§ PLASMA MODEL FOR A MULTI-ION COLLISIONLESS PLASMA
In this study, we consider a collisionless plasma composed of electrons, H^+ ions, and a smaller (but significant) population of the heavier O^+ ions, whose charge/mass ratio is ∼ r_H/16. We impose the quasi-neutrality condition ∑_s q_s n_s=0, where q_s and n_s represent the charge and density of the species "s." Since all species that compose the plasma have elementary charge ± e, this condition is fulfilled when n_0=n_H^++n_O^+=n_e, with n_0 and n_e the total ion and electron densities, respectively. We can write this relation in terms of the ion concentrations, such that n_H^+ = n_0 η_H, n_O^+=n_0 η_O^+, and η_H + η_O^+=1. Here, η represents the relative density of each species. In the following, we analyze the properties of Alfvénic waves for concentrations of oxygen ions given by η_O^+=0.05, η_O^+=0.10 and η_O^+=0.20 and compare them to the typical electron-proton case. A plasma of these characteristics can be considered a simplified model for astrophysical plasmas where the phase-space distribution function of the particles is stationary and the heavy ion populations are predominantly composed of only one species. This is the case, for example, for certain regions of Earth's inner magnetosphere <cit.>, Saturn's inner plasmasphere <cit.>, or the plasma torus in the jovian magnetosphere, where O^+ and S^++ (which have roughly the same charge-to-mass ratio) make up most of the heavy ion population <cit.>. This is also supported by the fact that all three of these planets have dynamo-generated magnetic fields with similar dipole moments <cit.>, making them comparable plasma environments.
We consider isotropic temperatures, such that T_∥ s=T_⊥ s=T_s, for all species to reduce the space of parameters to only the wave number, plasma beta, and oxygen ion concentration. These three quantities are the main focus of our current investigation (for an analysis of the dependence of the dispersive properties of Alfvénic waves on the temperature anisotropy of the species in a similar context, see <cit.>).
Concerning the plasma beta parameter, observations over the full range of magnetic local time in the ring current region of the inner magnetosphere have shown that its value for protons usually ranges from β_H^+∼ 10^-3 to β_H^+∼ 2, both during quiet and active geomagnetic times. While some observations with β_H^+ values greater than 4 have also been recorded in this region, these events correspond to only about 0.2% of the total of high beta observations, according to <cit.>. In contrast, the probability of a high beta event with β_H^+≤3 rounds to 98%. Thus, to study the dependence of the dispersion properties on the parallel plasma beta parameter, we limited our domain of study to 10^-3≤β̅_ H^+≤ 4, where β̅_ s=8π n_0 k_B T_s/B_0^2, with B_0 the background magnetic field, T_ s is the temperature of species “s” in the direction parallel to the background magnetic field, and k_B is the Boltzmann constant. This modified plasma beta parameter, β̅, introduced by <cit.>, accounts only for the parallel temperature of the species when the particle density stays fixed and is related to the regular beta parameter by β_s=η_sβ̅_ s. The plasma beta parameters of H^+ ions observed in Saturn's magnetosphere are fully contained in this range, with 0<β_H^+<2 <cit.>.
Unlike other astrophysical environments such as the solar wind, particle populations within inner planetary magnetospheres usually do not drift significantly with respect to one another in the direction of the field lines. We therefore suppressed this drift for the sake of simplicity. By taking this aspect into consideration, as well as the temperature isotropy mentioned above, we can solve the dispersion relation for oblique waves by using the linear Vlasov-Maxwell theory for nondrifting Maxwellian velocity distributions given by:
f_s(v_∥,v_⊥)=n_sπ^-3/2/α_s^3 exp{-v_⊥^2/α_s^2-v_∥^2/α_s^2},
for each particle species, where α_s=√(2k_BT_s/m_s) is the thermal speed of each species, with m_s as the mass of each species. We consider waves propagating obliquely to the mean magnetic field 𝐁=B_0𝐳̂, with 𝐤=k_⊥𝐱̂+k_∥𝐳̂ = ksinθ𝐱̂+kcosθ𝐳̂, where θ is the propagation or wave-normal angle. From this, we obtain the dispersion relation:
𝐃_k·δ𝐄_k=0,
where 𝐃_k is the full dispersion tensor for oblique waves and δ𝐄_k represent the electric field eigenmodes (see Appendix A for further detail). With the information of the dispersion tensor, we can compute the polarization of the transverse component of the waves as taken with respect to the background magnetic field, as defined by <cit.>, using:
P(k)=iδ E_kx/δ E_ky .
As usual in the context of plasma physics, right-handed polarization is defined by a timewise gyration of the fields in the direction of the background magnetic field according to the right-hand rule, whereas in left-hand polarized waves, the fields gyrate in the opposite sense. Thus, right-hand (left-hand) polarized waves have fields gyrating in the sense of the Larmor gyration of negatively (positively) charged particles.
Considering bounded electric perturbations, a linear polarization is achieved whenever P(k)=0 or P(k)=±∞. Circular polarization occurs when P(k)=±1; where the plus sign implies right-handed polarization and the minus sign is associated with left-handed polarization. Other values of finite Re(P)>0
and Re(P)<0 indicate a right-handed and left-handed elliptical polarization, respectively.
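The sign conventions above translate directly into a small classification helper; the following Python sketch (illustrative only, not part of the solver used in this study) maps a computed P(k) to these categories:

```python
import math

def classify_polarization(P, tol=1e-6):
    """Classify wave polarization from P(k) = i*dE_kx/dE_ky, following the
    conventions in the text: P = 0 or +-inf is linear, P = +1 (-1) is
    right- (left-) hand circular, and other finite Re(P) > 0 (< 0) is
    right- (left-) hand elliptical."""
    re = P.real
    if not math.isfinite(re) or abs(re) < tol:
        return "linear"
    hand = "right-hand" if re > 0 else "left-hand"
    shape = "circular" if abs(abs(re) - 1.0) < tol else "elliptical"
    return f"{hand} {shape}"

print(classify_polarization(complex(0.3, 0.0)))   # right-hand elliptical
print(classify_polarization(complex(-1.0, 0.0)))  # left-hand circular
```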
It is crucial to note, however, that the main interest of this study is the sign of the polarization rather than its exact value. Because of this, polarization values in the following results are shown normalized to their largest values, which sometimes correspond to numerical infinity, in Figures <ref> and <ref>. To avoid dividing by extremely large numbers when normalizing the polarization in Figures <ref> and <ref>, which would leave the maps showing polarization values very close to 0 everywhere except for a few small pockets, we chose to cap the most extreme values of the polarization at Re(P)=±100 and normalize to this number. Therefore, Re(P)=±1 does not imply circular polarization for the results shown and discussed hereafter.
We solved the dispersion relation of Alfvénic waves using the DIS-K solver [The full code is publicly available and can be found at <https://github.com/ralopezh/dis-k>.] <cit.>, in the limit of isotropic Maxwellian distributions as given by Eq. (<ref>). We identified the Alfvén waves and differentiated them from other solutions that possess right-handed polarization at near-perpendicular angles by analyzing the phase velocity of the waves in the MHD limit (see Appendix B). For a fixed value of the beta parameter, we obtained the real and imaginary parts of the wave's frequency normalized to the protons' cyclotron frequency Ω_p, as well as their polarization – all as a function of the wavenumber normalized to the inverse of the proton inertial length ω_pp/c, with ω_pp as the protons' plasma frequency and c as the speed of light in vacuum. As a validation task, this routine was able to accurately reproduce Fig. 3 of <cit.> for the transition angle from left-hand polarized to right-hand polarized Alfvén waves at kc/Ω_p=0.05 as a function of the beta parameter in an isotropic electron-proton plasma (see Fig. <ref> in Appendix C).
For all calculations, we considered isothermy between the plasma species, such that β̅_e=β̅_H^+=β̅_O^+=β̅. Since η_e=1, it follows that β_e=β̅. Thus, the beta parameter for electrons β_e is used in the following sections as a variable from which the beta parameter of each species can be obtained, given the different ion concentrations.
§ RESULTS
§.§ Effect of oxygen ions on the dispersion relation of Alfvénic waves
Figure <ref> shows the real (left) and imaginary (center) parts of the frequency, and the normalized polarization of Alfvénic waves versus the normalized wave number, for various oxygen ion concentrations and different propagation angles between 30^∘ and 90^∘. We did not include lower angles because (in those cases) the waves lose their positive polarization; this means that all solutions correspond to EMIC waves. In the figure, the top row shows the typical electron-proton case with zero oxygen concentration. The concentration increases for the subsequent rows, from η_O^+ = 0.05 to η_O^+ = 0.2. These results show a substantial modification of the dispersion properties of the waves with the inclusion of oxygen ions. One of the more apparent modifications is that, unlike in the electron-proton case, the real part of the frequency does not pass through zero for some wave-normal angles. This phenomenon, seen in the frequency curves for propagation angles of 70^∘ and 80^∘ for η_O^+≥0.05, is a consequence of the separation of the solution into two frequency bands, one asymptotic to the oxygen gyrofrequency (which is not plotted in Figure <ref>) and the other to the proton gyrofrequency. We only included the solutions that asymptote to the proton gyrofrequency, as these are the ones we can meaningfully compare to the solutions obtained in the electron-proton plasma, in line with the main objective of this study. This separation of the Alfvén mode into different frequency bands has been previously discussed by <cit.>, among others. In addition, it is a consequence of the appearance of forbidden frequency domains in the neighborhoods of the different ions' gyrofrequencies. We also observe a small wave number domain where the mode becomes slightly damped, a phenomenon that is not present in the first case. This increased damping due to the presence of O^+ (or other heavy ions) has been previously observed by <cit.> and associated with particle energization by the waves. Consistent with recent results <cit.>, Alfvénic waves in a plasma infused with heavy ions acquire positive polarization in a wave number domain that grows with the propagation angle; this is not the case in an electron-proton plasma. Finally, from Figure <ref>, we also observe that the wave-number domain in which the polarization is positive widens as the oxygen ion concentration increases, with the wave propagating at 70^∘ (purple curves) losing its negative polarization entirely in the given wave number domain for η_O^+=0.2.
Figure <ref> displays the dispersion properties of Alfvénic waves, this time as a function of the plasma beta parameter for a fixed wavenumber of ck/ω_pp=0.5. This value of the wavenumber (approximately 0.1c/ω_pp, where c/ω_pp is the proton's inertial length) will be considered a lower limit to the kinetic regime. This is supported by the fact that at oblique angles, the kinetic solutions depart from their MHD counterparts at approximately this wave number (see Figure <ref> for propagation at 50^∘ and 60^∘). As a result, kinetic effects should be observable at the chosen value for this quantity.
From Figure <ref>, we find that the real part of the frequency becomes consistently more significant with increasing oxygen ion concentration for larger propagation angles. We also observe that when no oxygen ions are present, the frequency curve for propagation at 30º tends to decrease consistently with the value of the plasma beta parameter, as does the curve for propagation at 40º until the frequency reaches 0. This behavior disappears as the concentration of oxygen ions becomes higher; we can no longer see this steady decrease in the real part of the frequency for propagation at 40º when η_O^+=0.05. The same holds true for propagation at 30º when η_O^+ reaches 0.1. Moreover, the most interesting aspect regarding the effect of oxygen ion populations on the spectral properties of the waves becomes evident when we analyze the third column of the figure. We note that most of the initially negatively polarized waves acquire positive polarization as the beta parameter increases, even for low oxygen ion concentrations. This result is consistent with previous works <cit.>.
§.§ The Transition from EMIC to Kinetic Alfvén Waves
In order to extensively analyze the transition from negatively polarized EMIC waves to the KAW mode, we plot solutions (e.g., shown in Figs. <ref> and <ref>) in the form of heat maps, such that we may consider the propagation at every possible wave-normal angle. Figure <ref> shows the Alfvénic mode properties for the frequency and polarization as functions of the wavenumber for plasma beta values of β_e = 0.01 and β_e=0.10. Figure <ref> displays the same quantities as functions of the beta parameter, with wavenumbers fixed at ck/ω_pp=0.5 and ck/ω_pp=1.0. In Fig. <ref>, for all plots of the real part of the polarization (right column), the plasma beta appears in logarithmic scale, as in <cit.>. This choice allows us to thoroughly analyze the transition from a cold to a warm plasma and facilitates insights into the behavior of the plasma's polarization for low beta values, which tend to be more common in the inner magnetosphere <cit.>.
Additionally, in Figs. <ref> and <ref>, the white lines in all maps specify the contour curve of zero polarization, which indicates where the transition from EMIC (negative polarization) to KAW (positive polarization) occurs. We also plot the contours for characteristic values of the damping rate to identify the overlap between weakly damped waves and positive polarization at oblique angles (weakly damped KAWs solutions).
As previously noted in Figs. <ref> and <ref> and further displayed in Figs. <ref> and <ref>, we observe that the inclusion of oxygen ion populations tends to increase the damping rate of the Alfvénic waves, especially for lower wavenumbers in Fig. <ref> and near-perpendicular propagation in Fig. <ref>. We also note that the contours of γ/Ω_p≤ -0.05 are mostly unchanged when plotted as functions of the wave number in Fig. <ref>. The same does not hold for the left panel of <ref>, when the contours are plotted as a function of the beta parameter with ck/ω_pp=0.5. This is something to be expected, as this value of the normalized wavenumber tends to be in the region of the damping rate bump caused by the inclusion of oxygen ions, as can be seen in Figs. <ref> and <ref>. Nevertheless, we observe from both Figs. <ref> and <ref> that the inclusion of the heavier oxygen ions effectively increases the domain of weakly damped right-hand KAWs, as the Re(P)=0 contours change drastically with an increasing population of oxygen ions toward lower angles in all cases considered.
It is worth noting that the main results for the two plasma beta values considered in Fig. <ref> are very similar when it comes to the effect of the heavy ions on the dispersive properties of the waves. The main differences come from the solutions having lower frequencies, being less damped, and changing their polarization at greater angles for larger wave numbers, which are all results that were to be expected. It is interesting to note, however, that when the plasma beta is small (β_e=0.01), the polarization of the waves tends to become linear rather than right-hand polarized as the waves transition from EMIC to KAW. For a plasma beta of β_e=0.10, this is no longer the case and the waves can become right-hand elliptically polarized, although linear polarization is maintained at near-perpendicular propagation.
When analyzing the dispersion properties as functions of the beta parameter, we see from Fig. <ref> that the main results are again very similar. However, two main differences should be mentioned between the results considering the different fixed wave numbers. The first is that the lowering of the transition angle due to the presence of heavy ions is far more pronounced in the case where the wave number is smaller. This is to be expected when looking at Fig. <ref>, as the slope of the transition curve is much steeper when ck/ω_pp=0.5 than it is for ck/ω_pp=1.0. The second is the appearance of a wide domain where Re(P)=0 in the electron-proton case for ck/ω_pp=1.0. This region, shaded gray in the figure, is to be interpreted as a domain of linear polarization as the mode becomes aperiodic, which is also present in the other cases but for higher values of the plasma beta that are not pertinent to this study. A similar result was observed in Fig. 9 of <cit.> for the Alfvén wave branch asymptotic to Ω_O^+, and it appears to be a feature of linear theory for propagation at oblique angles with high values of the plasma beta and wavenumber. Although this result and the effects that oxygen ions can play in the displacement of this linear polarization region are interesting, a detailed study and explanation of this phenomenon lie beyond the scope of this article and should be treated in future works.
Finally, from the results shown in Figs. <ref> and <ref>, we extracted the contours for the Re(P)=0 level, displaying them in Fig. <ref> for the different oxygen ion concentrations, both as functions of the wavenumber (top panels) and plasma beta parameter (bottom panels). With these plots, we can better grasp the effect of the oxygen ion concentration on the transition from negatively to positively polarized Alfvénic waves, namely, the transition from EMIC waves to KAWs. The results show a clear tendency for a higher oxygen ion concentration to lower the transition angle from EMIC waves to KAW, which is more pronounced when the dependence on the beta parameter is analyzed for relatively small wavenumbers. The contours displayed in Figure <ref> provide strong evidence that, according to linear theory, the propagation of KAWs at wave-normal angles under 70º remains impossible in the magnetospheric environment for low beta values, such as the low-density case considered in <cit.>, unless a significant population of oxygen ions is present in the plasma. As the propagation angle of KAW measured in the above article averages around 60º, propagation of right-hand polarized Alfvén waves must occur at even lower angles. Even for medium to high particle densities, this would be highly unlikely if the plasma consists only of electrons and protons. However, our results prove that this anomaly can be easily explained by considering oxygen ion populations of η_O^+∼ 0.2, which are consistent with those of the inner magnetosphere.
§ SUMMARY AND CONCLUSIONS
We present an extensive study on the effect of oxygen ions on the dispersion properties of Alfvénic waves in a multi-species electron-proton-oxygen ion plasma with no temperature anisotropy. Considering inner magnetospheric-type parameters, we first demonstrated that an increasing oxygen ion population allows the propagation of KAW at remarkably low wave-normal angles by strongly displacing, in phase space, the transition region from left-hand polarized EMIC waves to right-hand polarized KAW toward lower wave-normal angles. These results are consistent with recently published case studies <cit.>, as well as with observations of KAW propagating at about 60^∘ with respect to Earth's magnetic field in the inner magnetosphere <cit.>.
Secondly, by fixing the wavenumber and allowing the plasma beta parameter to vary, we show that the behavior of the EMIC-KAW transition curve in θ-β space drastically changes with the oxygen ion concentration. While an increase in the plasma beta facilitates the propagation of KAW in a plasma with oxygen ion concentrations of η_O^+=0.00, 0.05, and 0.10, for a higher oxygen ion concentration of η_O^+=0.20, we see that an increase in the plasma beta parameter increases the transition angle from EMIC to KAW. However, for all the cases considered, the transition curve always lies below the case with less oxygen abundance for all values of the beta parameter. This decrease in the transition angle from EMIC to KAW is particularly pronounced for low-beta plasmas (β≤ 1), which tend to be fairly common configurations in planetary magnetospheres<cit.>.
It is also worth noting that an increase in the oxygen ion concentration increases the damping rate of the waves, particularly for small wave numbers (Figs. <ref>, <ref>). Thus, it is not surprising that Fig. <ref> shows a considerable decrease in the imaginary part of the frequency for ck/ω_pp=0.5 at fixed beta values as the oxygen ion concentration increases. We emphasize, however, that (as shown in Fig. <ref>) the inclusion of significant concentrations of oxygen ions only modifies the damping rate of the waves by small amounts compared to the electron-proton case, with the γ/Ω_p≤-0.05 domain remaining practically untouched as η_O^+ increases. As seen in Fig. <ref>, the same does not hold when the dependence on the beta parameter is analyzed. Nevertheless, for β̅<2, variations of the γ/Ω_p≥-0.05 domain are only slight between a 5% and a 20% oxygen ion concentration. This allows for considerable overlap between the regions of right-hand polarized and weakly damped Alfvénic waves for typical values of the beta parameter, allowing KAW to effectively propagate at smaller wave-normal angles than in the electron-proton case.
In summary, this article provides strong theoretical evidence for the existence of KAWs propagating at relatively low wave-normal angles in multi-species plasmas such as those present in regions of the planetary magnetospheres of Earth, Saturn, Jupiter, and other planets. This behavior is consistent with satellite measurements of the inner terrestrial magnetosphere reported in the last decade, particularly with the Van Allen Probes <cit.>. We attribute these observations to the high abundance of oxygen ions in this region of space. Indeed, according to our results, without the presence of oxygen ions, there would not be KAWs to be observed at such angles. Nonetheless, the kinetic dispersion relation of plasma waves depends not only on the wavenumber, concentration, and plasma beta; the effects of other parameters (such as temperature anisotropy and relative drifts between particle populations) are yet to be studied.
Furthermore, temperature anisotropies are known to give rise to kinetic micro-instabilities, which could compensate for the additional damping due to the presence of oxygen ions. The relative effect of ions with higher (such as He^+, He^2+, O^2+, and highly ionized particles that are also present in the solar wind) or lower (such as S^+ or light ionized heavy metals) charge-to-mass ratios on the dispersive properties of KAW is also yet to be determined. Future studies should therefore extend and refine these results, particularly on the basis of more realistic conditions, and provide definitive explanations for observations of plasma phenomena in the magnetosphere.
We thank the support of ANID, Chile, through National Doctoral Scholarship N^∘ 21220616 (NVS), Fondecyt Grant 1191351 (PSM), and Fondecyt Initiation Grant 11201048 (RAL).
§ DETAILS OF THE DISPERSION TENSOR
Let us take a background velocity distribution function given by a bi-Maxwellian distribution with drift:
f_0s(v_∥,v_⊥)=n_0sπ^-3/2/α_⊥ s^2 α_∥ sexp{-v_⊥^2/α_⊥ s^2-(v_∥-U_s)^2/α_∥ s^2},
where n_0s is the mean number density of the s-th species, α_∥ s=(2k_BT_∥ s/m_s)^1/2 and α_⊥ s=(2k_BT_⊥ s/m_s)^1/2 are the thermal speeds of the species, and T_∥ s and T_⊥ s are the temperatures in the parallel and perpendicular direction with respect to the mean magnetic field 𝐁_0. The drift velocity along the magnetic field is given by U_s, while m_s and k_B denote the particle mass of the s-th species and the Boltzmann constant, respectively. We note that the distribution function in (<ref>) is a particular case of the one presented here. The expressions utilized throughout this study can be easily obtained by taking U_s=0 and α_⊥ s=α_∥ s=α_s for all particle species.
The dispersion tensor of (<ref>), obtained by integrating the first order perturbation term of the linearized Vlasov equation over velocity space in cylindrical coordinates, is given by <cit.>:
D=[ D_xx iD_xy D_xz; -iD_xy D_yy iD_yz; D_xz -iD_yz D_zz ],
with
D_xx=1-c^2k_∥^2/ω^2+∑_sω_ps^2/ω^2∑_ℓ=-∞^∞ℓ^2Λ_ℓ(λ_s)/λ_s𝒜_ℓ,
D_xy=∑_sω_ps^2/ω^2∑_ℓ=-∞^∞ℓΛ'_ℓ(λ_s)𝒜_ℓ,
D_xz=c^2k_⊥ k_∥/ω^2 -∑_s q_s/|q_s|ω_ps^2/ω^2μ_s^-1/2∑_ℓ=-∞^∞ℓΛ_ℓ(λ_s)/√(2λ_s)ℬ_ℓ,
D_yy=1-c^2k^2/ω^2 + ∑_sω_ps^2/ω^2∑_ℓ=-∞^∞[ℓ^2Λ_ℓ(λ_s)/λ_s-2λ_sΛ'_ℓ(λ_s)]𝒜_ℓ,
D_yz=∑_sq_s/|q_s|ω_ps^2/ω^2μ_s^-1/2∑_ℓ=-∞^∞√(λ_s/2)Λ'_ℓ(λ_s)ℬ_ℓ,
and
D_zz =1-k_⊥^2c^2/ω^2+2∑_sω_ps^2/ω^2μ_s^-1U_s/α_∥ s[U_s/α_∥ s+2ξ_s]
+2∑_sω_ps^2/ω^2μ_s^-1∑_ℓ=-∞^∞Λ_ℓ(λ_s)𝒞_ℓ.
Here, we introduce the plasma frequency ω_ps=(4π n_0sq_s^2/m_s)^1/2 and gyrofrequency Ω_s=q_s B_0/m_s c of the species, while q_s is the charge of the s-th species of particles. Also, we define the functions:
𝒜_ℓ =(μ-1)+[ξ_s+(μ-1)ζ_ℓ s]𝒵(ζ_ℓ s),
ℬ_ℓ =-2(ξ_s + ζ_ℓ s𝒜_ℓ) and 𝒞_ℓ=ξ_s ζ_ℓ s+(ζ_ℓ s+U_s/α_∥ s)^2𝒜_ℓ,
with the quantities
μ_s = α_⊥ s^2/α_∥ s^2=T_⊥ s/T_∥ s, ξ_s = ω - k_∥U_s/k_∥α_∥ s, ζ_ℓ s= ξ_s-ℓΩ_s/k_∥α_∥ s,
and λ_s=k_⊥^2α_⊥ s^2/2Ω_s^2,
as well as the special functions
Λ_ℓ(x)=e^-xI_ℓ(x) and Z(x)=1/√(π)∫_-∞^∞e^-t^2/t-xdt,
where in (<ref>) I_ℓ is the modified Bessel function of the first kind of integer order ℓ, and Z(x) corresponds to the plasma dispersion function.
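Both special functions have standard numerical implementations; the sketch below (an illustration, not the DIS-K implementation) evaluates Λ_ℓ(x) with SciPy's exponentially scaled Bessel function and Z(ζ) through the Faddeeva function w, using the identity Z(ζ) = i√π w(ζ):

```python
import numpy as np
from scipy.special import ive, wofz

def Lambda(ell, x):
    """Lambda_ell(x) = exp(-x) * I_ell(x); ive is the exponentially scaled
    modified Bessel function of the first kind, so this holds for x >= 0."""
    return ive(ell, x)

def Z(zeta):
    """Plasma dispersion function via the Faddeeva function:
    Z(zeta) = i*sqrt(pi)*w(zeta), the analytic continuation of the
    integral definition to the whole complex plane."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

print(Lambda(1, 0.3))   # ~0.112
print(Z(0.0))           # ~1.7725j, i.e. i*sqrt(pi)
```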
Moreover, from (<ref>), we can express the polarization of the electromagnetic waves defined in (<ref>) in terms of the components of the tensor in (<ref>) as follows:
P(k) =iE_kx/E_ky =(D_xyD_zz+D_xzD_yz)/(D_xxD_zz-D_xz^2) sign(ω).
§ IDENTIFICATION OF THE ALFVÉN SOLUTION
We identified the Alfvén solutions by comparing the obtained dispersion relations with those of MHD Alfvén waves, given by ω_A=k_∥V_A, where V_A=B/√(4π n_H^+m_H^+) is the Alfvén speed for H^+ ions, and by considering the fact that these solutions are known to have P(k)=iE_kx/E_ky>0 at angles close to 90^∘.
Comparing our solutions to the MHD limit allows us to distinguish the Alfvén solution from other modes in the same frequency range that satisfy the same condition for the scalar polarization in <ref>, such as the fast magnetosonic mode. The dispersion relation for this mode, in the classical MHD limit, is given by ω_f=ck√((V_s^2+V_A^2)/(c^2+V_A^2)), where V_s≈√(β)V_A is the speed of sound in the plasma.
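A minimal Python sketch of this identification step (with illustrative inputs; the actual routine used alongside DIS-K may differ in detail) compares a kinetic root against the two MHD branches and keeps it only if it is closer to the Alfvén branch:

```python
import numpy as np

def mhd_branches(k, theta, V_A, beta, c=3.0e8):
    """MHD Alfven and fast magnetosonic frequencies for wavenumber k [rad/m],
    propagation angle theta [rad], H+ Alfven speed V_A [m/s] and plasma beta."""
    k_par = k * np.cos(theta)
    V_s = np.sqrt(beta) * V_A           # sound speed, V_s ~ sqrt(beta) * V_A
    w_alfven = k_par * V_A
    w_fast = c * k * np.sqrt((V_s**2 + V_A**2) / (c**2 + V_A**2))
    return w_alfven, w_fast

def is_alfven_root(omega, k, theta, V_A, beta):
    """Crude tag: a kinetic root is Alfvenic if it lies closer to the MHD
    Alfven branch than to the fast magnetosonic branch."""
    w_a, w_f = mhd_branches(k, theta, V_A, beta)
    return abs(omega - w_a) < abs(omega - w_f)
```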
Figure <ref> shows the complex frequency and inverse real part of the polarization for Alfvén (continuous magenta) and fast magnetosonic (continuous teal) waves obtained from kinetic theory in an electron-proton plasma for propagation angles of 50^∘, 60^∘, 70^∘, and 80^∘. Dashed lines of similar colors indicate the MHD solutions mentioned above. Figure <ref> displays the same quantities in an electron-proton-O^+ plasma with η_O^+=0.10. The real part of the polarization is plotted instead of its inverse for convenience.
From Figure <ref>, we can see that the fast magnetosonic-whistler mode is completely decoupled from the Alfvén wave and, although both waves are right-hand polarized at higher propagation angles, the nature of their polarization is completely different; the polarization of the fast wave is closer to circular polarization, while the polarization of the KAW is nearly linear.
Figure <ref> shows that the real frequency of the waves is nearly unchanged by the inclusion of oxygen ions, except for the appearance of a small forbidden frequency range near ω=Ω_O^+, which causes the wave to split into two frequency bands. The imaginary frequency of the Alfvén mode is also only slightly changed by the appearance of a damping bump at small wavenumbers. We discuss both phenomena in depth in the analysis of Figure <ref>. We also note that in this case as well, the solutions are decoupled. The polarization of the waves, however, changes its nature completely because of the effects of the heavy ion population, allowing the Alfvén waves to acquire right-hand polarization at lower propagation angles.
§ REPRODUCTION OF TRANSITION CURVE BY <CIT.>
The methods utilized in this study allow us to replicate the curve of Re(P)=0 for obliquely propagating Alfvén waves displayed in Fig. 3 of <cit.>. Figure <ref> shows a heat map of the real part of the polarization of Alfvén waves in an electron-proton plasma for ck/ω_pp=0.05 as a function of the wave-normal angle and plasma beta parameter. Since the plasma is considered isothermal, the plasma beta parameter is the same for both particle species, so the subscript is dropped. The white curve corresponds to the contour of P=0. As expected, this curve shows similar behavior to the purple curve in the bottom panel of Figure <ref>, since both contours are obtained from plasmas of the same composition, but different values of the wavenumber. As in the polarization maps in Fig. <ref>, Fig. <ref> shows how an isothermal increase in the plasma beta parameter can significantly reduce the transition angle from EMIC to KAW in an electron-proton plasma.
|
http://arxiv.org/abs/2306.02066v1
|
20230603094359
|
Variational Gaussian Process Diffusion Processes
|
[
"Prakhar Verma",
"Vincent Adam",
"Arno Solin"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
Variational Gaussian Process Diffusion Processes
Prakhar Verma, Vincent Adam, Arno Solin
June 3, 2023
================================================
Diffusion processes are a class of stochastic differential equations (SDEs) providing a rich family of expressive models that arise naturally in dynamic modelling tasks. Probabilistic inference and learning under generative models with latent processes endowed with a non-linear diffusion process prior are intractable problems. We build upon work within variational inference approximating the posterior process as a linear diffusion process, point out pathologies in the approach, and propose an alternative parameterization of the Gaussian variational process using a continuous exponential family description. This allows us to trade a slow inference algorithm with fixed-point iterations for a fast algorithm for convex optimization akin to natural gradient descent, which also provides a better objective for the learning of model parameters.
§ INTRODUCTION
Continuous-time stochastic differential equations (SDEs, <cit.>) are a ubiquitous modelling tool in fields ranging from physics <cit.> and finance <cit.> to biology <cit.> and machine learning <cit.>. SDEs offer a natural and flexible way to encode prior knowledge and capture the dynamic evolution of complex systems, where the stochasticity and nonlinearity of the underlying processes play a crucial role. In the particular setting where the drift of the SDE model is linear, the resulting process is a Gaussian process (GP, <cit.>), known as a general and powerful ML paradigm of its own. We focus on diffusion processes (DPs), which are a subset of SDEs with additional regularity conditions on the drift and diffusion functions (details in Ch. 2 <cit.>).
DPs with non-linear drifts cover a wide range of processes with multi-modal, skew, and fat-tailed behaviour (<ref> left).
[Figure, right margin: schematic of model classes showing the set of SDEs containing DPs, the set of GPs, and their overlap of linear DPs (Markovian GPs); the posterior p_| lies among the DPs and is approximated by a nearby q in the overlap. Caption: Approximating p_| with q for inference and learning in DPs.]
The generality of DPs, however, comes at a high practical cost: exact inference and parameter learning in non-linear DP models are computationally challenging or even intractable due to the infeasibility of direct simulation. Therefore, developing efficient and accurate methods for approximating DP models is crucial for theoretical and practical reasons. In this paper, we are concerned with Bayesian inference and learning in generative models with latent temporal processes endowed with an Itô DP prior.
Particularly, given a DP prior p and observations , we are interested in approximating the non-Gaussian DP posterior p_| with a linear-Gaussian DP q (<ref>).
The seminal work by <cit.> used the framework of variational inference <cit.> to derive an approximate inference algorithm, where the approximating variational family 𝒬 consists of time-variant linear (affine) DPs (i.e., Markovian GPs, App. B in <cit.>):
q : _t = f_q(_t, t) t + _t, _0 ∼ q(_0),
s.t. f_q(_t, t) = _t+_t.
They also introduced an objective for approximate inference (the variational evidence lower bound or ELBO) and a fixed-point iteration algorithm to optimize it.
In this paper, we highlight shortcomings in the method proposed by <cit.>, which we refer to as : (i) the inference algorithm is slow to converge even in the simple setting of linear diffusions, and (ii) the parameterization of q via its drift function is ill-suited to the problem of learning the parameters of the prior diffusion from observations.
To tackle these issues, we keep the same problem formulation and objective, but introduce an alternative parameterization <cit.> of the variational process and a new optimization algorithm. Crucially, the new parameterization allows us to trade the slow fixed-point algorithm for a fast and better-understood algorithm for convex optimization,
akin to natural gradient descent <cit.>.
This stabilizes and drastically speeds up inference (<ref> middle) and facilitates
parameter learning.
The contributions of this paper are as follows: (i) We propose a novel site-based approach, the , for variational inference in diffusion processes that exploits the structure of the optimal variational posterior; (ii) We show how speeds up inference and learning under both linear and non-linear DP priors; and (iii) We demonstrate the feasibility and efficiency of our approach on a wide range of inference problems with DP priors featuring multi-modal, skew, and fat-tailed behaviours.
§.§ Related work
For inference in general non-linear Gaussian sequential models, particle filtering a.k.a. sequential Monte Carlo (SMC, <cit.>) methods are popular tools. When computing the posterior given some observations (the smoothing problem), conditional particle filters <cit.> have proven reliable in avoiding mode-collapse and particle degeneracy <cit.>.
For the task of model parameter learning, advances in automatic differentiation have opened new possibilities for black-box learning of continuous-time dynamics (e.g., <cit.>). Closer to our work, which is grounded in the framework of variational inference, <cit.> introduced a tractable, sampling-based algorithm for inference and learning in general SDE models where the posterior process is parameterized via its non-linear drift and diffusion functions.
However, although general in scope, these methods are unnecessarily heavy and compute-intensive for many practical applications.
We take an alternative route, trading some generality (by restricting the posterior process to be a GP) for some efficiency. We do so by explicitly `Gaussianizing' the non-Gaussian observations and linearizing the non-linear diffusion. These principles are well known per se, dating back to extended Kalman(–Bucy) filtering/smoothing (overview in Ch. 10 <cit.>) with numerous extensions (reviewed in <cit.>) such as posterior linearization <cit.>. In ML, these methods have given rise to expectation propagation (EP, <cit.>) and variational inference (VI, <cit.>).
This work aims at improving originally proposed in <cit.> (overview in <ref>). The algorithm has been used as a base for various other methods making it an important building block, in drift estimation <cit.> and switching systems <cit.>,
and learning neural network drift functions <cit.>.
Our approach turns the hard problem of VI under a DP prior into an easier problem of VI under a GP prior. For the latter problem, it is common to restrict the variational process to a GP <cit.>. Efficient inference and learning algorithms that exploit the exponential family structure of the manifold of GPs and its geometry can then be derived which are now state of the art <cit.>. These algorithms are equivalent to natural gradient descent <cit.> and exploit the structure of the optimal variational process in the natural parameterization <cit.>, splitting the contribution of the prior and the observations into the posterior in an additive fashion.
These previous approaches consider models with GP priors and non-Gaussian likelihoods, while we focus on the more general problem of DP priors.
§ VARIATIONAL INFERENCE AND LEARNING FOR DIFFUSION PROCESSES
An Itô diffusion process (DP) with state dimension d can be defined by an SDE as
p : _t = f_p(_t, t) t + _t, s.t. _0 ∼ p(_0),
where the drift f_p: ^d×_+ →^d is a non-linear mapping, the diffusion term is linear (we drop dependency on t for notational convenience),
and _t denotes the Brownian motion with a spectral density _c.
The data ={(t_i,y_i)}_i=1^n comprises input–output pairs and is assumed to constitute independent and identically distributed noisy versions of state trajectory at n ordered discrete-time points via an observation model providing the likelihoods {p(y_i |_i = _t |_t=t_i)}_i=1^n.
The posterior process p_| can be shown to have the following structural properties:
(i) it shares the same diffusion coefficient as the prior; (ii) its drift can be expressed as the sum of the prior drift f_p and a data- and prior-dependent term g resulting from a backward pass through the process and observations (see <ref>):
p_| : _t = f_p(_t, t) + g(, p, t) t + _t, s.t. _0 ∼ p_|(_0).
However, for most of the settings of interest, the posterior p_| is intractable and therefore approximate inference methods are used for both inference and learning.
Variational inference (VI) turns the inference problem into an optimization problem by introducing a variational distribution q over the latent variables and maximizing a lower bound ℒ(q) to the log marginal likelihood of the observations (or ELBO),
log p() ≥ 𝔼_q[ log p(|) ] - KL[q ‖ p] = ℒ(q),
where p is the prior distribution over and p(|) is the likelihood derived from the observation model.
The gap in the inequality can be shown to be log p() - ℒ(q) = KL[q ‖ p_|𝒟]. Thus, the bound is tight for q = p_|𝒟, i.e., when q is the posterior distribution.
Note here that we use probability density functions to refer to the associated distributions, as is commonly done in the field.
In the case of diffusion processes, the KL divergence between the variational process q() and the prior process p() can be expressed by using Girsanov theorem <cit.> leading to the ELBO,
ℒ(q) = 𝔼_q()[ log p(|) ] - 1/2∫_0^T 𝔼_q(_t)[ ‖ f_q(_t, t) - f_p(_t, t) ‖^2__c^-1 ] dt - KL[ q(_0) ‖ p(_0) ],
where ‖·‖^2__c^-1 is the weighted 2-norm associated with the inner product ⟨,⟩__c^-1 = ^⊤_c^-1, and we set the diffusion function of the posterior process to its optimal value, which is that of the prior.
The prior DP might have free model parameters (parameterizing the drift function as in <ref>) that need to be adjusted to best explain the observations, a task we refer to as the learning problem. In this scenario, the ELBO (q, ) depends on both the model parameters and the variational distribution q. Noting q^*() = arg max_q (q, ), the optimal variational distribution for fixed model parameters , a common objective to solve the learning problem is to maximize the objective (q^*(); ). This nested optimization problem is intractable and usually replaced by coordinate ascent of the ELBO with respect to (q, ), a procedure known as variational Expectation–Maximization (VEM, <cit.>). The efficacy of VEM strongly depends on the choice of parameterization for q and its dependence on q <cit.>.
§.§ Gaussian variational inference for diffusion processes ()
<cit.> propose to restrict the variational process q() to be a diffusion with affine drift
Q = {
q : _t = (_t _t + _t) t + _t, _0 ∼ q(_0)
},
with _t ∈^d × d and _t ∈^d, corresponding to the set of Markovian Gaussian processes (see Ch. 12 in <cit.>).
In this setting, the marginal distribution q(_t) of the process are fully characterised by the mean and covariance =(_t, _t).
We summarize their method here, leaving details to <ref>.-1
Constrained optimization
The authors express the ELBO <ref> in terms of both the variational parameters =(_t, _t) via the drift function f_q and the marginal statistics via the expectations under q(_t), which are coupled through ODEs:
C[, ](t) =
[ _t - _t _t - _t; _t - _t _t - _t _t^⊤ - _c ] = 0 ∀ t.
Therefore, optimization of the ELBO can be expressed as the optimization of
ℒ(,) = 𝔼_q()[ log p(|) ] - 1/2∫_0^T 𝔼_q(_t)[ ‖ (_t _t + _t) - f_p(_t, t) ‖^2__c^-1 ] dt - KL[ q(_0) ‖ p(_0) ],
subject to constraint C[,](t) =0, ∀ t.
They propose to solve this constrained optimization problem via the method of Lagrangian multipliers (see <ref>).
This approach does not lead to a closed-form expression for the solution, but gives stationarity conditions for the optimal variational parameters,
(^*_t, ^*_t) = Π_q^*(_t)[f_p(·, t)] + g(, ^*, ^*, t),
where Π_q(_t)[f_p(·,t)] is the posterior linearization operator applied to the prior drift f_p(·,t) defined by
Π_q(_t)[f_p(·,t)] = arg min_(_t, _t) 𝔼_q(_t)[ ‖ _t _t + _t - f_p(_t, t) ‖^2__c^-1 ].
Informally, the operator finds the best linear approximation of the drift in a squared loss sense, in expectation over the posterior process.
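In one dimension and with a scalar diffusion the operator reduces to a least-squares fit of an affine function to the drift under the current marginal; a Monte Carlo sketch (illustrative only, with an assumed double-well drift as the example) could look as follows:

```python
import numpy as np

def posterior_linearize(f_p, mean, var, n_samples=10_000, seed=0):
    """Return (A, b) minimizing E_q[(A*x + b - f_p(x))^2] for q = N(mean, var).

    Monte Carlo stand-in for the posterior linearization operator Pi_q[f_p];
    in 1-D with scalar diffusion the Q_c^{-1} weighting drops out."""
    rng = np.random.default_rng(seed)
    x = mean + np.sqrt(var) * rng.standard_normal(n_samples)
    X = np.stack([x, np.ones_like(x)], axis=1)       # design matrix for A*x + b
    coef, *_ = np.linalg.lstsq(X, f_p(x), rcond=None)
    return coef[0], coef[1]

# Example: linearize a double-well drift around a marginal centred at 1.0.
A, b = posterior_linearize(lambda x: 4.0 * x * (1.0 - x**2), mean=1.0, var=0.05)
```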
Fixed point iterative algorithm
These stationarity conditions imply an iterative algorithm,
(^(k+1)_t, ^(k+1)_t) = Π_q^(k)(_t)[f_p(·, t)] + g(, _t^(k), _t^(k), t) .
for finding the variational parameters. An appealing property of these fixed point updates is that they mimic (up to the posterior linearization) the additive expression of the exact posterior drift in <ref> (see <ref> for details). As a follow-up, <cit.> made clear the connection to posterior linearization, which is also used as an explicit sub-routine in approximation algorithms <cit.>.
Limitations of the method
The first issue is with the iterative algorithm in <ref>.
The iterative algorithm is introduced out of convenience because it leads to closed-form updates. However, it does not come with convergence guarantees.
Another issue stemming from the stationarity conditions <ref> is that unlike in the exact inference case <ref>, the deviation g from the prior drift depends on the optimal posterior (^*, ^*) instead of on the prior p. This dependence makes the fixed-point updates slow to converge—even for simple linear diffusion (see <ref> right).
A downstream problem is in the learning of model parameters (via ELBO optimization) due to the parameterization choice of q. Following <cit.>, we argue that when learning parameters via ELBO maximization, the best parameterization of the variational posterior q is one that completely decouples the contribution of the prior from that of the observations. It is not possible to achieve this when parameterizing q via its drift function. This is true even when exploiting the additive expression of the exact drift <ref> because the deviation term g still mixes the prior and the observations. The algorithm proposed in <cit.> inherits this more general problem of parameterization via the drift function.
In the next section, we describe our method and how it fixes the aforementioned issues by exploiting the exponential family structure of linear diffusion processes. By parameterizing the posterior in terms of its natural parameters, (i) we bypass the need to use a fixed point algorithm, trading it for a well-understood convex optimization algorithm, and (ii) we achieve a better separation of the prior and observation contributions to the posterior, speeding up the dynamics of learning.
§ CONJUGATE-VARIATIONAL INFERENCE FOR DIFFUSION PROCESSES ()
We focus on performing approximate inference in non-linear diffusion process priors and an arbitrary likelihood. Our approach hinges on the following steps: (i) We frame our method within the framework of variational inference, restricting the variational process to the set of linear diffusions; (ii) such a variational process belongs to an exponential family, and we parameterize it via its natural parameters; (iii) we speed up inference via natural gradient descent and ease learning by incorporating iterative posterior linearization of the prior in the framework. In the following sections, we proceed step-by-step in deepening and devising this setup.
§.§ Inference in models with linear diffusions and Gaussian observation model
Consider the following continuous-discrete Gaussian diffusion model with Gaussian observations:
_t = (_t _t + _t) t + _t
and
y_i | = ^⊤_t_i + ϵ_i,
with ϵ_i ∼(0, σ^2) and ∈^d (example of such a model in <ref>).
The diffusion process can be marginalized to state evaluations at data input {_i=_t_i}_i=1^N leading to the discrete-time Markov chain
_i+1 = _i _i + _i + ϵ̂_i, ϵ̂_i ∼(0, _i),
where (_i, _i, _i) are available in closed-form.
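As a concrete illustration, for a scalar Ornstein–Uhlenbeck prior dx_t = -θ x_t dt + dβ_t these transition statistics take the familiar closed form below; the sketch is a standard result written out for reference, with assumed parameter values:

```python
import numpy as np

def ou_transition(theta, q, dt):
    """Transition statistics of the scalar OU process on a step of length dt:
    x_{i+1} = A x_i + b + eps,  eps ~ N(0, Q), with Brownian spectral density q."""
    A = np.exp(-theta * dt)
    b = 0.0                                            # zero-mean drift, no offset
    Q = q / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * dt))
    return A, b, Q

A, b, Q = ou_transition(theta=0.5, q=1.0, dt=0.1)      # illustrative values
```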
The state vector is distributed as a multivariate Gaussian, with Markovian structure, whose probability density function factorizes as p(_0,…,_N) = p(_0) ∏_i=1^N p(_i|_i-1). The marginalized process belongs to the exponential family which we define as containing non-degenerate multivariate Gaussians with a probability density
p(_0,…,_N) = exp [ ⟨𝖳(), _p ⟩ - A(_p) ],
where 𝖳() = [, btd(^⊤)] is the set of sufficient statistics, with btd() setting the entries outside of the d-block tri-diagonals to zero (see <ref>). The natural parameters of the prior _p are related to the prior mean _p and covariance _p of the distribution via _p = [_p^-1_p, -1/2 _p^-1].
Each of the likelihood factors p(y_i |_i) can be expressed as proportional to the density of an exponential family distribution conjugate to with sufficient statistics 𝖳(^⊤_i) and natural parameters _i^* = (y_i/σ^2, -1/(2σ^2)), which we stack into the vector pair ^* = (/σ^2, -1_N/(2σ^2)) ∈^N× 2.
Thus, due to the conjugacy, the posterior distribution p(|) ∝ p(|) p() belongs to . More formally,
noting =_N ⊗∈^Nd × N and introducing the linear projection ϕ() = (_1, diag(_2)^⊤),
the natural parameters of the posterior separate contribution of the prior and the observation in an additive manner, _|=_p +ϕ(^*).
In this setting, the marginal likelihood also has a closed-form expression, p() =(_p, _p^⊤+σ^2 _n), allowing gradient-based optimization of prior hyperparameters.
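A one-dimensional, single-observation sketch of this additive structure (with assumed values and ignoring the Markov-chain part of the state) reads:

```python
import numpy as np

# Prior N(m, v) for the state at one observation time (assumed values).
m, v = 0.0, 2.0
eta_prior = np.array([m / v, -0.5 / v])      # natural parameters [S^-1 m, -1/2 S^-1]

# Gaussian likelihood y ~ N(x, sigma2) contributes a conjugate "site".
y, sigma2 = 1.3, 0.5
site = np.array([y / sigma2, -0.5 / sigma2])

# Conjugacy: posterior natural parameters are prior plus site.
eta_post = eta_prior + site
v_post = -0.5 / eta_post[1]                  # back to moment form
m_post = v_post * eta_post[0]                # = 1.04 for these numbers
```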
§.§ Inference in models with linear diffusions and arbitrary observation model
When the observation model is not Gaussian, we can still marginalize the diffusion to a finite Markov chain. However, the posterior no longer belongs to an exponential family , and we need to resort to approximate inference.
Under the variational framework, restricting q to belong to has the convenient property that the optimal posterior has the same additive decomposition as in the conjugate case: _q^* = _p + ϕ(^*), with ^* = (_1^*, _2^*).
This is revealed by looking at the first-order stationary condition of the optimal distribution q^*. Distributions in can be parameterized via their natural parameters but equivalently by their expectation parameters =[𝖳()]. The gradient of the ELBO with respect to the expectation parameters of q is given by ∇_ = ∇__q[log p(|)] - (_q - _p), where we used the property ∇_ KL[q ‖ p] = _q - _p. At the optimum,
∇_|_^* = 0 _q^* = _p + ∇__q[log p(|)]|_^*.
The gradient of the expected log-likelihood can be shown to be sparse and low rank, ∃^*, ϕ(^*) = ∇__q[log p(|)]|_^*.
To find the optimal variational parameters, it is sufficient to search the space of distribution in with natural parameters _q = _p + ϕ().
Conjugate-computation VI (CVI, <cit.>) gives an efficient algorithm to find the optimal ^* by running mirror descent with the Kullback–Leibler divergence (KL) as the Bregman divergence. More precisely, using a step-size ρ, it constructs a sequence of iterates ^(k) via
^(k+1) = arg max_ ⟨∇_ L(^(k)), ⟩ - 1/ρ KL[ q(; ) ‖ q(; ^(k)) ] .
This maximization can be computed in closed-form leading to updates in the natural parameterization
^(k+1) = (1-ρ) ^(k) + ρ ϕ^-1(
∇__q^(k)[log p(|)]
).
In the case of Gaussian observations, the gradient term in the updates is independent of where the gradient is evaluated and equal to ^*, so a single step of CVI with step-size ρ=1 leads to the optimum (see <ref> for an empirical example). In the more general setting of log-concave likelihoods, the iterations in <ref> are guaranteed to converge to the optimum.
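The single-step behaviour for Gaussian observations is easy to check in one dimension; the sketch below (a toy illustration with assumed values, not the MarkovFlow implementation) performs one damped CVI update of the site parameters:

```python
import numpy as np

def site_gradient(m, v, y, sigma2):
    """Gradient of E_{N(x; m, v)}[log N(y; x, sigma2)] w.r.t. the expectation
    parameters (m, v + m^2); for a Gaussian likelihood it is constant and
    equals the conjugate site (y/sigma2, -1/(2*sigma2))."""
    return np.array([y / sigma2, -0.5 / sigma2])

def cvi_step(lam, eta_prior, y, sigma2, rho):
    """One mirror-descent (CVI) step on the site natural parameters lam."""
    eta_q = eta_prior + lam                  # current variational natural parameters
    v = -0.5 / eta_q[1]
    m = v * eta_q[0]
    return (1.0 - rho) * lam + rho * site_gradient(m, v, y, sigma2)

eta_prior = np.array([0.0, -0.25])           # prior N(0, 2), assumed
lam = cvi_step(np.zeros(2), eta_prior, y=1.3, sigma2=0.5, rho=1.0)
# With rho = 1 and a Gaussian likelihood, this single step reaches the optimum.
```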
§.§ Inference in non-linear diffusions with arbitrary observation model
Finally, we consider the challenging setting of non-linear diffusions. Similar to the algorithm proposed in <cit.>, we adopt the variational approach and restrict the posterior process to the set of linear diffusions.
However, instead of directly parameterizing the linear drift of the variational process, we extend the finite Markovian exponential family to the continuous setting. Intuitively, such a continuous exponential family _c can be constructed as the infinite limit of small increments in the input space (the Δt → 0 limit). More precisely, we introduce the sufficient statistics 𝖳[] = [_t, _t(_t+_t)^⊤]_t ∈ [0,T], and denote by (·) and (·) the associated natural and expectation parameters, which are now functions indexed by time.
Restricting the variational process to belong to _c means we force its drift to be linear. Our aim here is to parameterize the posterior so as to maintain, as much as possible, the additive separation of the contribution of the prior and observations to the natural parameterization of the posterior.
Unlike the case of linear diffusion, the Kullback–Leibler divergence in the ELBO and its gradient is no longer tractable. We side-step this problem by introducing a base distribution b∈_c that we will specify later to approximate the prior process p. With such a reference process b, we rewrite the ELBO as
ℒ(q) = 𝔼_q[ log p(|) ] + ( KL[q ‖ b] - KL[q ‖ p] ) - KL[q ‖ b], where the bracketed term defines e(q, p, b) := KL[q ‖ b] - KL[q ‖ p].
This corresponds to the ELBO of an alternative inference problem in a generative model with linear prior diffusion b, observations , and an additional implicit likelihood whose expected logarithm is given by e(q,p,b). This term can be interpreted as an error quantifying how good b is in approximating p in the context of the variational distribution q. Crucially, e(q,p,b=p)=0 is only achievable when p is a linear diffusion. The first-order stationarity conditions for an optimum are
_q^* = _b
+ ∇__q[log p(|)]|_^*_ϕ_c(^*)
+ ∇_ e(q[],p, b)|_^*_^*,
where ^* is sparse in time and depends on the observations whereas ^* is dense and ϕ_c()(t) = (∑_i=1^N _i,1δ_t=t_i, ∑_i=1^N ^⊤_i,2δ_t=t_i).
By parameterizing the posterior as _q = _b + ϕ_c() +, we can perform inference via the same mirror descent procedure as in <ref> and get the iterative updates
^(k+1) = (1-ρ) ^(k) + ρ ϕ_c^-1(
∇__q^(k)[log p(|)]
),
^(k+1) = (1-ρ) ^(k) + ρ ∇_e(q,p,b) .
Details on the derivation of these updates are given in <ref>. We are left to choose the base process b. For inference (optimizing for q given a fixed generative model) and under mild regularity conditions on the objective, it can be set arbitrarily: the iterative updates <ref> and <ref> can reach the optimal variational parameters irrespective of the base b and initial variational parameters ^(0), ^(0).
The simpler settings of prior diffusions with linear drifts and Gaussian observations shed some light on how to choose the base.
Indeed, in the case of Gaussian observations, ^* is independent of the prior p and captures all there is to know about the observations. To decouple the contributions of the prior and observations in the posterior, it is desirable to have ^* be independent of the observations. This is naturally achieved in the case of linear diffusion priors by setting the base to the prior (b=p); in that case, we have e(q, p, b)=0, ∀ q and thus ^*=0: we recover the efficient scheme of <ref> for both inference and learning in models with linear diffusion priors.
Equipped with this insight, we return to the general setting of non-linear diffusions and propose to set b so as to minimize the proxy objective e(q^*,p, b). An alternative expression for e reveals the connection of this approach to posterior linearization introduced in <ref>,
e(q, p, b) = 𝔼_q(x)[ ‖ f_b - f_p ‖^2__c^-1 ] + 2 𝔼_q(x)[ ⟨ f_q - f_b, f_b - f_p ⟩__c^-1 ].
Indeed, posterior linearization minimizes the first term of <ref>. In this work, we choose to set b iteratively via posterior linearization within a broader algorithm for inference and learning, which we call and describe in <ref>. In , we alternate steps of (i) inference via mirror descent on (q, ) with respect to q, (ii) posterior linearization to update the base b, and (iii) learning via gradient descent on (q(), ) with respect to hyperparameters .
Comparison with the method
is built on top of efficient algorithms specialized for the setting of linear diffusion priors (i.e., GPs). One of its appealing properties is that it reverts to these efficient algorithms when applied to problems in the linear-Gaussian setting. In that sense, it unifies variational inference in models with a GP and a DP prior. This is unlike the , which is very inefficient in these scenarios in terms of speed of convergence for both inference and learning (<ref>).
However, the parameterization of is slightly more costly than that of . Both methods implicitly learn the transition statistics (_i, _i, _i) of a linear Gaussian state space model (see <ref>), but does not learn the _i and restricts the diffusion coefficient to match that of the prior diffusion. This is suboptimal when the continuous algorithm is discretized at implementation time: in <ref> we show that the performance of degrades fast as the discretization gets coarser, while the performance of is relatively unaffected.
§ EXPERIMENTS
We implement both and within the MarkovFlow <cit.> framework built on top of TensorFlow <cit.> and perform a series of experiments to showcase various properties of these methods. For inference and learning, we evaluate the performance of these methods on inference problems covering a set of diffusion process priors, both linear and non-linear, each with its own characteristics (<ref>). For inference, we show the proposed method performs better than and is on par with the sequential Monte Carlo baseline. For learning, we use the same setup but also learn the model parameters of the DP prior. We show how provides a better learning objective, leading to faster learning. Furthermore, to showcase the applicability of in the real world, we demonstrate inference on finance and GPS tracking data sets (<ref>).
§.§ Evaluation of approximate inference on synthetic problems
We comparatively evaluate our method on synthetic inference problems covering an array of DP priors:
the linear DP (Ornstein–Uhlenbeck, OU, <ref>), dx_t = -θ x_t dt + dβ_t, as a sanity check for which the posterior process can be written in closed form; the DP, dx_t = θ tanh(x_t) dt + dβ_t, whose marginal state distributions are bimodal and for which mode-switching in sample state trajectories becomes increasingly unlikely with time (<ref>); the double-well (DW) DP, dx_t = θ_0 x_t (θ_1 - x_t^2) dt + dβ_t, whose marginal state distributions have two modes that sample state trajectories keep visiting through time (<ref>); a Sine DP, dx_t = θ_0 sin(x_t - θ_1) dt + dβ_t, whose marginal state distributions have many modes (<ref>); and a Square-root DP, dx_t = √(θ |x_t|) dt + dβ_t, which has divergent fat-tailed behaviour (<ref>).
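For reference, synthetic trajectories from these priors can be drawn with a plain Euler–Maruyama scheme; a short sketch for the double-well DP (parameter values are illustrative, not those used in the experiments) is given below:

```python
import numpy as np

def euler_maruyama(drift, x0, dt, n_steps, q=1.0, seed=0):
    """Simulate dx_t = drift(x_t) dt + dbeta_t, with spectral density q."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + drift(x[i]) * dt + np.sqrt(q * dt) * rng.standard_normal()
    return x

# Double-well prior: dx_t = theta0 * x_t * (theta1 - x_t^2) dt + dbeta_t
path = euler_maruyama(lambda x: 4.0 * x * (1.0 - x**2), x0=0.0, dt=0.01, n_steps=2000)
```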
Baselines As a baseline, we use sequential Monte Carlo (SMC) as particle smoothing through conditional particle filtering with ancestor sampling adopted from <cit.>. The `optimal' Gaussian baseline is based on a Gaussian fit to the SMC samples. To approximate the log marginal likelihood, we use annealed importance sampling (AIS, <cit.>) with a similar setup as in <cit.> (details in <ref>).
[Wrapped figure: model parameter θ (y-axis "Model parameter, θ") versus optimization iterations k (x-axis "Iterations, k") for the double-well DP, with curves for discretizations Δ t=0.01 and Δ t=0.001 (plot data omitted). Caption: Faster learning of the double-well DP parameter θ (M-step) of the proposed method as compared to the baseline.]
Speeding up inference
We compare the performance of approximate inference using the proposed method and the baseline on three aspects:
convergence speed, accuracy of the posterior approximation, and robustness to the discretization of the time horizon.
As a sanity check, we start with a linear DP prior, for which the proposed method reaches the optimal posterior after a single iteration (<ref>).
For non-linear DP priors, the proposed method also converges faster than the baseline (see <ref> for an example with the DW prior).
Convergence plots for other DPs are available in <ref>.
To measure the accuracy of the approximate posterior, we use negative log predictive density (NLPD) with 5-fold cross-validation (results in <ref>).
The proposed method is robust to the discretization of the time horizon; we report results for both methods under different discretizations (Δ t ∈ {0.01, 0.005, 0.001}).
A coarser grid leads to a model with fewer parameters, since the number of variational parameters scales inversely with the discretization step Δ t.
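As an illustration of this scaling, the sketch below counts variational parameters under the assumption that the linear posterior drift is parameterized per grid point (e.g., one pair (A_k, b_k) per step); the horizon, step sizes, and the per-step parameterization are illustrative assumptions rather than the exact setup.

# Illustrative sketch: how the variational parameter count grows as dt shrinks,
# assuming a per-grid-point parameterization of the linear posterior drift.
def num_variational_params(T=10.0, dt=0.01, params_per_step=2):
    n_steps = int(round(T / dt))       # grid points on [0, T]
    return n_steps * params_per_step   # e.g. (A_k, b_k) per step

for dt in (0.01, 0.005, 0.001):
    print(dt, num_variational_params(dt=dt))  # halving dt doubles the count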
Learning of model parameters
To showcase the learning capability of the proposed method, we use the same setup as in the evaluation of inference, but now also learn the parameters θ of the prior DP. We compare the two methods on two aspects: speed of learning
and posterior predictive accuracy. The proposed method provides a better objective for learning (<ref>), which leads to a faster learning algorithm (<ref>).
We also report the posterior predictive performance of the methods in <ref> using NLPD with 5-fold cross-validation as a metric.
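For reference, the minimal sketch below computes the NLPD metric under a Gaussian posterior predictive; the per-point predictive means and variances (and the function name) are illustrative assumptions.

import numpy as np

# Illustrative NLPD under a Gaussian predictive: average negative log density
# of held-out observations under per-point predictive means/variances.
def nlpd(y_test, pred_mean, pred_var):
    return np.mean(0.5 * np.log(2 * np.pi * pred_var)
                   + 0.5 * (y_test - pred_mean) ** 2 / pred_var)

print(nlpd(np.array([0.1, -0.3]), np.array([0.0, -0.2]), np.array([0.04, 0.09])))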
Further comparisons
Finally, we compare against the popular class of NeuralSDE methods <cit.>. These methods are variational inference algorithms with a broader scope than the proposed method: the posterior process q is not restricted to be a linear DP. However, they rely on sample-based estimation of the ELBO gradient, which incurs a large computational cost, and convergence of the optimization via stochastic gradient descent is often slow (see <ref>).
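To see why sample-based ELBO estimation is costly, the sketch below forms a Monte Carlo ELBO estimate for an SDE posterior via Euler–Maruyama simulation, with the pathwise KL term given by the Girsanov drift residual; the drifts, observation model, and all parameter values are illustrative stand-ins rather than the cited NeuralSDE implementations.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative drifts and diffusion: double-well prior, a simple parametric
# posterior drift standing in for a neural network, Gaussian observation noise.
def f_prior(x, theta0=4.0, theta1=1.0):
    return theta0 * x * (theta1 - x ** 2)

def f_post(x, a=-1.0, b=0.5):
    return a * x + b

sigma, dt, T = 1.0, 0.01, 5.0
obs_t = np.array([1.0, 2.5, 4.0])     # observation times
obs_y = np.array([1.1, -0.8, 0.9])    # observed values
obs_std = 0.2

def elbo_sample():
    """One Monte Carlo ELBO sample via Euler--Maruyama simulation of the posterior."""
    x, kl, loglik = 0.0, 0.0, 0.0
    obs_idx = (obs_t / dt).astype(int)
    for k in range(int(T / dt)):
        u = (f_post(x) - f_prior(x)) / sigma           # Girsanov drift residual
        kl += 0.5 * u ** 2 * dt                        # pathwise KL increment
        if k in obs_idx:                               # Gaussian observation log-likelihood
            y = obs_y[np.where(obs_idx == k)[0][0]]
            loglik += -0.5 * ((y - x) / obs_std) ** 2 - np.log(obs_std * np.sqrt(2 * np.pi))
        x += f_post(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return loglik - kl

print(np.mean([elbo_sample() for _ in range(64)]))     # every gradient step repeats such simulations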
§.§ Empirical evaluation on finance and vehicle tracking data
We showcase the learning capability of the proposed method on real-world data sets: finance and vehicle tracking data from <cit.>. For both data sets, we parameterize the prior DP drift f_p as an MLP neural network <cit.> with one hidden layer of 3 nodes and the ReLU activation function <cit.>, and learn its parameters.
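A minimal sketch of such a drift parameterization, assuming a one-dimensional state and randomly initialized weights (not the paper's code or data), is given below.

import numpy as np

# Illustrative MLP drift: one hidden layer with 3 ReLU units, mapping state x
# to a drift value; the weights are the prior parameters to be learned.
class MLPDrift:
    def __init__(self, state_dim=1, hidden=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(hidden, state_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=(state_dim, hidden))
        self.b2 = np.zeros(state_dim)

    def __call__(self, x):
        h = np.maximum(0.0, self.W1 @ x + self.b1)   # ReLU hidden layer
        return self.W2 @ h + self.b2                 # linear output layer

f_p = MLPDrift()
print(f_p(np.array([0.3])))   # drift evaluated at a single 1-D state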
[Wrapped figure: log-price (USD) versus time t (years, 1985–2015) for the Apple stock data (background image fig/applestock-000.png), with an overlaid model trajectory (plot data omitted).]
7352 3.47462918747719
7353 3.47496564092943
7354 3.47530178234356
7355 3.47563761186337
7356 3.47597312976909
7357 3.47630833648623
7358 3.47664323209311
7359 3.47697781696993
7360 3.47731209159383
7361 3.47764605590623
7362 3.47797971043003
7363 3.47831305525529
7364 3.4786460909065
7365 3.47897881747264
7366 3.47931123544596
7367 3.47964334491305
7368 3.47997527791865
7369 3.48030690295636
7370 3.4806382201636
7371 3.48096923017435
7372 3.48129993294473
7373 3.48163032906434
7374 3.48196041882068
7375 3.48229020224066
7376 3.48261967981471
7377 3.48294885203298
7378 3.4832777189392
7379 3.48360628082853
7380 3.48393453774057
7381 3.48426249039789
7382 3.48459013893112
7383 3.48491748345127
7384 3.48524452445921
7385 3.48557126216734
7386 3.4858976968845
7387 3.48622382911704
7388 3.48654965893733
7389 3.48687518662516
7390 3.48720041230658
7391 3.48752533642142
7392 3.48784995890576
7393 3.48817428058727
7394 3.48849830118535
7395 3.48882202121489
7396 3.48914544120335
7397 3.48946856097213
7398 3.48979138110248
7399 3.4901139019882
7400 3.49043612369778
7401 3.49075804621649
7402 3.49107967025806
7403 3.49140099616479
7404 3.49172202396894
7405 3.4920427539593
7406 3.49236318640829
7407 3.49268332174598
7408 3.49300316016633
7409 3.49332270200407
7410 3.49364194744916
7411 3.49396089667424
7412 3.49427955007025
7413 3.4945979077388
7414 3.49491597015014
7415 3.4952337373949
7416 3.49555121012671
7417 3.49586838833428
7418 3.4961852724312
7419 3.49650186250286
7420 3.49681815892892
7421 3.49713416182141
7422 3.49744987179035
7423 3.49776528876234
7424 3.49808041331824
7425 3.49839524543035
7426 3.49870978545461
7427 3.49902403286299
7428 3.49933798875393
7429 3.49965165339963
7430 3.4999650273163
7431 3.50027811044167
7432 3.50059090332426
7433 3.50090340591059
7434 3.50121561850256
7435 3.50152754148278
7436 3.50183917521684
7437 3.50215051965921
7438 3.5024615740313
7439 3.50277233906104
7440 3.50308281573996
7441 3.50339300441465
7442 3.50370290554129
7443 3.50401251893101
7444 3.50432184511269
7445 3.50463088453263
7446 3.50493963716058
7447 3.50524810339546
7448 3.50555628347746
7449 3.50586417783807
7450 3.50617178634796
7451 3.50647910954664
7452 3.50678614773292
7453 3.50709290111545
7454 3.50739936991557
7455 3.50770555439092
7456 3.50801145485107
7457 3.50831707157269
7458 3.50862240472084
7459 3.50892745447899
7460 3.50923222144869
7461 3.50953670560886
7462 3.5098409073635
7463 3.51014482683596
7464 3.51044846450953
7465 3.51075182040907
7466 3.51105489503364
7467 3.51135768828683
7468 3.51166020092391
7469 3.51196243270798
7470 3.51226438395337
7471 3.51256605525791
7472 3.51286744655908
7473 3.51316855832748
7474 3.51346939071765
7475 3.51376994405585
7476 3.51407021866863
7477 3.51437021468066
7478 3.51466993233884
7479 3.51496937185482
7480 3.51526853371476
7481 3.51556741798062
7482 3.51586602486657
7483 3.51616435464932
7484 3.51646240794754
7485 3.51676018451632
7486 3.51705768490342
7487 3.5173549093369
7488 3.51765185790383
7489 3.51794853090783
7490 3.5182449287546
7491 3.51854105157274
7492 3.51883689968487
7493 3.51913247340137
7494 3.5194277726734
7495 3.51972279807266
7496 3.52001754998003
7497 3.52031202830726
7498 3.52060623348743
7499 3.52090016552331
7500 3.52119382499952
7501 3.52148721182485
7502 3.52178032652674
7503 3.52207316924321
7504 3.52236574031955
7505 3.52265804005945
7506 3.52295006844915
7507 3.52324182593245
7508 3.52353331272269
7509 3.52382452906961
7510 3.5241154752949
7511 3.52440615154964
7512 3.52469655805774
7513 3.52498669512256
7514 3.52527656309964
7515 3.52556616222621
7516 3.52585549246598
7517 3.52614455446296
7518 3.52643334835541
7519 3.52672187412251
7520 3.52701013200153
7521 3.52729812264242
7522 3.52758584606065
7523 3.52787330244107
7524 3.52816049223435
7525 3.52844741541916
7526 3.52873407226087
7527 3.52902046320487
7528 3.52930658853377
7529 3.52959244841022
7530 3.52987804296301
7531 3.53016337247556
7532 3.53044843709553
7533 3.5307332372614
7534 3.53101779281358
7535 3.53130208428489
7536 3.53158611187
7537 3.53186987585008
7538 3.53215337654968
7539 3.53243661401624
7540 3.53271958875924
7541 3.53300230064198
7542 3.53328475023963
7543 3.53356693761529
7544 3.53384886312013
7545 3.53413052690163
7546 3.53441192916791
7547 3.53469307028299
7548 3.5349739504642
7549 3.53525456983073
7550 3.53553492882118
7551 3.5358150275419
7552 3.53609486627355
7553 3.53637444502397
7554 3.53665376457107
7555 3.53693282495522
7556 3.53721162632863
7557 3.53749016858683
7558 3.53776845258919
7559 3.53804647810466
7560 3.53832424559869
7561 3.5386017552562
7562 3.5388790074085
7563 3.53915600197652
7564 3.53943273950536
7565 3.53970922006596
7566 3.53998544414479
7567 3.54026141186192
7568 3.54053712344536
7569 3.5408125790532
7570 3.54108777881611
7571 3.54136272320244
7572 3.54163741253462
7573 3.54191184668244
7574 3.54218602613763
7575 3.54245995108363
7576 3.54273362176943
7577 3.54300703856082
7578 3.54328020117843
7579 3.54355311046806
7580 3.5438257661878
7581 3.54409816874055
7582 3.54437031836145
7583 3.54464221560476
7584 3.54491386031175
7585 3.54518525295034
7586 3.54545639359448
7587 3.54572728279535
7588 3.54599792031766
7589 3.54626830641092
7590 3.54653844148592
7591 3.54680832593134
7592 3.54707795977125
7593 3.54734734329182
7594 3.54761647670698
7595 3.54788536023173
7596 3.54815399401488
7597 3.54842237856554
7598 3.54869051413887
7599 3.5489584003986
7600 3.54922603820286
7601 3.5494934274455
7602 3.54976056823256
7603 3.55002746098704
7604 3.55029410608919
7605 3.55056050351318
7606 3.55082665339654
7607 3.55109255626372
7608 3.55135821225152
7609 3.55162362140441
7610 3.55188878445687
7611 3.55215370112872
7612 3.55241837177118
7613 3.5526827968037
7614 3.55294697591525
7615 3.5532109098615
7616 3.55347459878777
7617 3.55373804259147
7618 3.55400124172816
7619 3.55426419651952
7620 3.55452690721779
7621 3.55478937387946
7622 3.55505159671678
7623 3.55531357579562
7624 3.55557531175194
7625 3.55583680447775
7626 3.5560980546388
7627 3.55635906193846
7628 3.5566198269145
7629 3.55688034972585
7630 3.55714063063855
7631 3.55740066979807
7632 3.55766046759606
7633 3.55792002416555
7634 3.55817933936232
7635 3.55843841381278
7636 3.55869724761065
7637 3.55895584106155
7638 3.55921419437093
7639 3.5594723077703
7640 3.5597301812487
7641 3.55998781524225
7642 3.56024521007013
7643 3.56050236574267
7644 3.56075928267204
7645 3.5610159607934
7646 3.56127240058088
7647 3.56152860238329
7648 3.56178456624766
7649 3.56204029203611
7650 3.56229578054144
7651 3.56255103168114
7652 3.56280604588285
7653 3.56306082300879
7654 3.56331536374038
7655 3.56356966802125
7656 3.5638237363958
7657 3.56407756832526
7658 3.56433116467936
7659 3.56458452541115
7660 3.56483765094811
7661 3.56509054129521
7662 3.5653431968731
7663 3.56559561789615
7664 3.5658478044631
7665 3.56609975667829
7666 3.56635147501884
7667 3.56660295959679
7668 3.56685421065552
7669 3.56710522812216
7670 3.56735601268387
7671 3.56760656409567
7672 3.56785688290823
7673 3.56810696922894
7674 3.56835682327975
7675 3.56860644512357
7676 3.56885583505518
7677 3.56910499347125
7678 3.56935392060019
7679 3.5696026165012
7680 3.56985108141327
7681 3.57009931542914
7682 3.5703473190827
7683 3.57059509225167
7684 3.57084263549447
7685 3.57108994866094
7686 3.57133703200859
7687 3.57158388592553
7688 3.57183051047959
7689 3.57207690595647
7690 3.57232307266769
7691 3.57256901073119
7692 3.57281472014242
7693 3.57306020147316
7694 3.57330545452153
7695 3.57355047985996
7696 3.57379527769347
7697 3.57403984801668
7698 3.5742841914239
7699 3.57452830765357
7700 3.57477219712697
7701 3.57501585996093
7702 3.57525929646384
7703 3.57550250682753
7704 3.57574549132978
7705 3.57598824994243
7706 3.57623078312182
7707 3.57647309094473
7708 3.5767151737915
7709 3.57695703186575
7710 3.57719866501218
7711 3.57744007376751
7712 3.57768125820806
7713 3.57792221856461
7714 3.57816295513503
7715 3.57840346791483
7716 3.57864375745596
7717 3.57888382359294
7718 3.57912366686579
7719 3.57936328721346
7720 3.5796026849471
7721 3.57984186015948
7722 3.58008081334298
7723 3.58031954447592
7724 3.58055805378666
7725 3.58079634140166
7726 3.58103440769056
7727 3.58127225275042
7728 3.58150987680598
7729 3.58174728024461
7730 3.58198446314257
7731 3.58222142545887
7732 3.58245816761126
7733 3.58269468989465
7734 3.58293099230867
7735 3.58316707527566
7736 3.58340293865207
7737 3.58363858295268
7738 3.5838740084748
7739 3.58410921523599
7740 3.58434420337735
7741 3.58457897299469
7742 3.58481352479654
7743 3.58504785840028
7744 3.58528197439171
7745 3.58551587291243
7746 3.58574955419878
7747 3.58598301818069
7748 3.5862162650837
7749 3.58644929550266
7750 3.58668210921135
7751 3.58691470644337
7752 3.58714708779484
7753 3.58737925310119
7754 3.58761120281575
7755 3.58784293698724
7756 3.5880744556126
7757 3.58830575929668
7758 3.58853684783096
7759 3.58876772167636
7760 3.58899838099328
7761 3.58922882587944
7762 3.5894590566853
7763 3.58968907343641
7764 3.58991887657259
7765 3.59014846603179
7766 3.59037784208712
7767 3.59060700512212
7768 3.5908359550049
7769 3.59106469235217
7770 3.59129321709784
7771 3.59152152936859
7772 3.59174962932455
7773 3.59197751742706
7774 3.59220519365613
7775 3.59243265842545
7776 3.59265991169423
7777 3.59288695354611
7778 3.59311378471471
7779 3.59334013428457
7780 3.59356627463439
7781 3.59379220571226
7782 3.59401792794199
7783 3.59424344144312
7784 3.5944687464694
7785 3.59469384302168
7786 3.59491873162951
7787 3.59514341191063
7788 3.59536788456281
7789 3.59559214964388
7790 3.59581620714971
7791 3.59604005789179
7792 3.59626370137219
7793 3.59648713802346
7794 3.5967103682585
7795 3.59693339180872
7796 3.59715620927307
7797 3.59737882048271
7798 3.59760122593423
7799 3.5978234258793
7800 3.59804542027324
7801 3.59826720943777
7802 3.59848879358078
7803 3.59871017288386
7804 3.598931347614
7805 3.5991523176348
7806 3.59937308352274
7807 3.59959364538401
7808 3.59981400335521
7809 3.60003415776593
7810 3.60025410895023
7811 3.60047385714245
7812 3.60069340246932
7813 3.60091274514386
7814 3.60113188537077
7815 3.6013508233575
7816 3.60156955922458
7817 3.60178809316853
7818 3.60200642535879
7819 3.60222455605331
7820 3.60244248545602
7821 3.60266021363974
7822 3.60287774104182
7823 3.60309506741843
7824 3.60331219318621
7825 3.60352911845868
7826 3.60374584366947
7827 3.60396236874793
7828 3.60417869388436
7829 3.604394819445
7830 3.60461074560533
7831 3.60482647246306
7832 3.60504199999777
7833 3.60525732841404
7834 3.60547245836213
7835 3.60568738967338
7836 3.60590212246696
7837 3.60611665698242
7838 3.60633099350987
7839 3.60654513203785
7840 3.606759073174
7841 3.6069728166549
7842 3.60718636285917
7843 3.60739971205524
7844 3.60761286426835
7845 3.60782581968756
7846 3.60803857862422
7847 3.60825114095072
7848 3.60846350717346
7849 3.6086756772322
7850 3.60888765155369
7851 3.60909943016082
7852 3.60931101330078
7853 3.60952240120505
7854 3.60973359369609
7855 3.60994459126764
7856 3.61015539403562
7857 3.61036600228302
7858 3.61057641613469
7859 3.61078663599139
7860 3.61099666139981
7861 3.61120649311399
7862 3.61141613091959
7863 3.61162557523511
7864 3.61183482616645
7865 3.61204388396205
7866 3.61225274891004
7867 3.61246142087384
7868 3.61266990020639
7869 3.61287818707346
7870 3.61308628188173
7871 3.61329418455688
7872 3.61350189527363
7873 3.61370941426984
7874 3.61391674150178
7875 3.61412387740671
7876 3.61433082210826
7877 3.61453757565609
7878 3.61474413816682
7879 3.61495051014936
7880 3.61515669149535
7881 3.61536268247712
7882 3.61556848352832
7883 3.61577409449512
7884 3.61597951570775
7885 3.61618474728819
7886 3.61638978928329
7887 3.61659464211686
7888 3.6167993057839
7889 3.61700378033954
7890 3.6172078855872
7891 3.61741180264876
7892 3.61761553136562
7893 3.61781907211549
7894 3.61802242481507
7895 3.61822558975781
7896 3.61842856734692
7897 3.61863135767663
7898 3.61883396068553
7899 3.61903637652102
7900 3.61923860574236
7901 3.61944064813537
7902 3.61964250407971
7903 3.61984417382022
7904 3.62004565762042
7905 3.62024695526785
7906 3.6204480673174
7907 3.62064899362647
7908 3.6208497347095
7909 3.62105029039202
7910 3.62125066105429
7911 3.62145084656974
7912 3.62165084725657
7913 3.62185066358801
7914 3.62205029529128
7915 3.62224974268117
7916 3.62244900598591
7917 3.62264808543652
7918 3.62284698108181
7919 3.6230456932586
7920 3.62324422199044
7921 3.62344256763343
7922 3.62364073015529
7923 3.62383870955397
7924 3.62403650623329
7925 3.62423412044816
7926 3.62443155201428
7927 3.62462880151954
7928 3.62482586876526
7929 3.62502275426969
7930 3.62521945791316
7931 3.6254159799825
7932 3.62561232089061
7933 3.62580848042861
7934 3.62600445892404
7935 3.62620025642354
7936 3.62639587320024
7937 3.6265913093717
7938 3.62678656505564
7939 3.62698164064094
7940 3.6271765361863
7941 3.62737125182086
7942 3.62756578777278
7943 3.62776014411641
7944 3.62795432083057
7945 3.62814831839243
7946 3.62834213686286
7947 3.62853577655048
7948 3.62872923728882
7949 3.62892251948
7950 3.62911562327398
7951 3.6293085486275
7952 3.62950129613293
7953 3.62969386584535
7954 3.62988625743161
7955 3.63007847165181
7956 3.63027050841752
7957 3.63046236758059
7958 3.63065404988609
7959 3.63084555514717
7960 3.63103688358864
7961 3.63122803522806
7962 3.63141901065376
7963 3.63160980980152
7964 3.63180043283094
7965 3.63199087978164
7966 3.63218115089919
7967 3.63237124630081
7968 3.63256116612717
7969 3.63275091059378
7970 3.63294048014749
7971 3.63312987447423
7972 3.6333190941589
7973 3.63350813919202
7974 3.63369700946301
7975 3.63388570538001
7976 3.63407422712701
7977 3.6342625746556
7978 3.63445074843893
7979 3.63463874863399
7980 3.6348265752957
7981 3.63501422828479
7982 3.63520170802718
7983 3.63538901487601
7984 3.6355761486593
7985 3.6357631097573
7986 3.63594989829658
7987 3.63613651420665
7988 3.6363229578068
7989 3.63650922942578
7990 3.63669532891459
7991 3.63688125651366
7992 3.63706701266177
7993 3.63725259720802
7994 3.63743801042033
7995 3.63762325241265
7996 3.63780832347831
7997 3.63799322356672
7998 3.63817795287485
7999 3.63836251165801
8000 3.63854690016355
8001 3.63873111823683
8002 3.63891516626184
8003 3.63909904433255
8004 3.63928275262053
8005 3.63946629123813
8006 3.63964966066169
8007 3.63983286071405
8008 3.6400158914326
8009 3.64019875331869
8010 3.64038144625548
8011 3.64056397070336
8012 3.64074632644151
8013 3.64092851395491
8014 3.64111053320904
8015 3.64129238452869
8016 3.64147406773824
8017 3.64165558321484
8018 3.6418369311798
8019 3.64201811170917
8020 3.64219912476706
8021 3.64237997081442
8022 3.64256064978439
8023 3.64274116194573
8024 3.64292150731915
8025 3.64310168619598
8026 3.64328169875426
8027 3.64346154506991
8028 3.64364122542266
8029 3.64382073982307
8030 3.64400008842502
8031 3.64417927164237
8032 3.6443582890716
8033 3.64453714136061
8034 3.6447158285561
8035 3.64489435052808
8036 3.64507270773754
8037 3.64525090015312
8038 3.64542892812132
8039 3.64560679174054
8040 3.64578449112265
8041 3.64596202640485
8042 3.64613939782205
8043 3.64631660539584
8044 3.64649364929568
8045 3.64667052976059
8046 3.64684724700194
8047 3.64702380095667
8048 3.64720019184703
8049 3.64737641985209
8050 3.64755248501157
8051 3.64772838759112
8052 3.64790412793237
8053 3.64807970594485
8054 3.64825512173156
8055 3.64843037560608
8056 3.64860546762889
8057 3.64878039790606
8058 3.6489551665711
8059 3.64912977397131
8060 3.64930421987989
8061 3.64947850484556
8062 3.64965262902417
8063 3.64982659226806
8064 3.65000039493368
8065 3.6501740369736
8066 3.65034751872324
8067 3.65052084027551
8068 3.65069400191893
8069 3.65086700353157
8070 3.65103984541728
8071 3.65121252751362
8072 3.65138505030431
8073 3.65155741375669
8074 3.65172961784487
8075 3.651901663131
8076 3.65207354953918
8077 3.65224527714625
8078 3.65241684629861
8079 3.6525882570544
8080 3.65275950934916
8081 3.65293060343111
8082 3.65310153952542
8083 3.65327231769735
8084 3.65344293823374
8085 3.65361340104975
8086 3.65378370668087
8087 3.65395385493158
8088 3.65412384599047
8089 3.65429368008567
8090 3.65446335727153
8091 3.65463287785337
8092 3.65480224184872
8093 3.6549714494187
8094 3.65514050071081
8095 3.65530939611489
8096 3.65547813529763
8097 3.65564671862523
8098 3.65581514624249
8099 3.6559834183096
8100 3.65615153504345
8101 3.65631949657162
8102 3.65648730275374
8103 3.6566549541061
8104 3.65682245059433
8105 3.65698979236744
8106 3.65715697965046
8107 3.65732401244101
8108 3.65749089103865
8109 3.65765761544393
8110 3.65782418590406
8111 3.65799057250335
8112 3.65815680558118
8113 3.65832288511378
8114 3.65848881154976
8115 3.65865458500355
8116 3.65882020526401
8117 3.65898567273235
8118 3.65915098761109
8119 3.65931615010383
8120 3.65948116035179
8121 3.65964601825226
8122 3.65981072419849
8123 3.65997527824311
8124 3.66013968041187
8125 3.66030393062899
8126 3.66046802938428
8127 3.66063197683863
8128 3.66079577309704
8129 3.66095941805792
8130 3.66112291207797
8131 3.66128625544218
8132 3.66144944811441
8133 3.66161249017403
8134 3.66177538170445
8135 3.66193812327039
8136 3.66210071456629
8137 3.66226315575456
8138 3.66242544734119
8139 3.66258758911733
8140 3.66274958117276
8141 3.66291142404193
8142 3.66307311748582
8143 3.66323466181244
8144 3.66339605697667
8145 3.66355730345479
8146 3.66371840107414
8147 3.66387935006758
8148 3.66404015068691
8149 3.66420080293678
8150 3.66436130710049
8151 3.66452166329552
8152 3.6646818714449
8153 3.66484193180976
8154 3.66500184463996
8155 3.66516161000396
8156 3.66532122794668
8157 3.66548069865151
8158 3.66564002245874
8159 3.66579919922882
8160 3.66595822914554
8161 3.66611711233007
8162 3.66627584885263
8163 3.66643443921838
8164 3.66659288324611
8165 3.66675118132347
8166 3.66690933313222
8167 3.66706733934873
8168 3.66722519972308
8169 3.66738291426738
8170 3.6675404835699
8171 3.66769790747142
8172 3.66785518629159
8173 3.66801231997345
8174 3.66816930881145
8175 3.66832615284366
8176 3.66848285212138
8177 3.66863940691419
8178 3.66879581744486
8179 3.66895208359155
8180 3.66910820557876
8181 3.66926418386496
8182 3.66942001812103
8183 3.6695757088396
8184 3.66973125573101
8185 3.66988665907911
8186 3.67004191928026
8187 3.67019703614379
8188 3.6703520099755
8189 3.670506841025
8190 3.67066152902925
8191 3.6708160745672
8192 3.67097047754544
8193 3.67112473825806
8194 3.67127885657571
8195 3.67143283278463
8196 3.67158666688759
8197 3.67174035909105
8198 3.67189390973353
8199 3.67204731855858
8200 3.67220058601683
8201 3.67235371190699
8202 3.67250669661537
8203 3.67265954034775
8204 3.67281224311516
8205 3.67296480516032
8206 3.67311722629639
8207 3.6732695068717
8208 3.67342164711415
8209 3.67357364712898
8210 3.67372550691205
8211 3.67387722660675
8212 3.6740288063049
8213 3.67418024626171
8214 3.67433154646532
8215 3.67448270730001
8216 3.67463372877797
8217 3.67478461086295
8218 3.67493535410258
8219 3.6750859580095
8220 3.67523642311246
8221 3.67538674937734
8222 3.67553693705651
8223 3.6756869862577
8224 3.67583689688729
8225 3.67598666942486
8226 3.67613630378704
8227 3.67628580041443
8228 3.67643515911965
8229 3.67658437995821
8230 3.67673346337185
8231 3.67688240913401
8232 3.67703121752131
8233 3.67717988894522
8234 3.67732842308678
8235 3.67747682026841
8236 3.67762508053004
8237 3.67777320404144
8238 3.67792119077431
8239 3.67806904128329
8240 3.67821675521768
8241 3.6783643332668
8242 3.6785117751817
8243 3.67865908107728
8244 3.67880625108383
8245 3.67895328551484
8246 3.67910018420974
8247 3.67924694741582
8248 3.6793935753055
8249 3.67954006807178
8250 3.67968642579245
8251 3.67983264848432
8252 3.67997873631077
8253 3.68012468935558
8254 3.68027050784419
8255 3.68041611624995
8256 3.68056159046356
8257 3.68070693048468
8258 3.68085213691259
8259 3.6809972093912
8260 3.68114214820253
8261 3.6812869536395
8262 3.68143162544035
8263 3.68157616416777
8264 3.68172056970871
8265 3.68186484218111
8266 3.68200898168184
8267 3.68215298823708
8268 3.68229686212049
8269 3.68244060354673
8270 3.68258421252486
8271 3.68272768939125
8272 3.68287103389549
8273 3.68301424633455
8274 3.6831573268599
8275 3.68330027561079
8276 3.68344309263119
8277 3.68358577804609
8278 3.6837283318517
8279 3.68387075454941
8280 3.68401304598036
8281 3.68415520622653
8282 3.68429723559027
8283 3.68443913413073
8284 3.68458090193784
8285 3.68472253936321
8286 3.68486404617071
8287 3.68500542258644
8288 3.68514666888614
8289 3.68528778487799
8290 3.68542877083876
8291 3.68556962684815
8292 3.68571035321036
8293 3.68585094998825
8294 3.68599141712609
8295 3.68613175487849
8296 3.68627196343412
8297 3.68641204289023
8298 3.68655199321558
8299 3.68669181461809
8300 3.68683150722789
8301 3.68697107131492
8302 3.68711050669582
8303 3.68724981366524
8304 3.68738899232772
8305 3.68752804271105
8306 3.68766696487681
8307 3.68780575927426
8308 3.68794442563245
8309 3.68808296443287
8310 3.68822137584332
8311 3.68835965951385
8312 3.68849781569448
8313 3.68863584470072
8314 3.68877374677081
8315 3.68891152164928
8316 3.68904916944935
8317 3.68918669066031
8318 3.68932408504535
8319 3.68946135297442
8320 3.68959849419817
8321 3.68973550936302
8322 3.68987239809664
8323 3.69000916080238
8324 3.69014579780206
8325 3.69028230918983
8326 3.69041869444932
8327 3.69055495413219
8328 3.69069108846349
8329 3.69082709727411
8330 3.69096298074948
8331 3.6910987389995
8332 3.69123437233523
8333 3.6913698809291
8334 3.69150526429919
8335 3.69164052314958
8336 3.69177565737947
8337 3.69191066720645
8338 3.69204555281494
8339 3.69218031402092
8340 3.69231495098508
8341 3.69244946403245
8342 3.69258385344678
8343 3.69271811896448
8344 3.69285226093496
8345 3.69298627936599
8346 3.69312017431144
8347 3.693253946152
8348 3.69338759456147
8349 3.69352111993729
8350 3.69365452218383
8351 3.69378780184765
8352 3.69392095857331
8353 3.69405399266369
8354 3.69418690404104
8355 3.69431969325166
8356 3.6944523602372
8357 3.69458490491068
8358 3.69471732773342
8359 3.69484962836631
8360 3.6949818072279
8361 3.69511386428771
8362 3.69524579981154
8363 3.69537761389879
8364 3.69550930645923
8365 3.69564087801852
8366 3.69577232818767
8367 3.69590365753036
8368 3.69603486574471
8369 3.69616595318103
8370 3.69629691980396
8371 3.69642776592886
8372 3.69655849155125
8373 3.69668909694917
8374 3.69681958209541
8375 3.69694994685666
8376 3.69708019188072
8377 3.69721031686965
8378 3.69734032183946
8379 3.69747020718012
8380 3.69759997316447
8381 3.69772961949101
8382 3.69785914650546
8383 3.69798855422546
8384 3.69811784283749
8385 3.69824701244302
8386 3.69837606304747
8387 3.69850499482123
8388 3.69863380764503
8389 3.69876250182295
8390 3.69889107767414
8391 3.69901953522792
8392 3.69914787431557
8393 3.69927609554304
8394 3.699404198615
8395 3.69953218374736
8396 3.69966005107463
8397 3.69978780054955
8398 3.69991543257462
8399 3.70004294693772
8400 3.70017034393375
8401 3.70029762353071
8402 3.7004247861281
8403 3.70055183143962
8404 3.7006787600476
8405 3.70080557170689
8406 3.70093226646064
8407 3.70105884476776
8408 3.70118530643367
8409 3.70131165183405
8410 3.70143788060805
8411 3.70156399318708
8412 3.70168998986174
8413 3.70181587074553
8414 3.70194163545218
8415 3.70206728446173
8416 3.7021928177886
8417 3.70231823547573
8418 3.70244353782892
8419 3.70256872484052
8420 3.70269379647829
8421 3.70281875313564
8422 3.70294359445057
8423 3.70306832110473
8424 3.70319293290329
8425 3.70331743022168
8426 3.70344181264184
8427 3.70356608057188
8428 3.70369023440278
8429 3.70381427377221
8430 3.70393819891404
8431 3.70406200990464
8432 3.70418570703864
8433 3.70430929027891
8434 3.7044327599993
8435 3.70455611589958
8436 3.70467935830399
8437 3.70480248717418
8438 3.70492550280238
8439 3.70504840514086
8440 3.70517119422825
8441 3.70529387040661
8442 3.70541643359912
8443 3.70553888417954
8444 3.70566122207357
8445 3.70578344705866
8446 3.7059055599219
8447 3.70602755995334
8448 3.70614944808267
8449 3.70627122401275
8450 3.70639288778891
8451 3.70651443959447
8452 3.70663587954715
8453 3.70675720776521
8454 3.70687842436391
8455 3.70699952923823
8456 3.70712052259555
8457 3.70724140472897
8458 3.70736217576419
8459 3.70748283542744
8460 3.70760338411127
8461 3.70772382183025
8462 3.70784414886061
8463 3.70796436511661
8464 3.70808447074161
8465 3.70820446590732
8466 3.70832435054139
8467 3.70844412482523
8468 3.70856378884259
8469 3.7086833428884
8470 3.70880278685627
8471 3.70892212109531
8472 3.7090413452395
8473 3.70916045986096
8474 3.70927946476466
8475 3.70939836032397
8476 3.70951714623752
8477 3.70963582304448
8478 3.70975439039577
8479 3.70987284889815
8480 3.70999119827632
8481 3.71010943891297
8482 3.71022757065852
8483 3.71034559368482
8484 3.71046350800593
8485 3.71058131374848
8486 3.71069901131044
8487 3.71081660047027
8488 3.71093408173354
8489 3.71105145471968
8490 3.71116871976382
8491 3.7112858767943
8492 3.71140292607611
8493 3.71151986752079
8494 3.71163670181401
8495 3.7117534283451
8496 3.71187004750198
8497 3.71198655926062
8498 3.71210296376108
8499 3.71221926123522
8500 3.71233545174338
8501 3.71245153547799
8502 3.71256751223547
8503 3.71268338256629
8504 3.71279914610767
8505 3.71291480329422
8506 3.71303035397495
8507 3.71314579836016
8508 3.71326113671004
8509 3.7133763688249
8510 3.71349149497059
8511 3.7136065147552
8512 3.71372142918761
8513 3.71383623785713
8514 3.71395094103426
8515 3.71406553863286
8516 3.714180030889
8517 3.71429441780646
8518 3.71440869923069
8519 3.71452287574799
8520 3.71463694719093
8521 3.71475228045789
8522 3.71486750739609
8523 3.71498262835981
8524 3.71509764327243
8525 3.71521255244974
8526 3.71532735543729
8527 3.71544205274014
8528 3.7155566446655
8529 3.71567113068934
8530 3.71578551129701
8531 3.71589978653561
8532 3.71601395644236
8533 3.71612802127981
8534 3.71624198097711
8535 3.71635583572182
8536 3.71646958590959
;
[thick, color0, dashed, forget plot]
table
6828 3.50108395555147
6829 3.50916956062783
6830 3.5170296898613
6831 3.52468227906108
6832 3.53214297632037
6833 3.53942553230332
6834 3.54654210737221
6835 3.55350351869761
6836 3.56031943989326
6837 3.56699856300315
6838 3.57354873366467
6839 3.57997706234015
6840 3.58629001691566
6841 3.59249350227778
6842 3.59859292631788
6843 3.60459325694464
6844 3.61049907028849
6845 3.61631459357127
6846 3.62204374117142
6847 3.62769014574761
6848 3.63325718649988
6849 3.63874801303276
6850 3.64416556654353
6851 3.64951259945085
6852 3.6547916912755
6853 3.66000526360334
6854 3.66515559380821
6855 3.67024482643886
6856 3.67527498380745
6857 3.68024797653647
6858 3.68516561030833
6859 3.69002959560138
6860 3.69484155363188
6861 3.69960302247116
6862 3.70431546398917
6863 3.70898026825778
6864 3.71359875872909
6865 3.71817219648802
6866 3.72270178479348
6867 3.72718867158876
6868 3.73163395472457
6869 3.73603868304167
6870 3.74040386120075
6871 3.74473045145153
6872 3.74901937614569
6873 3.75327151937598
6874 3.75748773068223
6875 3.7616688251596
6876 3.7658155873285
6877 3.76992877102773
6878 3.77400910214087
6879 3.77805727957146
6880 3.78207397650502
6881 3.78605984211619
6882 3.79001550295821
6883 3.79394156315239
6884 3.79783860643821
6885 3.80170719630109
6886 3.80554787805904
6887 3.80936117852252
6888 3.81314760746522
6889 3.81690765866544
6890 3.82064180963293
6891 3.8243505233574
6892 3.82803424846949
6893 3.83169341996863
6894 3.83532845964858
6895 3.83893977693979
6896 3.84252776919079
6897 3.84609282190936
6898 3.84963531009919
6899 3.85315559814557
6900 3.85665403953006
6901 3.86013097849611
6902 3.86358674993624
6903 3.86702167945745
6904 3.87043608392677
6905 3.8738302721919
6906 3.87720454475759
6907 3.8805591943541
6908 3.88389450643679
6909 3.88721075869087
6910 3.89050822252666
6911 3.89378716221108
6912 3.89704783578684
6913 3.9002904946873
6914 3.9035153845874
6915 3.90672274554132
6916 3.90991281127866
6917 3.91308581072073
6918 3.91624196706316
6919 3.91938149854872
6920 3.92250461846349
6921 3.92561153570417
6922 3.92870245386191
6923 3.93177757248595
6924 3.93483708654557
6925 3.93788118694687
6926 3.94091006030229
6927 3.94392388974455
6928 3.94692285380497
6929 3.9499071279676
6930 3.95287688410111
6931 3.95583229012727
6932 3.95877351042879
6933 3.96170070659097
6934 3.96461403675169
6935 3.96751365594529
6936 3.97039971629237
6937 3.9732723666719
6938 3.97613175313046
6939 3.97897801924067
6940 3.9818113054198
6941 3.98463174973618
6942 3.98743948778655
6943 3.99023465221736
6944 3.99301737348134
6945 3.9957877799262
6946 3.99854599698106
6947 4.00129214809207
6948 4.00402635483684
6949 4.00674873645537
6950 4.00945940963213
6951 4.01215848951122
6952 4.01484608911213
6953 4.01752232002992
6954 4.02018729095143
6955 4.02284110900041
6956 4.02548388030359
6957 4.02811570846423
6958 4.03073669553627
6959 4.03334694194616
6960 4.0359465464555
6961 4.03853560605142
6962 4.04111421673885
6963 4.04368247227528
6964 4.04624046534503
6965 4.0487882871071
6966 4.05132602707108
6967 4.05385377359017
6968 4.05637161381489
6969 4.05887963302397
6970 4.06137791582106
6971 4.06386654523267
6972 4.06634560296593
6973 4.06881516960968
6974 4.07127532448874
6975 4.0737261457961
6976 4.07616771069216
6977 4.07860009503933
6978 4.0810233736144
6979 4.08343762028903
6980 4.08584290769007
6981 4.08823930754007
6982 4.09062689034622
6983 4.09300572586068
6984 4.09537588323025
6985 4.09773742961601
6986 4.10009043204791
6987 4.10243495666826
6988 4.10477106819096
6989 4.10709883089736
6990 4.10941830800271
6991 4.11172956180668
6992 4.11403265410739
6993 4.11632764573221
6994 4.11861459673371
6995 4.12089356611407
6996 4.12316461222837
6997 4.12542779295848
6998 4.12768316507262
6999 4.12993078518051
7000 4.13217070842762
7001 4.13440298996368
7002 4.13662768358797
7003 4.13884484295281
7004 4.14105452066345
7005 4.143256768934
7006 4.14545163930932
7007 4.14763918247877
7008 4.14981944879245
7009 4.15199248813229
7010 4.15415834918831
7011 4.15631708079339
7012 4.15846873066037
7013 4.16061334617633
7014 4.16275097406005
7015 4.164881660393
7016 4.16700545113979
7017 4.16912239110309
7018 4.17123252530064
7019 4.17333589783849
7020 4.17543255195167
7021 4.17752253129227
7022 4.17960587806768
7023 4.18168263473517
7024 4.18375284289137
7025 4.18581654389042
7026 4.18787377827507
7027 4.18992458647802
7028 4.19196900837692
7029 4.19400708341081
7030 4.19603885066905
7031 4.19806434883721
7032 4.20008361603924
7033 4.20209668991939
7034 4.2041036081479
7035 4.20610440744062
7036 4.20809912469693
7037 4.2100877959777
7038 4.21207045695925
7039 4.21404714355243
7040 4.21601789045864
7041 4.21798273260352
7042 4.21994170433689
7043 4.22189483955311
7044 4.22384217222983
7045 4.22578373569108
7046 4.22771956273505
7047 4.22964968623223
7048 4.23157413849599
7049 4.23349295131931
7050 4.2354061567532
7051 4.23731378631518
7052 4.23921587094796
7053 4.24111244133324
7054 4.24300352805306
7055 4.24488916140978
7056 4.24676937140973
7057 4.24864418773061
7058 4.25051363963258
7059 4.25237775620406
7060 4.25423656588539
7061 4.25609009760864
7062 4.25793837954445
7063 4.25978143979917
7064 4.26161930598944
7065 4.26345200574537
7066 4.26527956625727
7067 4.26710201437422
7068 4.26891937695404
7069 4.27073168021536
7070 4.27253895084652
7071 4.27434121480699
7072 4.27613849789565
7073 4.27793082559146
7074 4.27971822319235
7075 4.28150071581134
7076 4.2832783284198
7077 4.2850510855343
7078 4.28681901169579
7079 4.2885821311884
7080 4.29034046799328
7081 4.29209404608458
7082 4.29384288907845
7083 4.29558702025172
7084 4.29732646271007
7085 4.29906123951111
7086 4.30079137382123
7087 4.30251688794838
7088 4.30423780453206
7089 4.30595414556953
7090 4.30766593327631
7091 4.30937318947335
7092 4.3110759358471
7093 4.3127741943619
7094 4.3144679855837
7095 4.31615733122621
7096 4.31784225222955
7097 4.3195227694374
7098 4.32119890302946
7099 4.32287067422581
7100 4.32453810308917
7101 4.32620120966321
7102 4.32786001425119
7103 4.32951453630454
7104 4.33116479558529
7105 4.33281081166878
7106 4.33445260024409
7107 4.33609018419578
7108 4.33772358261914
7109 4.33935281468897
7110 4.34097789923154
7111 4.34259885453618
7112 4.34421569931975
7113 4.34582845195472
7114 4.3474371305118
7115 4.34904175323666
7116 4.35064233804099
7117 4.35223890286643
7118 4.35383146504971
7119 4.35542004219431
7120 4.35700465191492
7121 4.3585853115572
7122 4.36016203783826
7123 4.36173485532185
7124 4.36330377361978
7125 4.36486880961337
7126 4.36642997978721
7127 4.36798729937008
7128 4.3695407862051
7129 4.37109045636588
7130 4.37263632608969
7131 4.37417841658217
7132 4.37571673875839
7133 4.37725130834898
7134 4.37878214113425
7135 4.38030925287233
7136 4.38183265889917
7137 4.38335237449343
7138 4.38486841519246
7139 4.38638079584587
7140 4.38788953188787
7141 4.38939463809779
7142 4.39089612925007
7143 4.39239402030814
7144 4.39388832562568
7145 4.39537906026984
7146 4.3968662382718
7147 4.39834987416715
7148 4.3998299823437
7149 4.40130657668749
7150 4.4027796712339
7151 4.40424928020073
7152 4.40571541712818
7153 4.40717809594148
7154 4.40863733064842
7155 4.41009313464926
7156 4.41154552148461
7157 4.41299450419194
7158 4.41444009655048
7159 4.41588231187266
7160 4.4173211631205
7161 4.41875666326262
7162 4.42018882569313
7163 4.42161766289927
7164 4.42304318805642
7165 4.42446541367031
7166 4.42588435267938
7167 4.42730001737873
7168 4.42871242020176
7169 4.43012157386588
7170 4.43152749061097
7171 4.43293018246735
7172 4.4343296619821
7173 4.43572594105479
7174 4.43711903168495
7175 4.43850894604365
7176 4.43989569574577
7177 4.4412792927087
7178 4.44265974859888
7179 4.44403707504075
7180 4.44541128384169
7181 4.44678238644652
7182 4.44815039404094
7183 4.44951531839761
7184 4.45087717043922
7185 4.45223596159334
7186 4.45359170334788
7187 4.45494440642194
7188 4.45629408191951
7189 4.45764074069643
7190 4.45898439404638
7191 4.46032505263721
7192 4.46166272736177
7193 4.46299742869544
7194 4.46432916741235
7195 4.46565795398466
7196 4.46698379900099
7197 4.4683067127832
7198 4.46962670616079
7199 4.47094378923366
7200 4.47225797210573
7201 4.4735692649712
7202 4.47487767817745
7203 4.47618322180563
7204 4.47748590611359
7205 4.47878574068172
7206 4.48008273578827
7207 4.48137690106495
7208 4.48266824643285
7209 4.48395678160379
7210 4.48524251626346
7211 4.48652546034539
7212 4.4878056231859
7213 4.48908301401985
7214 4.49035764288903
7215 4.4916295190059
7216 4.49289865165936
7217 4.49416505020012
7218 4.4954287239895
7219 4.49668968244249
7220 4.49794793794758
7221 4.4992034963174
7222 4.50045636634603
7223 4.5017065573229
7224 4.50295407826695
7225 4.50419893783513
7226 4.50544114492403
7227 4.50668070883939
7228 4.50791763785902
7229 4.50915194081072
7230 4.51038362648644
7231 4.51161270342424
7232 4.51283918031716
7233 4.51406306588621
7234 4.51528436815912
7235 4.51650309603251
7236 4.5177192577367
7237 4.5189328615715
7238 4.52014391617413
7239 4.52135242954881
7240 4.52255841019698
7241 4.52376186616647
7242 4.52496280533555
7243 4.52616123642394
7244 4.52735716708487
7245 4.52855060551849
7246 4.52974155964303
7247 4.53093003719265
7248 4.53211604625539
7249 4.53329959483979
7250 4.5344806903586
7251 4.53565934111785
7252 4.53683555449171
7253 4.53800933825641
7254 4.53918070021625
7255 4.54034964800506
7256 4.54151618922468
7257 4.5426803310597
7258 4.54384208145872
7259 4.54500144765119
7260 4.54615842356714
7261 4.54731302994352
7262 4.54846527461635
7263 4.54961516478399
7264 4.55076270772086
7265 4.55190791092152
7266 4.55305078117632
7267 4.55419132596608
7268 4.55532955232695
7269 4.556465467492
7270 4.55759907826292
7271 4.55873039199805
7272 4.55985941565711
7273 4.56098615606015
7274 4.56211062037488
7275 4.56323281532022
7276 4.56435274785274
7277 4.56547042478199
7278 4.56658585315952
7279 4.56769903961554
7280 4.56880999070474
7281 4.56991871330575
7282 4.57102521389293
7283 4.57212949961668
7284 4.57323157660051
7285 4.57433145143561
7286 4.57542913098973
7287 4.57652462192441
7288 4.57761793032072
7289 4.57870906285173
7290 4.57979802592271
7291 4.58088482595602
7292 4.5819694691441
7293 4.58305196204789
7294 4.58413231103661
7295 4.58521052238498
7296 4.58628660199287
7297 4.58736055655866
7298 4.58843239209269
7299 4.5895021146166
7300 4.59056973057443
7301 4.59163524585386
7302 4.59269866657642
7303 4.59375999910838
7304 4.59481924917673
7305 4.59587642289994
7306 4.59693152626036
7307 4.59798456512253
7308 4.5990355453398
7309 4.60008447320927
7310 4.60113135409513
7311 4.60217619426105
7312 4.60321899919544
7313 4.60425977496995
7314 4.60529852719125
7315 4.60633526159532
7316 4.6073699841666
7317 4.60840270055093
7318 4.60943341095588
7319 4.61046212638803
7320 4.61148885247705
7321 4.61251359489003
7322 4.61353635908158
7323 4.614557150771
7324 4.61557597521378
7325 4.61659283823398
7326 4.61760774504015
7327 4.61862070142415
7328 4.61963171249431
7329 4.6206407836361
7330 4.62164792055978
7331 4.6226531283689
7332 4.6236564124254
7333 4.62465777820866
7334 4.62565723089227
7335 4.62665477578217
7336 4.62765041804165
7337 4.62864416299032
7338 4.62963601588195
7339 4.63062598154788
7340 4.63161406560712
7341 4.63260027286625
7342 4.63358460868421
7343 4.63456707816876
7344 4.63554768623564
7345 4.63652643795807
7346 4.63750333839371
7347 4.63847839254323
7348 4.63945160542964
7349 4.64042298219336
7350 4.6413925273376
7351 4.64236024614187
7352 4.64332614342341
7353 4.64429022401548
7354 4.64525249294621
7355 4.64621295503886
7356 4.64717161498335
7357 4.64812847780242
7358 4.64908354792275
7359 4.65003683041253
7360 4.6509883300888
7361 4.65193805144762
7362 4.65288599926797
7363 4.65383217815859
7364 4.65477659297885
7365 4.6557192482145
7366 4.65666014878931
7367 4.65759929906577
7368 4.65853694422335
7369 4.65947284800264
7370 4.66040701481311
7371 4.66133944960634
7372 4.6622701566394
7373 4.66319914080209
7374 4.66412640648379
7375 4.66505195807527
7376 4.66597580006055
7377 4.6668979372469
7378 4.66781837382009
7379 4.66873711409979
7380 4.66965416237698
7381 4.67056952332292
7382 4.6714832010921
7383 4.67239520008793
7384 4.67330552464813
7385 4.67421417911163
7386 4.67512116755555
7387 4.67602649496682
7388 4.67693016507409
7389 4.67783218215739
7390 4.67873255025687
7391 4.67963127378763
7392 4.6805283564808
7393 4.68142380327887
7394 4.6823176175699
7395 4.68320980377248
7396 4.68410036641376
7397 4.68498930894911
7398 4.6858766358888
7399 4.68676235130822
7400 4.68764645919467
7401 4.68852896338602
7402 4.68940986817401
7403 4.69028917774652
7404 4.691166895792
7405 4.6920430264014
7406 4.69291757351618
7407 4.69379054122749
7408 4.6946619334349
7409 4.69553175409186
7410 4.69640000709067
7411 4.697266696183
7412 4.69813182538888
7413 4.69899539835223
7414 4.69985741912873
7415 4.70071789148072
7416 4.70157681957311
7417 4.70243420691304
7418 4.7032900575442
7419 4.70414437482152
7420 4.70499716300269
7421 4.70584842537634
7422 4.7066981660863
7423 4.70754638867268
7424 4.70839309698813
7425 4.70923829467005
7426 4.71008198535992
7427 4.71092417119433
7428 4.71176485734182
7429 4.71260404760536
7430 4.71344174578153
7431 4.71427795514668
7432 4.71511267972345
7433 4.71594592264485
7434 4.71677768761263
7435 4.71760797840302
7436 4.71843679840951
7437 4.71926415114597
7438 4.72009003787368
7439 4.72091446321957
7440 4.72173743195201
7441 4.72255894762367
7442 4.72337901395474
7443 4.72419763389554
7444 4.72501481118092
7445 4.72583054944253
7446 4.7266448519519
7447 4.72745772211257
7448 4.72826916334035
7449 4.72907917936214
7450 4.72988777296668
7451 4.7306949479921
7452 4.73150070779573
7453 4.7323050556406
7454 4.73310799476368
7455 4.73390952860089
7456 4.73470966053564
7457 4.73550839380441
7458 4.73630573164255
7459 4.73710167738506
7460 4.73789623452415
7461 4.73868940606791
7462 4.73948119542132
7463 4.74027160559369
7464 4.74106064024141
7465 4.74184830227381
7466 4.74263459518888
7467 4.74341952167089
7468 4.74420308555378
7469 4.74498528941028
7470 4.74576613660153
7471 4.74654563045893
7472 4.74732377395265
7473 4.74810057043388
7474 4.74887602290593
7475 4.74965013445909
7476 4.75042290830507
7477 4.75119434750461
7478 4.7519644550174
7479 4.75273323388471
7480 4.75350068750622
7481 4.75426681870711
7482 4.75503163051935
7483 4.75579512591172
7484 4.75655730820202
7485 4.75731818007167
7486 4.7580777448895
7487 4.75883600550867
7488 4.75959296477342
7489 4.76034862587757
7490 4.76110299188073
7491 4.76185606571042
7492 4.7626078503931
7493 4.76335834876946
7494 4.76410756363953
7495 4.76485549816881
7496 4.76560215528437
7497 4.76634753786225
7498 4.76709164856456
7499 4.7678344902809
7500 4.76857606619281
7501 4.76931637867815
7502 4.77005543097618
7503 4.77079322591997
7504 4.77152976623691
7505 4.7722650549397
7506 4.77299909452224
7507 4.77373188798562
7508 4.77446343802098
7509 4.77519374758748
7510 4.77592281945064
7511 4.77665065649461
7512 4.77737726119579
7513 4.77810263639269
7514 4.77882678494298
7515 4.7795497104957
7516 4.78027141455073
7517 4.78099190022562
7518 4.78171117033848
7519 4.78242922702748
7520 4.78314607298134
7521 4.78386171149854
7522 4.78457614488124
7523 4.78528937573088
7524 4.7860014070518
7525 4.7867122411108
7526 4.78742188071483
7527 4.78813032830963
7528 4.7888375867869
7529 4.78954365902225
7530 4.79024854707877
7531 4.79095225363993
7532 4.79165478131949
7533 4.79235613287178
7534 4.79305634091339
7535 4.79375537781367
7536 4.79445324613035
7537 4.79514994841924
7538 4.79584548728377
7539 4.79653986503074
7540 4.7972330845576
7541 4.79792514798889
7542 4.7986160582058
7543 4.79930581742092
7544 4.79999442840681
7545 4.80068189339216
7546 4.80136821503217
7547 4.80205339568292
7548 4.80273743796188
7549 4.80342034410585
7550 4.80410211683281
7551 4.8047827584885
7552 4.80546227135403
7553 4.80614065791764
7554 4.80681792101456
7555 4.80749406287047
7556 4.80816908578411
7557 4.80884299179645
7558 4.80951578409547
7559 4.8101874644153
7560 4.81085803554997
7561 4.81152749954668
7562 4.81219585914269
7563 4.81286311613504
7564 4.81352927335248
7565 4.81419433296433
7566 4.81485829766747
7567 4.81552116943941
7568 4.81618295073346
7569 4.81684364385581
7570 4.81750325083199
7571 4.81816177428928
7572 4.81881921671724
7573 4.81947557982631
7574 4.82013086636807
7575 4.82078507850433
7576 4.82143821847914
7577 4.82209028883432
7578 4.82274129115584
7579 4.82339122843648
7580 4.82404010227358
7581 4.8246879151593
7582 4.82533466940182
7583 4.82598036749662
7584 4.8266250113167
7585 4.82726860322022
7586 4.82791114525972
7587 4.82855264007266
7588 4.82919308930386
7589 4.82983249514662
7590 4.83047086000197
7591 4.83110818620674
7592 4.83174447563907
7593 4.832379730627
7594 4.8330139533676
7595 4.83364714585939
7596 4.83427931018655
7597 4.83491044886739
7598 4.83554056400202
7599 4.83616965700089
7600 4.83679773078738
7601 4.83742478710892
7602 4.8380508280787
7603 4.83867585581565
7604 4.83929987280624
7605 4.83992288082023
7606 4.84054488202296
7607 4.84116587888185
7608 4.84178587338955
7609 4.84240486744853
7610 4.84302286362434
7611 4.84363986335565
7612 4.84425586896791
7613 4.84487088247341
7614 4.84548490565654
7615 4.84609794101235
7616 4.84670999054997
7617 4.84732105563939
7618 4.84793113883211
7619 4.8485402420562
7620 4.84914836749037
7621 4.84975551678322
7622 4.85036169207656
7623 4.8509668951738
7624 4.85157112851027
7625 4.85217439357663
7626 4.85277669298185
7627 4.85337802786116
7628 4.85397840079329
7629 4.85457781352093
7630 4.85517626813731
7631 4.85577376641987
7632 4.85637031045994
7633 4.85696590223373
7634 4.85756054314507
7635 4.8581542356188
7636 4.85874698142013
7637 4.85933878247386
7638 4.85992964089437
7639 4.8605195584467
7640 4.86110853674532
7641 4.86169657791484
7642 4.86228368406704
7643 4.86286985671902
7644 4.86345509796292
7645 4.86403940941433
7646 4.86462279319652
7647 4.86520525139597
7648 4.86578678562623
7649 4.86636739732896
7650 4.86694708895175
7651 4.86752586200343
7652 4.86810371873435
7653 4.86868066042171
7654 4.86925668936916
7655 4.86983180716181
7656 4.87040601596065
7657 4.87097931681954
7658 4.87155171223494
7659 4.87212320362647
7660 4.87269379310281
7661 4.87326348215367
7662 4.87383227297658
7663 4.87440016706839
7664 4.87496716629518
7665 4.87553327221884
7666 4.8760984870222
7667 4.87666281221598
7668 4.8772262496933
7669 4.8777888007583
7670 4.87835046765354
7671 4.87891125192592
7672 4.87947115531187
7673 4.88003017973572
7674 4.88058832672364
7675 4.88114559792667
7676 4.88170199519173
7677 4.88225752026556
7678 4.88281217513374
7679 4.88336596113916
7680 4.88391888000722
7681 4.88447093352458
7682 4.88502212231778
7683 4.88557244890009
7684 4.88612191548217
7685 4.88667052323826
7686 4.88721827396419
7687 4.88776516949614
7688 4.88831121143856
7689 4.88885640147137
7690 4.88940074131957
7691 4.88994423268977
7692 4.89048687678549
7693 4.89102867577701
7694 4.89156963096192
7695 4.89210974417355
7696 4.89264901711356
7697 4.89318745121093
7698 4.89372504849687
7699 4.89426181010398
7700 4.89479773784858
7701 4.89533283324773
7702 4.89586709794226
7703 4.89640053368196
7704 4.89693314208044
7705 4.89746492453199
7706 4.89799588275916
7707 4.89852601838405
7708 4.89905533288689
7709 4.89958382801399
7710 4.90011150505866
7711 4.90063836585555
7712 4.9011644119851
7713 4.90168964480017
7714 4.90221406603094
7715 4.90273767721739
7716 4.90326047990496
7717 4.90378247562492
7718 4.90430366596758
7719 4.90482405240936
7720 4.90534363647534
7721 4.90586241965103
7722 4.90638040369988
7723 4.90689759007153
7724 4.90741398019987
7725 4.90792957548182
7726 4.90844437759663
7727 4.90895838805088
7728 4.90947160823812
7729 4.90998404004319
7730 4.91049568481386
7731 4.9110065436325
7732 4.91151661828248
7733 4.91202591039604
7734 4.91253442118195
7735 4.91304215249021
7736 4.91354910531505
7737 4.91405528142717
7738 4.91456068249145
7739 4.91506530972717
7740 4.91556916447509
7741 4.91607224818213
7742 4.91657456273736
7743 4.91707610911692
7744 4.91757688915669
7745 4.91807690418832
7746 4.91857615575964
7747 4.91907464492912
7748 4.91957237319091
7749 4.92006934237079
7750 4.92056555348427
7751 4.92106100790541
7752 4.92155570761075
7753 4.92204965358222
7754 4.9225428476401
7755 4.92303529094417
7756 4.92352698447906
7757 4.9240179302639
7758 4.92450812916206
7759 4.92499758302311
7760 4.92548629317023
7761 4.92597426072412
7762 4.92646148738729
7763 4.92694797432294
7764 4.9274337231341
7765 4.92791873496685
7766 4.92840301123809
7767 4.9288865535305
7768 4.92936936292117
7769 4.92985144113938
7770 4.9303327892544
7771 4.93081340868308
7772 4.93129330053712
7773 4.93177246675921
7774 4.93225090819091
7775 4.9327286265131
7776 4.93320562272916
7777 4.93368189820258
7778 4.93415745468804
7779 4.93463182249719
7780 4.93510547601626
7781 4.93557841638006
7782 4.9360506450235
7783 4.93652216327121
7784 4.93699297250264
7785 4.9374630738055
7786 4.93793246892661
7787 4.93840115836482
7788 4.93886914413864
7789 4.93933642740391
7790 4.93980300904008
7791 4.94026889135501
7792 4.94073407465329
7793 4.94119856053323
7794 4.94166235044341
7795 4.94212544524601
7796 4.94258784671098
7797 4.94304955572524
7798 4.94351057377802
7799 4.94397090232145
7800 4.94443054224082
7801 4.94488949508448
7802 4.94534776211313
7803 4.94580534444117
7804 4.94626224356465
7805 4.94671846025824
7806 4.94717399629591
7807 4.94762885286882
7808 4.94808303103473
7809 4.94853653243746
7810 4.94898935836337
7811 4.94944151031731
7812 4.94989298939387
7813 4.95034379676566
7814 4.95079393369
7815 4.95124340153975
7816 4.95169220138505
7817 4.95214033432941
7818 4.95258780186215
7819 4.95303460513119
7820 4.95348074525112
7821 4.9539262234956
7822 4.95437104111458
7823 4.95481519885954
7824 4.95525869834562
7825 4.9557015406839
7826 4.9561437272199
7827 4.9565852588884
7828 4.95702613682539
7829 4.95746636247093
7830 4.95790593703341
7831 4.95834486151094
7832 4.95878313696997
7833 4.95922076444291
7834 4.95965774571951
7835 4.96009408162357
7836 4.96052977304463
7837 4.96096482152498
7838 4.96139922807365
7839 4.96183299367069
7840 4.9622661201022
7841 4.96269860779461
7842 4.96313045825392
7843 4.96356167257
7844 4.96399225198211
7845 4.9644221974568
7846 4.96485151026388
7847 4.96528019123567
7848 4.96570824183179
7849 4.96613566295347
7850 4.96656245589857
7851 4.96698862178413
7852 4.96741416174683
7853 4.96783907686746
7854 4.96826336789793
7855 4.96868703643315
7856 4.96911008340883
7857 4.96953251001698
7858 4.96995431741845
7859 4.97037550673951
7860 4.97079607860032
7861 4.97121603465794
7862 4.97163537552438
7863 4.97205410263326
7864 4.97247221703305
7865 4.97288971970435
7866 4.97330661192255
7867 4.97372289451749
7868 4.97413856880209
7869 4.97455363580412
7870 4.97496809675169
7871 4.97538195247072
7872 4.9757952039923
7873 4.97620785269369
7874 4.97661989920441
7875 4.97703134472637
7876 4.97744219058407
7877 4.97785243758739
7878 4.97826208657969
7879 4.97867113914
7880 4.97907959593603
7881 4.97948745820805
7882 4.97989472718439
7883 4.98030140363087
7884 4.98070748870638
7885 4.98111298337774
7886 4.98151788855413
7887 4.98192220544659
7888 4.98232593518997
7889 4.9827290784423
7890 4.98313055582909
7891 4.98353145115675
7892 4.98393176507765
7893 4.98433149867076
7894 4.98473065289647
7895 4.9851292287002
7896 4.98552722750461
7897 4.98592465031116
7898 4.98632149781658
7899 4.98671777078243
7900 4.98711347081592
7901 4.98750859849782
7902 4.98790315496435
7903 4.98829714128899
7904 4.98869055874488
7905 4.98908340781719
7906 4.98947568990892
7907 4.989867405588
7908 4.9902585561768
7909 4.99064914248271
7910 4.99103916566248
7911 4.99142862623565
7912 4.99181752544735
7913 4.99220586455877
7914 4.99259364408725
[Figure (plot data omitted). Legend: SMC, Test observations, Observations.]
Demonstration on the Apple Inc. stock price data set. Our method learns the drift as an MLP neural net and predicts into the future.
Finance data. We model the trend of the share price of Apple Inc. (8537 trading days). We experiment with three models under 5-fold cross-validation: a sparse GP regression model with 500 inducing points <cit.> and a sum kernel of Const.+Lin.++ as in <cit.>, which gives NLPD 1.44 ± 0.70 / RMSE 0.91 ± 0.54; our method with a linear DP (OU) prior, which gives NLPD 1.08 ± 0.45 / RMSE 0.77 ± 0.41; and our method with a neural network drift DP, which gives NLPD 0.81 ± 0.08 / RMSE 0.51 ± 0.08. The models incorporate different priors in the form of the kernel and the prior DP, giving different flexibility to the resulting variational posterior. The aim is to learn the underlying process of the stock price, which we measure in terms of NLPD on the hold-out set. Of all the models, our method with a NN drift f_p gives the best NLPD value, as it is the most flexible. <ref> shows simulated predictions from the learnt prior DP (details in <ref>).
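To make the forecasting setup concrete, the following is a minimal, self-contained sketch of rolling a learned MLP drift f_p forward with Euler–Maruyama to obtain sample-path forecasts such as those in the figure. This is not the implementation used in the paper; the MLP weights, the scalar diffusion coefficient sigma, the step size, and the horizon are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of a one-hidden-layer MLP drift f_p: (x, t) -> R,
# standing in for the drift learned from the prior DP, plus an assumed
# constant diffusion coefficient.
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)
sigma = 0.1

def drift(x, t):
    """Evaluate the MLP drift f_p([x, t]) for a batch of states x at time t."""
    inp = np.stack([x, np.full_like(x, t)], axis=-1)  # (n_paths, 2)
    h = np.tanh(inp @ W1 + b1)                        # (n_paths, 32)
    return (h @ W2 + b2).ravel()                      # (n_paths,)

def euler_maruyama(x0, horizon, dt=1.0, n_paths=20):
    """Simulate sample paths of dx = f_p(x, t) dt + sigma dW starting from x0 at t = 0."""
    n_steps = int(horizon / dt)
    paths = np.full((n_paths, n_steps + 1), float(x0))
    for k in range(n_steps):
        t = k * dt
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        paths[:, k + 1] = paths[:, k] + drift(paths[:, k], t) * dt + sigma * dw
    return paths

# Forecast 100 steps ahead from the last observed (normalised) price.
samples = euler_maruyama(x0=1.0, horizon=100.0)
print(samples.mean(axis=0)[-1], samples.std(axis=0)[-1])
```

The spread of the simulated paths provides the predictive uncertainty that the NLPD metric above evaluates against held-out prices.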
Vehicle tracking. We model a 2D trajectory of vehicle movement from GPS coordinates recorded over time. The data set consists of 6373 observation points collected over 106 minutes. The outputs are modelled with two independent DPs learned jointly. Similar to the setup in <cit.>, we split the data into chunks of 30 s and perform 10-fold cross-validation. We experiment with two models: our method with a NN drift DP, which gives NLPD 0.82 ± 0.43 / RMSE 0.06 ± 0.03, and our method with a linear DP (OU), which gives NLPD 0.67 ± 0.19 / RMSE 0.13 ± 0.04. Our method with a NN drift gives the better RMSE value, primarily because of the flexibility of the DP to model the trajectory (details in <ref>).
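For reference, the NLPD and RMSE figures quoted in these experiments can be computed as in the sketch below. It assumes Gaussian predictive marginals N(mu_i, var_i) at the held-out inputs, which is the usual convention; the exact evaluation code used for the paper may normalise differently.

```python
import numpy as np

def nlpd(y, mu, var):
    """Mean negative log predictive density under per-point Gaussian marginals."""
    return float(np.mean(0.5 * np.log(2.0 * np.pi * var) + 0.5 * (y - mu) ** 2 / var))

def rmse(y, mu):
    """Root mean squared error of the predictive mean."""
    return float(np.sqrt(np.mean((y - mu) ** 2)))

# Illustrative fold with made-up predictions (not values from the paper).
y = np.array([0.10, -0.30, 0.50])
mu = np.array([0.05, -0.20, 0.40])
var = np.array([0.04, 0.09, 0.01])
print(nlpd(y, mu, var), rmse(y, mu))
```

Reported numbers are then averaged over the cross-validation folds (5-fold for the finance data, 10-fold over 30 s chunks for the vehicle tracking data).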
§ DISCUSSION AND CONCLUSION
Our method provides a principled and scalable approach for variational inference in models with latent non-linear diffusion processes, parameterized via a time-variant linear Itô SDE.
We argue that the Gaussian VI approach commonly used for inference in GP models can readily be extended to the DP prior setting, and propose a unifying approach for both the GP and DP prior settings.
Our work fixes practical problems in the seminal work by <cit.> (<ref>), and provides an important building block for approximate inference and learning under diffusion processes.
While we tackle the core SDE tooling, diffusion processes have recently gained traction across machine learning, for example in image generation <cit.>, reinforcement learning <cit.>, and time-series modelling <cit.>. We see our work as an important component in future methods, one that can be extended to handle more complex scenarios with additional constraints or priors.
§ REFERENCES
[Abadi et al.(2015)Abadi, Agarwal, Barham, Brevdo, Chen, Citro,
Corrado, Davis, Dean, Devin, Ghemawat, Goodfellow, Harp, Irving, Isard, Jia,
Jozefowicz, Kaiser, Kudlur, Levenberg, Mané, Monga, Moore, Murray, Olah,
Schuster, Shlens, Steiner, Sutskever, Talwar, Tucker, Vanhoucke, Vasudevan,
Viégas, Vinyals, Warden, Wattenberg, Wicke, Yu, and
Zheng]2015tensorflow
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado,
A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving,
M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg,
D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens,
B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan,
F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and
X. Zheng.
TensorFlow: Large-scale machine learning on heterogeneous systems,
2015.
URL <https://www.tensorflow.org/>.
[Adam et al.(2021)Adam, Chang, Khan, and Solin]adam2021dual
V. Adam, P. Chang, M. E. E. Khan, and A. Solin.
Dual parameterization of sparse variational Gaussian processes.
In Advances in Neural Information Processing Systems 34
(NeurIPS), pages 11474–11486. Curran Associates, Inc., 2021.
[Adam et al.(2022)Adam, Artemev, Durrande, Eleftheriadis, Hayes,
Hensman, Jennings, John, Leedham, McLeod, Saul, Verma, Wei, and
Willis]Markovflow_2022
V. Adam, A. Artemev, N. Durrande, S. Eleftheriadis, A. Hayes, J. Hensman,
J. Jennings, S. John, J. A. Leedham, J. A. McLeod, A. Saul, P. Verma, Y. Wei,
and S. Willis.
Markovflow, 2022.
URL <https://github.com/secondmind-labs/markovflow>.
[Amari(1998)]amari1998natural
S. I. Amari.
Natural gradient works efficiently in learning.
Neural Computation, 10(2):251–276, 1998.
[Andrieu et al.(2010)Andrieu, Doucet, and
Holenstein]andrieu2010particle
C. Andrieu, A. Doucet, and R. Holenstein.
Particle Markov chain Monte Carlo methods.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 72(3):269–342, 2010.
[Archambeau and Opper(2011)]archambeau_opper_2011
C. Archambeau and M. Opper.
Approximate Inference for Continuous-Time Markov Processes,
pages 125–140.
Cambridge University Press, 2011.
[Archambeau et al.(2007a)Archambeau, Cornford, Opper, and
Shawe-Taylor]pmlr-v1-archambeau07a
C. Archambeau, D. Cornford, M. Opper, and J. Shawe-Taylor.
Gaussian process approximations of stochastic differential
equations.
In Gaussian Processes in Practice, volume 1 of
Proceedings of Machine Learning Research, pages 1–16. PMLR,
2007a.
[Archambeau et al.(2007b)Archambeau, Opper, Shen,
Cornford, and Shawe-taylor]NIPS2007_818f4654
C. Archambeau, M. Opper, Y. Shen, D. Cornford, and J. Shawe-taylor.
Variational inference for diffusion processes.
In Advances in Neural Information Processing Systems (NIPS),
pages 17–24. Curran Associates, Inc., 2007b.
[Blei et al.(2017)Blei, Kucukelbir, and McAuliffe]blei2017variational
D. M. Blei, A. Kucukelbir, and J. D. McAuliffe.
Variational inference: A review for statisticians.
Journal of the American Statistical Association, 112(518):859–877, 2017.
[Byron et al.(2004)Byron, Shenoy, and Sahani]byron2004derivation
M. Y. Byron, K. V. Shenoy, and M. Sahani.
Derivation of Kalman filtering and smoothing equations.
Technical report, Stanford University, 2004.
[Challis and Barber(2013)]JMLR:v14:challis13a
E. Challis and D. Barber.
Gaussian Kullback-Leibler approximate inference.
Journal of Machine Learning Research, 14:2239–2286, 2013.
[Chang et al.(2020)Chang, Wilkinson, Khan, and Solin]chang2020fast
P. E. Chang, W. J. Wilkinson, M. E. Khan, and A. Solin.
Fast variational learning in state-space Gaussian process models.
In Proceedings of the 30th International Workshop on Machine
Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2020.
[Chen et al.(2018)Chen, Rubanova, Bettencourt, and
Duvenaud]chen2018neural
R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud.
Neural ordinary differential equations.
In Advances in Neural Information Processing Systems 31
(NeurIPS), pages 6571–6583, 2018.
[Chopin et al.(2020)Chopin, Papaspiliopoulos,
et al.]chopin2020introduction
N. Chopin, O. Papaspiliopoulos, et al.
An Introduction to Sequential Monte Carlo.
Springer, 2020.
[Csató and Opper(2002)]csato2002sparse
L. Csató and M. Opper.
Sparse on-line Gaussian processes.
Neural Computation, 14(3):641–668, 2002.
[Dhariwal and Nichol(2021)]dhariwal2021diffusion
P. Dhariwal and A. Q. Nichol.
Diffusion models beat GANs on image synthesis.
In Advances in Neural Information Processing Systems 35
(NeurIPS), pages 8780–8794. Curran Associates, Inc., 2021.
[Duncker et al.(2019)Duncker, Bohner, Boussard, and
Sahani]duncker2019learning
L. Duncker, G. Bohner, J. Boussard, and M. Sahani.
Learning interpretable continuous-time models of latent stochastic
dynamical systems.
In Proceedings of the 36th International Conference on Machine
Learning (ICML), volume 97 of Proceedings of Machine Learning
Research, pages 1726–1734. PMLR, 2019.
[Eraker(2001)]dynamical_system_finance
B. Eraker.
MCMC analysis of diffusion models with application to finance.
Journal of Business & Economic Statistics, 19:177–191, 2001.
[García et al.(2017)García, Otero, Félix, Presedo, and
Márquez]PhysRevE.96.022104
C. A. García, A. Otero, P. Félix, J. Presedo, and D. G. Márquez.
Nonparametric estimation of stochastic differential equations with
sparse Gaussian processes.
Physical Review E, 96, 2017.
[García-Fernández et al.(2015)García-Fernández,
Svensson, Morelande, and Särkkä]garcia2015posterior
Á. F. García-Fernández, L. Svensson, M. R. Morelande, and
S. Särkkä.
Posterior linearization filter: Principles and implementation using
sigma points.
IEEE Transactions on Signal Processing, 63(20):5561–5573, 2015.
[García-Fernández et al.(2016)García-Fernández,
Svensson, and Särkkä]garcia2016iterated
Á. F. García-Fernández, L. Svensson, and S. Särkkä.
Iterated posterior linearization smoother.
IEEE Transactions on Automatic Control, 62(4):2056–2063, 2016.
[García-Fernández et al.(2019)García-Fernández,
Tronarp, and Särkkä]garcia2019gaussian
Á. F. García-Fernández, F. Tronarp, and S. Särkkä.
Gaussian process classification using posterior linearization.
IEEE Signal Processing Letters, 26(5):735–739, 2019.
[Girsanov(1960)]Girsanov
I. Girsanov.
On transforming a certain class of stochastic processes by absolutely
continuous substitution of measures.
Theory of Probability and Its Applications, 5:314–330, 1960.
[Glorot et al.(2011)Glorot, Bordes, and Bengio]pmlr-v15-glorot11a
X. Glorot, A. Bordes, and Y. Bengio.
Deep sparse rectifier neural networks.
In Proceedings of the Fourteenth International Conference on
Artificial Intelligence and Statistics (AISTATS), volume 15 of
Proceedings of Machine Learning Research, pages 315–323. PMLR, 2011.
[Golightly and Wilkinson(2011)]dynamical_system_gene
A. Golightly and D. J. Wilkinson.
Bayesian parameter inference for stochastic biochemical network
models using particle Markov chain Monte carlo.
Interface Focus, 1:0 807–820, 2011.
[Goodfellow et al.(2016)Goodfellow, Bengio, and
Courville]goodfellow2016deep
I. Goodfellow, Y. Bengio, and A. Courville.
Deep Learning.
MIT Press, 2016.
[Higgs(2011)]higgs2011approximate
M. C. Higgs.
Approximate Inference for State-Space Models.
PhD thesis, University College London, London, UK, 2011.
[Ho et al.(2020)Ho, Jain, and Abbeel]ho2020denoise
J. Ho, A. Jain, and P. Abbeel.
Denoising diffusion probabilistic models.
In Advances in Neural Information Processing Systems 33
(NeurIPS), pages 6840–6851. Curran Associates, Inc., 2020.
[Janner et al.(2022)Janner, Du, Tenenbaum, and
Levine]Janner2022planning
M. Janner, Y. Du, J. B. Tenenbaum, and S. Levine.
Planning with diffusion for flexible behavior synthesis.
In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu,
and S. Sabato, editors, International Conference on Machine Learning
(ICML), volume 162, pages 9902–9915. PMLR, 2022.
[Karatzas and Shreve(1998)]karatzas1998brownian
I. Karatzas and S. E. Shreve.
Brownian motion.
In Brownian Motion and Stochastic Calculus, pages 47–127.
Springer, 1998.
[Khan and Lin(2017)]khan2017conjugate
M. E. Khan and W. Lin.
Conjugate-computation variational inference: Converting variational
inference in non-conjugate models to inferences in conjugate models.
In Proceedings of the 20th International Conference on
Artificial Intelligence and Statistics (AISTATS), volume 54 of
Proceedings of Machine Learning Research, pages 878–887. PMLR, 2017.
[Khan and Nielsen(2018)]khan2018fast
M. E. Khan and D. Nielsen.
Fast yet simple natural-gradient descent for variational inference in
complex models.
In 2018 International Symposium on Information Theory and Its
Applications (ISITA), pages 31–35. IEEE, 2018.
[Kidger et al.(2021)Kidger, Foster, Li, and Lyons]kidger2021efficient
P. Kidger, J. Foster, X. C. Li, and T. Lyons.
Efficient and accurate gradients for neural SDEs.
In Advances in Neural Information Processing Systems 34
(NeurIPS), pages 18747–18761. Curran Associates, Inc., 2021.
[Köhs et al.(2021)Köhs, Alt, and Koeppl]kohs2021variational
L. Köhs, B. Alt, and H. Koeppl.
Variational inference for continuous-time switching dynamical
systems.
In Advances in Neural Information Processing Systems 34
(NeurIPS), volume 34, pages 20545–20557. Curran Associates, Inc., 2021.
[Kuss and Rasmussen(2005)]kuss2005assessing
M. Kuss and C. E. Rasmussen.
Assessing approximate inference for binary Gaussian process
classification.
Journal of Machine Learning Research (JMLR), 6:1679–1704, 2005.
[Li et al.(2020a)Li, Wong, Chen, and
Duvenaud]li2020scalable
X. Li, T.-K. L. Wong, R. T. Q. Chen, and D. Duvenaud.
Scalable gradients for stochastic differential equations.
In Proceedings of the Twenty Third International Conference on
Artificial Intelligence and Statistics, volume 108 of Proceedings of
Machine Learning Research, pages 3870–3882. PMLR, 2020a.
[Li et al.(2020b)Li, Wong, Chen, and
Duvenaud]pmlr-v118-li20a
X. Li, T.-K. L. Wong, R. T. Q. Chen, and D. K. Duvenaud.
Scalable gradients and variational inference for stochastic
differential equations.
In Proceedings of The 2nd Symposium on Advances in Approximate
Bayesian Inference, volume 118 of Proceedings of Machine Learning
Research, pages 1–28. PMLR, 2020b.
[Lin et al.(2019)Lin, Khan, and Schmidt]lin2019fast
W. Lin, M. E. Khan, and M. Schmidt.
Fast and simple natural-gradient variational inference with mixture
of exponential-family approximations.
In Proceedings of the 36th International Conference on Machine
Learning (ICML), volume 97 of Proceedings of Machine Learning
Research, pages 3992–4002. PMLR, 2019.
[Lindsten et al.(2014)Lindsten, Jordan, and
Schon]lindsten2014particle
F. Lindsten, M. I. Jordan, and T. B. Schon.
Particle Gibbs with ancestor sampling.
Journal of Machine Learning Research (JMLR), 15:2145–2184, 2014.
[Minka(2001a)]minka2001expectation
T. P. Minka.
Expectation propagation for approximate Bayesian inference.
In Proceedings of the 17th Conference in Uncertainty in
Artificial Intelligence, pages 362–369. Morgan Kaufmann,
2001a.
[Minka(2001b)]minka2001family
T. P. Minka.
A Family of Algorithms for Approximate Bayesian Inference.
PhD thesis, Massachusetts Institute of Technology,
2001b.
[Neal(2001)]neal2001annealed
R. M. Neal.
Annealed importance sampling.
Statistics and Computing, 11:125–139, 2001.
[Neal and Hinton(1998)]neal1998view
R. M. Neal and G. E. Hinton.
A view of the EM algorithm that justifies incremental, sparse, and
other variants.
In Learning in Graphical Models, pages 355–368. Springer,
1998.
[Nickisch and Rasmussen(2008)]nickisch2008approximations
H. Nickisch and C. E. Rasmussen.
Approximations for binary Gaussian process classification.
Journal of Machine Learning Research (JMLR), 9:2035–2078, 2008.
[Øksendal(2003)]Oksendal:2003
B. Øksendal.
Stochastic Differential Equations: An Introduction with
Applications.
Springer, New York, NY, sixth edition, 2003.
[Park et al.(2022)Park, Lee, and Kwon]Park2022sde
S. W. Park, K. Lee, and J. Kwon.
Neural Markov controlled SDE: Stochastic optimization for
continuous-time data.
In International Conference on Learning Representations
(ICLR), 2022.
[Rasmussen and Williams(2006)]Rasmussen_Williams_2006
C. E. Rasmussen and C. K. I. Williams.
Gaussian Processes for Machine Learning.
MIT Press, 2006.
[Rubanova et al.(2019)Rubanova, Chen, and Duvenaud]rubanova2019latent
Y. Rubanova, R. T. Chen, and D. K. Duvenaud.
Latent ordinary differential equations for irregularly-sampled time
series.
In Advances in Neural Information Processing Systems 32
(NeurIPS), pages 5320–5330. Curran Associates, Inc., 2019.
[Ruttor et al.(2013)Ruttor, Batz, and Opper]NIPS2013_021bbc7e
A. Ruttor, P. Batz, and M. Opper.
Approximate Gaussian process inference for the drift function in
stochastic differential equations.
In Advances in Neural Information Processing Systems 26
(NIPS), pages 2040–2048. Curran Associates, Inc., 2013.
[Ryder et al.(2018)Ryder, Golightly, McGough, and
Prangle]pmlr-v80-ryder18a
T. Ryder, A. Golightly, A. S. McGough, and D. Prangle.
Black-box variational inference for stochastic differential
equations.
In Proceedings of the 35th International Conference on Machine
Learning (ICML), volume 80 of Proceedings of Machine Learning
Research, pages 4423–4432. PMLR, 2018.
[Särkkä and Solin(2019)]Sarkka+Solin:2019
S. Särkkä and A. Solin.
Applied Stochastic Differential Equations.
Cambridge University Press, 2019.
[Sato(2001)]sato2001online
M.-A. Sato.
Online model selection based on the variational Bayes.
Neural Computation, 13(7):1649–1681, 2001.
[Solin and Särkkä(2015)]pmlr-v38-solin15
A. Solin and S. Särkkä.
State space methods for efficient inference in Student-t process
regression.
In Proceedings of the Eighteenth International Conference on
Artificial Intelligence and Statistics (AISTATS), volume 38 of
Proceedings of Machine Learning Research, pages 885–893. PMLR, 2015.
[Song et al.(2021)Song, Durkan, Murray, and Ermon]song2021maximum
Y. Song, C. Durkan, I. Murray, and S. Ermon.
Maximum likelihood training of score-based diffusion models.
In Advances in Neural Information Processing Systems 35
(NeurIPS), pages 1415–1428. Curran Associates, Inc., 2021.
[Svensson et al.(2015)Svensson, Schön, and
Kok]Svensson+Schon+Kok:2015
A. Svensson, T. B. Schön, and M. Kok.
Nonlinear state space smoothing using the conditional particle
filter.
In Proceedings of the 17th IFAC Symposium on System
Identification (SYSID), volume 48, pages 975–980. Elsevier, 2015.
[Tashiro et al.(2021)Tashiro, Song, Song, and Ermon]tashiro2021csdi
Y. Tashiro, J. Song, Y. Song, and S. Ermon.
CSDI: Conditional score-based diffusion models for probabilistic
time series imputation.
In Advances in Neural Information Processing Systems 35
(NeurIPS), pages 24804–24816. Curran Associates, Inc., 2021.
[Titsias(2009)]pmlr-v5-titsias09a
M. Titsias.
Variational learning of inducing variables in sparse Gaussian
processes.
In Proceedings of the Twelth International Conference on
Artificial Intelligence and Statistics (AISTATS), volume 5 of
Proceedings of Machine Learning Research, pages 567–574. PMLR, 2009.
[Tronarp et al.(2018)Tronarp, Garcia-Fernandez, and
Särkkä]tronarp2018iterative
F. Tronarp, A. F. Garcia-Fernandez, and S. Särkkä.
Iterative filtering and smoothing in nonlinear and non-Gaussian
systems using conditional moments.
IEEE Signal Processing Letters, 25(3):408–412, 2018.
[Van Kampen(1992)]van1992stochastic
N. G. Van Kampen.
Stochastic Processes in Physics and Chemistry, volume 1.
Elsevier, 1992.
[Vargas et al.(2021)Vargas, Thodoroff, Lamacraft, and
Lawrence]vargas2021solving
F. Vargas, P. Thodoroff, A. Lamacraft, and N. Lawrence.
Solving Schrödinger bridges via maximum likelihood.
Entropy, 23(9):1134, 2021.
[Whiteley(2010)]whiteley2010discussion
N. Whiteley.
Discussion on particle Markov chain Monte Carlo methods.
Journal of the Royal Statistical Society: Series B, 72(3):306–307, 2010.
[Wilkinson et al.(2023)Wilkinson, Särkkä, and
Solin]wilkinson2023bayes
W. J. Wilkinson, S. Särkkä, and A. Solin.
Bayes–Newton methods for approximate Bayesian inference with
PSD guarantees.
Journal of Machine Learning Research (JMLR), 24(83):1–50, 2023.
[Yildiz et al.(2018)Yildiz, Heinonen, Intosalmi, Mannerstrom, and
Lahdesmaki]Cagatay_learning_SDE
C. Yildiz, M. Heinonen, J. Intosalmi, H. Mannerstrom, and H. Lahdesmaki.
Learning stochastic differential equations with Gaussian processes
without gradient matching.
In 2018 IEEE 28th International Workshop on Machine Learning
for Signal Processing (MLSP), 2018.
Supplementary Material:
Variational Gaussian Process Diffusion Processes
This supplementary document is organized as follows.
<ref> describes exact inference for latent diffusion models, and highlights the properties of the posterior drift function.
<ref> describes an (intractable) variational algorithm to perform exact inference in latent diffusion models, using the method of Lagrangian multipliers to optimize a constrained objective.
<ref> provides the full derivations for the tractable variational algorithm of <cit.>, where the variational distribution is restricted to a set of Markovian Gaussian processes.
In <ref>, we propose an alternative parameterization for Markovian Gaussian processes by introducing a continuous exponential family description.
In <ref> we derive the Kullback–Leibler divergence between two diffusion processes, which is essential for many algorithms including variational inference.
<ref> provides derivations of the proposed method .
<ref> gives details about the Monte Carlo baselines and <ref> provides details about the experiment setup and insights about various results.
§ EXACT INFERENCE FOR LATENT DIFFUSION PROCESS MODELS
In this section, we describe exact posterior inference in models with a diffusion process prior, following the derivations in <cit.>.
Given the Markovian structure of the diffusion, the marginal posterior at time t factorizes as
p(_t |)
=
\underbrace{p(_t |_<t)}_{p^(F)_t(_t)} \, \underbrace{p(_≥ t|_t)/p(_≥ t|_<t)}_{ψ_t(_t)}.
This expression splits the contribution of observations before and after time t into two terms, corresponding to the continuous-time equivalents of the forward and backward (up to a constant) filtering distributions in discrete Markov chains <cit.>.
The forward filtering distribution p^(F)_t satisfies, in between observation times, the Kolmogorov forward equation <cit.>
(∂_t - →K_f) p^(F)_t = 0,
where →K_f is the Fokker–Planck operator defined for twice differentiable functions ϕ as
→K_f [ϕ()] := -∇^⊤ [f(, t)ϕ()] + 1/2(_c (∇∇^⊤) [ϕ()]) .
Similarly, the second term ψ_t(_t)—also known as the information filter—satisfies, in between observation times, the Kolmogorov backward equation
(∂_t + ←K_f) ψ_t = 0,
where ←K_f is the adjoint of the Fokker–Planck operator defined for twice differentiable functions ϕ as
←K_f [ϕ()] := f(, t)^⊤∇[ϕ()] + 1/2(_c (∇∇^⊤) [ϕ()]) .
When solving for p^(F)_t forward in time, at the discrete observation times t the corresponding observations are added to the conditioning set of p^(F)_t, leading to the instantaneous updates
p^(F)_t^+(_t^+) ∝ p^(F)_t^-(_t^-) p(_t |_t^-) .
The information filter ψ_t can be solved backwards in time with the terminal condition ψ_T = 1 and the discrete updates
Differentiating the marginal posterior distribution p(_t |) through time <cit.>, it can be shown that the posterior process shares the same diffusion as the prior process and that its drift h is given by:
h(_t, t) = f_p(_t, t) + _c∇log ψ_t(_t) .
This result reveals that the posterior process shares the same Markovian structure as the prior process. It also provides a recipe to compute the posterior marginals: (1) compute ψ_t backward in time via <ref> and the jump conditions <ref>, and (2) compute the posterior drift via <ref> and compute the marginals of the posterior process via <ref>. These steps are however intractable for most settings of non-linear drift and observation models.
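As a toy illustration of this two-pass recipe, the following numpy sketch solves the backward equation for ψ_t and the forward Fokker–Planck equation with the posterior drift on a one-dimensional grid; the OU-like drift, the single Gaussian observation at time T, and all numerical settings are illustrative assumptions, and the explicit finite-difference scheme is only meant to make the steps concrete, not to be a robust solver.
\begin{verbatim}
import numpy as np

# illustrative 1-d setup: OU-like prior drift, one Gaussian observation at t = T
xs = np.linspace(-4.0, 4.0, 401); dx = xs[1] - xs[0]
dt, T = 1e-4, 1.0
n_steps = round(T / dt)
Qc = 1.0
f = lambda x: -x                          # assumed prior drift f_p
y_obs, sig2 = 1.0, 0.1 ** 2               # assumed observation and noise variance

d_dx = lambda g: np.gradient(g, dx)
d2_dx2 = lambda g: np.gradient(np.gradient(g, dx), dx)

# backward pass for psi_t (Kolmogorov backward equation), terminal value p(y | x_T)
psi = np.zeros((n_steps + 1, xs.size))
psi[-1] = np.exp(-0.5 * (y_obs - xs) ** 2 / sig2)
for k in range(n_steps, 0, -1):
    p = psi[k]
    psi[k - 1] = p + dt * (f(xs) * d_dx(p) + 0.5 * Qc * d2_dx2(p))

# forward pass: Fokker-Planck equation with the posterior drift h = f + Qc d/dx log psi
q = np.exp(-0.5 * xs ** 2 / 0.25); q /= q.sum() * dx      # assumed p(x_0)
for k in range(n_steps):
    grad_log_psi = np.clip(d_dx(np.log(psi[k] + 1e-300)), -50.0, 50.0)
    h = f(xs) + Qc * grad_log_psi
    q = np.clip(q + dt * (-d_dx(h * q) + 0.5 * Qc * d2_dx2(q)), 0.0, None)

posterior_mean_at_T = np.sum(xs * q) * dx  # pulled from the prior mean towards y_obs
\end{verbatim}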
§ VARIATIONAL INFERENCE FOR LATENT DIFFUSION PROCESS MODELS (GENERAL CASE)
In this section, we derive the posterior process for latent diffusion models starting from the variational formulation of the problem.
We introduce the variational objective, describe an optimization procedure and discuss some properties of the solution.
We start from the variational objective for inference in latent diffusion models given in <ref>, which we rewrite here for completeness
ℒ(q) = _q() [log p(|)] - 1/2_0^τ_q(_t)f_q(_t, t) - f_p(_t, t)^2__c ^-1 t
- q(_0)p(_0).
The drift f_q^* of the optimal variational process q^* maximizing ℒ(q) can be derived from this variational objective using the tools of Lagrangian duality.
Let L̃(q,f_q) be the ELBO defined in <ref> where we have severed the dependency between q and f_q.
The optimization of the ELBO can be cast as the optimization problem
max_qL̃(q,f_q) subject to [ (∂_t - →K_f_q) q](,t) = 0, ∀, t,
where →K_f is the Fokker–Planck operator defined in <ref>.
We introduce the Lagrangian λ associated to this constrained optimization problem
𝔏(q,f_q, λ) =
L̃(q,f_q)+ λ(_t,t) [(∂_t - →K_f_q) q](_t,t) _t t .
A local optimum (q^*,f_q^*) of the original constrained optimization problem is associated to a unique pair of Lagrangian multipliers λ^* satisfying ∇_q, f_q, λ𝔏|_
q^*, f^*_q, λ^* = 0.
Introducing the change of variable λ(,t) = -logψ(,t),
the system of equations can be written as
0 = ∂ψ(,t)/∂ t + ←K_f [ψ(,t)] + ψ(,t)_i=1^N[log p(_i |_i)]δ(t-t_i).
We recover the posterior drift as in <ref>,
f_q(_t, t) = f_p(_t, t) + _c∇log ψ(_t, t).
§ GAUSSIAN VARIATIONAL INFERENCE FOR DIFFUSION PROCESSES ()
In this section, we focus on the case where the set of variational processes is restricted to that of Markovian Gaussian processes, as described in <ref>.
More formally, the variational process q is restricted to be a diffusion with an affine drift
Q = {
q : _t = (_t _t + _t) t + _t, _0 ∼ q(_0)
} .
The drift parameters _t ∈^d × d and _t ∈^d are associated to a unique set of marginal distributions q(_t), which are fully characterised by the mean and covariance =(_t, _t). We denote by =(_t, _t) the variational parameters of q.
The ELBO is given by
ℒ(,) = 𝔼_q()log p(|)
- 1/2_0^T_q(_t)(_t _t + _t) - f_p(_t, t)^2__c ^-1 t
- q(_0)p(_0) .
The constraints connecting the drift parameters to the marginal statistics (,) are
C[, ](t) =
[ _t - _t _t - _t; _t - _t _t - _t _t^⊤ - _c ] = 0 ∀ t.
The constrained optimization problem is thus
, max ℒ(,)
subject to C[, ](t) = 0 ∀ t.
A Lagrangian is constructed, with Lagrangian multipliers Θ = (_t, _t) associated to each of the two constraints as
𝔏(,,Θ) = ℒ(,) - _0^T⟨Θ , C[,](t)⟩ t .
A local optimum (^*,^*) of the original constrained optimization problem is associated to a unique pair of Lagrangian multipliers Θ^* satisfying ∇__t, _t, Θ_t𝔏|_^*_t, ^*_t, Θ^*_t = 0, ∀ t.
The system of equations can be re-expressed as:
^*_t = - ^*⊤_t Ψ^*_t - ^*_t^*_t - ∇_ℒ|_^* ,
^*_t = - ^*⊤_t ^*_t - ∇_ℒ|_^* ,
^*_t = _q^*[∇_ f] - 2 _c^* ,
^*_t = _q^*[f] - ^*_t ^*_t - ^*_t ,
0 = C[^*, ^*, ^*, ^*] .
§.§ Fixed point iterations
The system of equations derived earlier is not analytically solvable. It consists of self-consistency equations among the optimal variables ^*_t, Θ^*_t.
To find optimal solutions to this problem, a fixed-point algorithm is used in <cit.>, whereby a sequence of variables (^(k)_t, Θ^(k)_t) is constructed from the self-consistency equations as follows:
From <ref>: ^(k+1) ← _t = - ^(k)⊤_t _t - _t^(k)_t - ∇_ℒ|_^(k), with (T)=0 ,
From <ref>: ^(k+1) ← _t = - ^(k)⊤_t _t - ∇_ℒ|_^(k)
, with (T)=0 ,
From <ref>: ^(k+1) ← (1-ω)^(k) + ω(_q^(k)[∇_ f] - 2 _c^(k+1)) ,
From <ref>: ^(k+1) ← (1-ω)^(k) + ω( _q^(k)[f] - ^(k+1)_t ^(k)_t - ^(k+1)_t ) ,
From <ref>: ^(k+1) ← _t - ^(k+1)_t _t - ^(k+1)_t ,
From <ref>: ^(k+1) ← _t - ^(k+1)_t _t - _t _t^(k+1)⊤ - _c .
The iterations are run until convergence. In <ref>, the learning-rate ω is introduced for stability reasons. It enforces a degree of stickiness to the previous values for and in the sequence defined by the updates.
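The updates above require the Gaussian expectations 𝔼_q[f] and 𝔼_q[∇_ f] at every time point. A minimal one-dimensional sketch of computing these with Gauss–Hermite quadrature is shown below; the double-well drift and the marginal (m, S) are illustrative assumptions, and the surrounding fixed-point loop is not reproduced here.
\begin{verbatim}
import numpy as np

def gh_expectations(f, df, m, S, order=20):
    """E_q[f(x)] and E_q[f'(x)] for q = N(m, S) via Gauss-Hermite quadrature (1-d)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)  # probabilists' rule
    w = weights / weights.sum()
    x = m + np.sqrt(S) * nodes
    return np.sum(w * f(x)), np.sum(w * df(x))

# illustrative double-well drift and its derivative
f  = lambda x: 4.0 * x * (1.0 - x ** 2)
df = lambda x: 4.0 * (1.0 - 3.0 * x ** 2)

Ef, Edf = gh_expectations(f, df, m=0.5, S=0.2)
# Ef and Edf are the quantities entering the damped updates for b_t and A_t above
\end{verbatim}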
§ EXPONENTIAL FAMILY FOR MARKOVIAN GAUSSIAN PROCESSES
We propose an alternative parameterization for Markovian Gaussian processes by introducing a continuous exponential family description of such processes. This description corresponds to the continuous time limit of the exponential family characterization of discrete Gaussian Markov chains.
§.§ Linear Gaussian discrete Markov chains
We first consider a Linear Gaussian discrete Markov chain, also known as linear Gaussian state space model (LGSSM), which specifies the distribution of a finite collection of random variables = [_0, …, _N] as follows:
_0 ∼(_0,_0) ,
_i+1 = _i _i + _i + ε̂_i, ε̂_i∼(0, _i) .
The joint density over factorises as a chain p(_0,…,_N) =p(_0) ∏_i=1^N p(_i|_i-1).
It is common in the literature to parametrize the `drift' via statistics φ = {_i, _i, _i}_i=1^N.
It provides an intuitive description of the collection as a temporal process and also an effective way to compute the marginal means _i, covariances _i and cross covariances _i, defined as
_i+1 = 𝔼[_i+1] = _i _i + _i,
_i+1 = 𝔼[(_i+1- _i+1)(_i+1- _i+1)^⊤] = _i_i_i^⊤ + _i,
_i = 𝔼[(_i+1- _i+1)(_i- _i)^⊤] = _i _i .
The stacked parameters = {_i, _i+1, _i+1}_i=1^N constitute a parameterization of the LGSSM.
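As a sanity check of the recursions above, a small numpy sketch that propagates the marginal means, covariances, and cross-covariances of an LGSSM from the drift parameterization φ = {A_i, b_i, Q_i} is given below; the dimensions and parameter values are arbitrary assumptions.
\begin{verbatim}
import numpy as np

def lgssm_moments(m0, P0, As, bs, Qs):
    """Marginal means m_i, covariances S_i, cross-covariances V_i = Cov(x_{i+1}, x_i)."""
    ms, Ss, Vs = [m0], [P0], []
    for A, b, Q in zip(As, bs, Qs):
        Vs.append(A @ Ss[-1])                 # V_i = A_i S_i
        ms.append(A @ ms[-1] + b)             # m_{i+1} = A_i m_i + b_i
        Ss.append(A @ Ss[-1] @ A.T + Q)       # S_{i+1} = A_i S_i A_i^T + Q_i
    return ms, Ss, Vs

d, N = 2, 5
rng = np.random.default_rng(0)
As = [0.9 * np.eye(d) for _ in range(N)]
bs = [0.1 * rng.normal(size=d) for _ in range(N)]
Qs = [0.05 * np.eye(d) for _ in range(N)]
ms, Ss, Vs = lgssm_moments(np.ones(d), np.eye(d), As, bs, Qs)
\end{verbatim}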
A convenient alternative parameterization of LGSSMs is as a special case of conditional exponential family distributions <cit.>
p(_i+1|_i) =
exp [ ⟨𝖳_c(_i+1, _i), _i ⟩
- A_c(_i) ] ,
p(_0) =
exp [⟨𝖳(_0), _0 ⟩- A(_0)] ,
where for each conditional distribution, 𝖳_c(_i+1, _i) are the sufficient statistics, _i the natural parameters, and A_c(_i) the associated log partition function.
The initial state is distributed as a multivariate Gaussian with sufficient statistics and natural parameters
𝖳(_0) = [_0, _0 _0^⊤] ;
_0 = [ _0^-1 _0 , -1/2 _0^-1] .
By expanding the log conditional density as
log p(_i+1|_i)
= -1/2
(_i+1 - _i _i - _i)^⊤_i^-1
(_i+1 - _i _i - _i) + c
=
_i+1^⊤_i^-1_i
-1/2_i+1^⊤_i^-1_i+1
+
_i+1^⊤_i^-1_i _i
+ c̃,
we can identify
𝖳(_i+1, _i) = [_i+1, _i+1 _i+1^⊤, _i+1 _i^⊤] ;
_i = [ _i^-1 _i , -1/2 _i^-1 , _i^-1 _i],
where the inner product in <ref> (resp. <ref>) distributes over the sufficient statistics in <ref> (resp. <ref>) and is the standard inner product , →^⊤ for vectors and ,→Trace(^⊤) for matrices.
Finally, the expectation parameters for the conditional exponential family distribution are defined as
_i = 𝔼_p(_i+1,_i) [𝖳(_i+1, _i)] = [_i+1, _i+1 + _i+1_i+1^⊤, _i + _i+1] ,
_0 = 𝔼_p(_0)[𝖳_0(_0)] = [_0, _0 + _0 _0^⊤] .
The joint distribution over states is a (N+1)d-dimensional multivariate normal distribution in the exponential family
p(_0,…,_N) = exp [ ⟨𝖳 (), _p ⟩ - A(_p) ].
Building the joint density by multiplying the terms from the conditional exponential family description (<ref> and <ref>) reveals the sparse structure of the sufficient statistics. Indeed these are the union of the initial and conditional sufficient statistics in <ref> and <ref>. Importantly, they only include the outer product of identical states _i_i^⊤ or consecutive states _i_i+1^⊤.
These sufficient statistics can be conveniently written as 𝖳() = [, btd(^⊤)] where btd() sets entries of outside of the d-block tri-diagonals to zero.
An important property we use in the paper is that given two such LGSSMs indexed p_1, p_2 with natural parameters ^(1), ^(2), the Kullback–Leibler divergence satisfies
∇_^(1)p_1(;^(1))p_1(; ^(2)) = ^(1)- ^(2) .
An LGSSM is fully characterized by any of the parameterizations , , φ, and . There is a bijective mapping between each pair of these parameterizations, and requires computing some of these transformations, which we derive here for completeness.
φ to
_i = _i_i^-1
_i =_i+1 - _i _i _i^⊤
_i = _i+1 - _i _i
to
_i = [_i+1, _i+1 + _i+1_i+1^⊤, _i + _i+1_i^⊤]
_0 = [_0, _0 + _0 _0^⊤]
to
[_i+1,_i+1,_i] =
[^(1)_i, ^(2)_i -^(1)_i^(1)⊤_i, ^(3)_i - ^(1)_i^(1)⊤_i-1]
[_0, _0] = [^(1)_0 , ^(2)_0 - _0 _0^⊤]
to φ
_i = _i_i^-1 = (^(3)_i - ^(1)_i^(1)⊤_i-1)(^(2)_i-1 -^(1)_i-1^(1)⊤_i-1)^-1
_i = _i+1 - _i _i = ^(1)_i - (^(3)_i - ^(1)_i^(1)⊤_i-1)(^(2)_i-1 -^(1)_i-1^(1)⊤_i-1)^-1^(1)_i-1
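To make the interchangeability of these parameterizations concrete, the scalar sketch below maps drift parameters φ to marginal moments and back, checking that the round trip is exact; the parameter values are arbitrary assumptions and the natural/expectation-parameter step is omitted for brevity.
\begin{verbatim}
import numpy as np

# forward map: phi = (A_i, b_i, Q_i) -> marginal moments (m_i, S_i, V_i), scalar case
def phi_to_moments(m0, S0, A, b, Q):
    m, S, V = [m0], [S0], []
    for Ai, bi, Qi in zip(A, b, Q):
        V.append(Ai * S[-1])                   # V_i = A_i S_i
        m.append(Ai * m[-1] + bi)
        S.append(Ai * S[-1] * Ai + Qi)
    return np.array(m), np.array(S), np.array(V)

# inverse map: A_i = V_i S_i^{-1}, b_i = m_{i+1} - A_i m_i, Q_i = S_{i+1} - A_i S_i A_i
def moments_to_phi(m, S, V):
    A = V / S[:-1]
    b = m[1:] - A * m[:-1]
    Q = S[1:] - A * S[:-1] * A
    return A, b, Q

A = np.array([0.9, 0.8, 1.1]); b = np.array([0.1, -0.2, 0.0]); Q = np.array([0.05, 0.05, 0.05])
m, S, V = phi_to_moments(1.0, 0.3, A, b, Q)
A2, b2, Q2 = moments_to_phi(m, S, V)
assert np.allclose(A, A2) and np.allclose(b, b2) and np.allclose(Q, Q2)
\end{verbatim}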
§.§ Continuous case
In this section, we are interested in the limit case of a diffusion process with linear drift
_0 ∼(_0,_0),
_t = (_t _t + _t) t + _t ,
where _t denotes the Brownian motion with _c spectral density.
In a diffusion process, represents an infinitesimal state difference _t+ t - _t.
For such linear diffusions, the marginal mean and covariances can be shown to follow the following ordinary differential equations <cit.>:
_t 𝔼→_t → _t = _t _t + _t,
(_t-_t)(_t-_t)^⊤ 𝔼→_t → _t = _t _t + _t _t^⊤ + _c .
To match the discrete description, we need the third statistic _t = _t _t^⊤.
In a similar vein as in the discrete case, we can view the diffusion process as a conditional exponential family model
p(_t |_t) =
exp [ ⟨𝖳(_t, _t), _t ⟩ - A(_t , _t) ],
with sufficient statistics as
𝖳(, ) = [ + , ( + )( + )^⊤, ( + ) ^⊤ ]
and natural parameters as
_t = [ _c^-1_t , -1/2 ( t _c)^-1 , ( t _c)^-1 ( + _t t)] .
Equivalently, we can define the associated expectation parameters as _t = 𝔼 [𝖳(_t, _t)].
Importantly, the property still holds that for two diffusions p_1, p_2 with natural parameters ^(1), ^(2), the Kullback–Leibler divergence satisfies
∇_^(1)p_1(;^(1))p_1(; ^(2)) = ^(1)- ^(2).
§ KULLBACK–LEIBLER DIVERGENCE BETWEEN DIFFUSION PROCESSES
The Kullback–Leibler (KL) divergence is a commonly used divergence to measure the difference between two distributions.
In the case of diffusion processes, if both DPs are linear, they can be described as equivalent Gaussian processes, for which the KL can be calculated conveniently. If both DPs share the same diffusion coefficient, the Girsanov theorem <cit.> can be used to calculate the KL between them. Formally,
p : _t = f_p(_t, t) t + _t , _0 ∼ p(_0),
q : _t = f_q(_t, t) t + _t , _0 ∼ q(_0),
and both the processes p and q have the Brownian motion _t with _c spectral density. Then, the KL between q and p can be evaluated as
qp = 1/2_0^T_q(_t)f_q(_t, t) - f_p(_t, t)^2__c ^-1 t + q(_0)p(_0) .
For implementation purposes, we approximate the integral in the KL expression by the Riemann sum
qp≈1/2_i=0^⌊ T/Δ t ⌋_q(_iΔ t)f_q(_iΔ t, iΔ t) - f_p(_iΔ t, iΔ t)^2__c ^-1Δ t
+ q(_0)p(_0) .
This corresponds to the KL between two finite state space models (SSMs) with adequately chosen transition and noise functions.
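A minimal Monte Carlo sketch of this Riemann-sum approximation for one-dimensional processes is given below; the two drifts, the assumed Gaussian marginals of q, and the discretization are illustrative (in practice the expectation over q(_t) is taken under the Gaussian marginals produced by smoothing), and the KL between the initial state distributions has to be added separately.
\begin{verbatim}
import numpy as np

def kl_drift_term(f_q, f_p, q_means, q_vars, Qc, dt, n_mc=1000, seed=0):
    """Riemann-sum / Monte Carlo estimate of the drift term of KL[q || p] (1-d).

    q_means, q_vars: marginal means and variances of q on the time grid.
    The KL between the initial state distributions must be added separately.
    """
    rng = np.random.default_rng(seed)
    kl = 0.0
    for m, v in zip(q_means, q_vars):
        x = rng.normal(m, np.sqrt(v), size=n_mc)          # samples from q(x_t)
        kl += 0.5 * np.mean((f_q(x) - f_p(x)) ** 2) / Qc * dt
    return kl

# illustrative drifts: q is a linear (OU-like) approximation of a double-well prior p
f_p = lambda x: 4.0 * x * (1.0 - x ** 2)
f_q = lambda x: -2.0 * (x - 1.0)
dt = 0.01
grid = np.arange(0.0, 5.0, dt)
q_means, q_vars = np.ones_like(grid), 0.1 * np.ones_like(grid)  # assumed marginals of q
print(kl_drift_term(f_q, f_p, q_means, q_vars, Qc=1.0, dt=dt))
\end{verbatim}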
Here, we derive the KL between two finite non-linear SSMs. We consider a single transition for simplicity.
Consider the two SSMs:
p: {[ _0 ∼ N(_0^(p),_0^(p)); _1 = f_p(_0) + ε_p∼ N(0, _c^(p)), ].
q: {[ _0 ∼ N(_0^(q),_0^(q)); _1 = f_q(_0) + ε_q∼ N(0, _c^(q)), ].
Then, the KL between the two SSMs can be written as
qp =
_qlogq(_0,_1)/p(_0,_1) ,
=
_q(_0)logq(_0)/p(_0) + _q(_0,_1)logq(_1 |_0)/p(_1 |_0)
=
q(_0)p(_0) + _q(_0)q(_1 |_0)p(_1 |_0) .
The second term in <ref> can be split as
q(_1 |_0)p(_1 |_0) = _q(_1 |_0)log q(_1 |_0) - _q(_1 |_0)log p(_1 |_0)
= -H[q(_1 |_0)] - _q(_1 |_0)log p(_1 |_0) .
We have a closed form formula for the entropy, which is independent of _0:
H[q(_1 |_0)] = 1/2log |2π_c^(q)| + d/2 .
For the cross entropy term in <ref>, we have
- 2 _q(_1 |_0)log p(_1 |_0) = log |2π _c^(p)| + _q(_1 |_0)_1 - f_p(_0)^2__c^(p)-1
= log |2π_c^(p)| + _q(_1 |_0)(_1 - f_q(_0)) - (f_p(_0)- f_q(_0))^2__c^(p)-1
= log |2π_c^(p)|
+ _q(_1 |_0)[
_1 - f_q(_0)^2__c^(p)-1
+
f_p(_0)- f_q(_0)^2__c^(p)-1
-2
⟨_1- f_q(_0), f_p(_0)- f_q(_0)⟩__c^(p)-1]
= log |2π_c^(p)|
+ [_c^(p)-1_c^(q)]
+ f_p(_0)- f_q(_0)^2__c^(p)-1 ,
Now, taking the expectation under q(_0), we have
- 2 _q(_1,_0)log p(_1 |_0)
=log |2π_c^(p)|
+ [_c^(p)-1_c^(q)]
+ _q(_0)f_p(_0) - f_q(_0)^2__c^(p)-1 .
Thus, we have the following expression for the KL
q(_1|_0)p(_1|_0) = C(_c^(q),_c^(p)) + 1/2_q(_0)f_p(_0)- f_q(_0)^2__c^(p)-1 ,
where
C(_c^(q),_c^(p)) = 1/2(-log|_c^(q)|/|_c^(p)| - d
+ [_c^(p)-1_c^(q)]
) .
When the two SSMs share the same transition noise, C(_c^(q),_c^(p)=_c^(q))=0.
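The closed-form expression above can be checked numerically. The sketch below compares it against a direct Monte Carlo estimate of the expected conditional KL, for assumed scalar transition functions and noise variances.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# assumed scalar SSMs: x_1 = f(x_0) + noise
f_p, Sc_p = (lambda x: np.tanh(x)), 0.30
f_q, Sc_q = (lambda x: 0.8 * x), 0.20
m0, v0 = 0.5, 0.4                                   # q(x_0) = N(m0, v0)

# closed form: C(Sc_q, Sc_p) + 0.5 E_{q(x_0)} (f_p - f_q)^2 / Sc_p
C = 0.5 * (-np.log(Sc_q / Sc_p) - 1.0 + Sc_q / Sc_p)
x0 = rng.normal(m0, np.sqrt(v0), size=200_000)
kl_closed = C + 0.5 * np.mean((f_p(x0) - f_q(x0)) ** 2) / Sc_p

# direct Monte Carlo estimate of E_q[log q(x_1 | x_0) - log p(x_1 | x_0)]
x1 = f_q(x0) + rng.normal(0.0, np.sqrt(Sc_q), size=x0.size)
log_q = -0.5 * (np.log(2 * np.pi * Sc_q) + (x1 - f_q(x0)) ** 2 / Sc_q)
log_p = -0.5 * (np.log(2 * np.pi * Sc_p) + (x1 - f_p(x0)) ** 2 / Sc_p)
kl_mc = np.mean(log_q - log_p)

print(kl_closed, kl_mc)    # the two values should agree up to Monte Carlo noise
\end{verbatim}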
§ METHOD DETAILS
In this section we discuss the details about the proposed method . In <ref>, we discuss the structure of the optimal posterior for the DP model. Then, in <ref>, we exploit this structure and derive the update rules for .
§.§ Structure of the optimal posterior
The ELBO in <ref> can be understood as inference in the following generative model: the Gaussian process prior b, the observations and an additional likelihood whose expected logarithm is given by e(q,p,b).
For such a generative model, extending the results of <cit.> to the continuous setting, the optimal variational Gaussian process q^* can be shown to have the following structure:
q^*() ∝ b() exp(⟨ϕ_c(^*)+ ^*, 𝖳()⟩)
where ^* are sparse in time and depend on observations whereas ^* are dense. They are defined via the self-consistency equations
^* = ϕ_c^-1(∇_𝔼_q^*log p(|)) ,
^* = ∇_𝔼_q^* e(q[], p, b),
and the vector 𝖳() = (_t, _t(_t + _t)^⊤) is the sufficient statistics of a Gaussian distribution and (t)=_q[𝖳(_t)] are the expectation parameters that can be efficiently computed via smoothing.
and are often referred to as sites in the context of the expectation propagation algorithm.
The self-consistency equations of the optimal variational process can be obtained via the first order optimality condition as in <ref>.
In , we parameterize the natural parameters of the variational process as
_q = _b + ϕ_c() + ,
where _q and _b are the natural parameters of process q and b respectively.
§.§ Derivation of the update rule
Starting from the ELBO in <ref>:
(q) = 𝔼_q[ log p(|) ] + (qb - qp)_e(q, p, b) - qb,
we parameterize the variational posterior q via its natural parameters as _q = _b + ϕ_c() +.
The mirror descent objective with step-size ρ is
^(k+1) = max_ ⟨∇_ L()|_^(k), ⟩ - 1/ρq(; )q(; ^(k)) .
The gradient of the ELBO with respect to the expectation parameter can be written as
∇_ L()|_^(k) =
∇_𝔼_q[ log p(|) ]|_^(k)
+ ∇_e(q,p,b)|_^(k)
- (ϕ_c(^(k)) + ^(k)) ,
where we have used the property ∇_qb = _q - _b = ϕ_c() +.
The solution of the convex inner-maximization problem can be obtained by solving the first order optimality conditions
_q-^(k)_q = ρ∇_ L()|_^(k) ,
ϕ_c() + = ρ∇_𝔼_q[ log p(|) ]|_^(k)
+ ρ∇_e(q,p,b)|_^(k)
+ (1-ρ)( ϕ_c(^(k)) + ^(k) ).
We distribute the two gradient terms to get the updates
^(k+1) = (1-ρ)^(k)+ ρ ϕ_c^-1( ∇_𝔼_q[ log p(|) ]|_^(k) ),
^(k+1) = (1-ρ)^(k) + ρ ∇_e(q,p,b)|_^(k) .
The update rule for can be further simplified by using the property
∇_e(q,p,b)|_^(k) = ∇_(qb - qp)|_^(k)
= ϕ_c(^(k)) + ^(k) - ∇_qp|_^(k).
Therefore, the update can be equivalently written as
^(k+1) = ^(k) + ρ ( ϕ_c(^(k)) - ∇_qp|_^(k)) .
§.§ Learning of model parameters
We denote the sum of the site contributions to the variational posterior by =ϕ_c( )+.
For the learning of the DP parameters θ, the ELBO provides an objective which can be used as a proxy for the marginal likelihood: ℓ(θ) = (_q^*(θ), θ). In practice, it is possible to obtain ^*(θ_0) for a given θ_0, but the function ^*(θ) is intractable and therefore coordinate ascent is performed on (_q, θ). As discussed in <cit.>, in the GP prior and Gaussian observation case, it is provably better to perform coordinate ascent in the (, θ) coordinates on the function l(, ) = (_b(θ) + ^*, θ), as opposed to coordinate ascent in (_q, θ) on the function (, θ). In this setting, we recover the marginal likelihood log p() = (_b(θ) + ^*, θ) > (_q^*(θ_0), θ), ∀θ_0. In practice, this advantage further extends to more challenging observation models.
The objective we use for learning can be expressed as:
(_b() + ^*_(), )
= _q_()()log∏_i=1^n p(y_i |_i) p_() /1/𝒵()∏_i=1^n t_i^*(_i) b_()
= log𝒵_t() +
_q_()()log p_() /b_()_w()
+
_i=1^n _q_()()logp(y_i |_i)/t_i^*(_i)_c(),
where log𝒵_t() is the log-partition of ∏_i=1^n t_i^*(_i) b_(). It is the log marginal likelihood of a generative model where the prior is the linear DP, b_θ(), and the observation likelihoods are given by the Gaussian sites. Hence, this is classic Gaussian process regression. The term c() captures a notion of mismatch between the true likelihood terms and their Gaussian approximation via sites, which is 0 in the setting of a Gaussian observation model. The term w() is related to the error introduced by the linearization of the non-linear drift of the prior. This error is 0 when the prior is a linear DP.
§ MONTE CARLO BASELINES
In the main paper, we compare to baseline `ground-truth' solutions both for inferring the latent processes and for parameter learning targets. For inference, the problem falls under sequential Monte Carlo (particle smoothing) methods, and for estimating the approximate marginal likelihood we employ annealed importance sampling (AIS). In practice, we could use any simulation methods for the baseline, but we settle for the following setup due to its robustness and fast execution.
§.§ Sequential Monte Carlo (SMC) baseline
We use a sequential Monte Carlo approach in the form of particle smoothing through conditional particle filtering with ancestor sampling. The posterior (smoothing) distribution p(|) cannot be computed in closed form in the general case due to the non-linear nature of the models. We employ the approach of <cit.>, where the smoothing distribution p(|) is approximated by generating K (correlated) samples using sequential Monte Carlo. Each iteration of the MCMC algorithm uses a conditional particle filter with ancestor sampling. This approach has been shown to avoid the problem of particle degeneracy typically occurring in particle filters <cit.>.
As our setting is continuous-discrete (continuous-time prior with observations discretely spread over the time-horizon), we choose to use an Euler–Maruyama scheme with time-step size Δ t for solving the SDE prior for discrete-time steps.
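The conditional particle filter with ancestor sampling itself is more involved, but the simplified bootstrap particle filter sketched below illustrates the continuous-discrete setting: particles are propagated with Euler–Maruyama between observation times and reweighted and resampled at each observation. The double-well drift, noise levels, and particle count are illustrative assumptions, and this sketch is not the CPF-AS baseline used in the experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: 4.0 * x * (1.0 - x ** 2)    # assumed prior drift (double-well)
Qc, sig2 = 1.0, 0.01                      # diffusion spectral density, observation noise
dt, inner_steps = 0.01, 50                # Euler-Maruyama grid between observations

def bootstrap_pf(y, n_particles=500, x0_mean=1.0, x0_std=0.1):
    """Simplified bootstrap particle filter for the continuous-discrete model."""
    x = rng.normal(x0_mean, x0_std, size=n_particles)
    log_ml, filt_means = 0.0, []
    for y_k in y:
        for _ in range(inner_steps):      # propagate the prior SDE with Euler-Maruyama
            x = x + f(x) * dt + np.sqrt(Qc * dt) * rng.normal(size=n_particles)
        logw = -0.5 * (np.log(2 * np.pi * sig2) + (y_k - x) ** 2 / sig2)
        c = logw.max()
        log_ml += c + np.log(np.mean(np.exp(logw - c)))    # incremental evidence
        w = np.exp(logw - c); w /= w.sum()
        filt_means.append(np.sum(w * x))
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # multinomial resampling
    return np.array(filt_means), log_ml

# simulate observations from the same model and run the filter
x_true, y = 1.0, []
for _ in range(40):
    for _ in range(inner_steps):
        x_true = x_true + f(x_true) * dt + np.sqrt(Qc * dt) * rng.normal()
    y.append(x_true + np.sqrt(sig2) * rng.normal())
filt_means, log_ml = bootstrap_pf(np.array(y))
\end{verbatim}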
§.§ Annealed importance sampling (AIS)
As an illustrative ground-truth for the parameter learning target (marginal likelihood), we use a sampling approach as the baseline. We use annealed importance sampling (AIS, <cit.>) which is similar to that used for Gaussian processes with non-conjugate likelihoods in <cit.> and <cit.>. The setup defines a sequence of j=0,1,…,J steps
Z_j= ∫ p(|; θ)^τ(j) p(; θ),
where τ(j)=(j/J)^4 (such that τ(0)=0 and τ(J)=1). The marginal likelihood can be rewritten as
p(;θ)=Z_J/Z_0=Z_J/Z_J-1Z_J-1/Z_J-2⋯Z_1/Z_0,
where Z_j/Z_j-1 is approximated by importance sampling using samples from q_j() ∝ p(| ; θ)^τ(j-1) p(; θ):
Z_j/Z_j-1 = ∫ p(| ; θ)^τ(j) p(; θ) /Z_j-1
= ∫p(| ; θ)^τ(j)/p(| ; θ)^τ(j-1)p(| ; θ)^τ(j-1) p(; θ) /Z_j-1
≈1/S∑_s=1^S p(|_j^(s); θ)^τ(j)-τ(j-1), where _j^(s)∼p(| ; θ)^τ(j-1) p(; θ) /Z_j-1.
The difference to <cit.> and <cit.> is that here sampling _j from p(| ; θ)^τ(j-1) p(; θ) / Z_j-1 is non-trivial due to the non-linear/non-Gaussian nature of the prior. Here we use sequential Monte Carlo in the form of particle smoothing through conditional particle filtering with ancestor sampling (, the method presented in <ref>) to draw those samples. For each process _j sampled from the SMC approach we only use the 10th sample (discarding the preceding ones as burn-in) and otherwise follow the setup described in <ref>.
By using a single sample S=1 and a large number of steps J, the estimation of the log marginal likelihood can be written as
log p(; θ) = ∑_j=1^JlogZ_j/Z_j-1≈∑_j=1^J (τ(j)-τ(j-1))log p(|_j; θ).
Following <cit.>, we set J=8000 and combine three estimates of log marginal likelihood by their geometric mean.
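To make the estimator concrete, the sketch below applies the same annealing identity to a toy conjugate Gaussian model in which the tempered posteriors p(|; θ)^τ(j-1) p(; θ) can be sampled exactly and the true log marginal likelihood is known; it illustrates the accumulation in the equation above, not the SMC-based sampler used for the DP models.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

v0, s, y = 1.0, 0.25, 1.3         # toy prior variance, likelihood variance, observation
J = 8000
tau = lambda j: (j / J) ** 4

def sample_tempered_posterior(t):
    """Exact sample from p(y | x)^t p(x) / Z_t for the conjugate Gaussian toy model."""
    prec = 1.0 / v0 + t / s
    mean = (t * y / s) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec))

log_lik = lambda x: -0.5 * (np.log(2 * np.pi * s) + (y - x) ** 2 / s)

# AIS accumulation with a single sample per annealing step
log_Z = 0.0
for j in range(1, J + 1):
    x_j = sample_tempered_posterior(tau(j - 1))
    log_Z += (tau(j) - tau(j - 1)) * log_lik(x_j)

true_log_Z = -0.5 * (np.log(2 * np.pi * (v0 + s)) + y ** 2 / (v0 + s))
print(log_Z, true_log_Z)          # should agree up to Monte Carlo noise
\end{verbatim}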
§ DETAILS ON EXPERIMENTS
In this section, we provide details for the experiments which were included in <ref>. We start by explaining the setup and details about the synthetic tasks (<ref>) for each diffusion process individually in <ref>. We then cover the details about the experiments on real-world finance data and vehicle tracking (<ref>) in <ref> and <ref>. We perform a grid search in all the experiments to find the best learning rate and other hyperparameters for all the methods.
§.§ Synthetic tasks: Inference and learning
We consider five diffusion processes to compare the performance of and . For both inference and learning, 5-fold cross-validation is performed. We discuss each diffusion process and the experimental setup in detail below.
Ornstein–Uhlenbeck We start with the (linear) Ornstein–Uhlenbeck diffusion process as a sanity check where the exact posterior and the exact log-likelihood are available in closed-form. It is defined as
x_t = -θ x_t t + β_t,
where β_t is standard Brownian motion (Q_c=1).
For the experiment, we set θ=0.5, x_0=1, t_0=0, t_1=10 and randomly observe 40 observations under the Gaussian likelihood with σ^2=0.01. To simulate the process, Euler–Maruyama is used with 0.01 step-size.
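For reference, a minimal sketch of generating such a data set (Euler–Maruyama simulation of the OU process and 40 noisy observations at random times) is shown below; the random seed and the way observation times are drawn are assumptions, not the exact script used for the experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

theta, Qc = 0.5, 1.0
x0, t0, t1, dt = 1.0, 0.0, 10.0, 0.01
sig2, n_obs = 0.01, 40

# Euler-Maruyama simulation of dx_t = -theta * x_t dt + d(beta_t)
ts = np.arange(t0, t1 + dt, dt)
x = np.empty_like(ts); x[0] = x0
for k in range(len(ts) - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + np.sqrt(Qc * dt) * rng.normal()

# pick 40 observation times at random and add Gaussian observation noise
obs_idx = np.sort(rng.choice(len(ts), size=n_obs, replace=False))
t_obs = ts[obs_idx]
y_obs = x[obs_idx] + np.sqrt(sig2) * rng.normal(size=n_obs)
\end{verbatim}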
For inference, both the methods (and ) have an Ornstein–Uhlenbeck DP as prior with θ=1.2 and Q_c=1.0. To get the (exact) posterior and the (exact) log-marginal likelihood, we use a Gaussian process regression (GPR) model with a matched Matérn-1/2 kernel.
In the linear DP setup, does a single-step update with ρ=1 for all values of the discretization grid (Δ t = {0.01, 0.005, 0.001}) (theoretical reason of the single-step update is discussed in <ref>). For , we perform a grid-search over various learning-rate ω and use the best-performing one: 1.0 for Δ t=0.001, 0.5 for Δ t=0.005, and 0.1 for Δ t=0.01. As all the models converge to the same posterior, the posterior obtained by with Δ t=0.01 is only plotted in <ref> along with the convergence plot for all the methods.
For learning, the same setup as in the inference experiment is used, but the parameter θ of the prior OU (lengthscale and variance of Matérn-1/2 kernel in GPR) is also optimized. For all the methods, θ is initialized to 2.5 (to be off from the expected optima) and the Adam optimizer is used. For the learning rate of Adam optimizer, we perform a grid search and use the best-performing one: 0.1 for and 0.01 for .
We experiment with a commonly used non-linear diffusion process, the DP, whose marginal state distributions are bimodal and mode-switching in sample state trajectories becomes increasingly unlikely with time (<ref>). It is defined as:
x_t = θtanh(x_t) t + β_t,
where β_t is standard Brownian motion (Q_c=1).
For the experiment, we set θ=1.0, x_0=0, t_0=0, t_1=8 and randomly observe 40 observations under the Gaussian likelihood with σ^2=0.01. To simulate the process, Euler–Maruyama is used with 0.01 step-size.
For inference, both the methods (and ) have a DP as prior with θ=1.0 and Q_c=1.0.
After performing a grid search over the learning rate for both the methods, we use the best performing one: ρ=1 in for all discretization grid (Δ t= {0.01, 0.005, 0.001}), and ω=0.1 for Δ t=0.001, ω=0.001 for Δ t={0.005, 0.01} for . The posterior of both methods and the convergence plot are shown in <ref>. From the plot, it can be observed that the posterior obtained by both methods is identical. However, converges faster than .
For learning, the same setup as in the inference experiment is used, and now the parameter θ of the prior DP is also optimized. For both methods, θ is initialized to 3 (to be off from the expected optima), and Adam optimizer is used. For the learning rate of Adam optimizer, we perform a grid search and use the best-performing one: 0.1 for and 0.01 for .
Double-well We experiment with the non-linear diffusion process, the double-well (DW) DP, whose marginal state distributions have two modes that sample state trajectories keep visiting through time (<ref>). It is defined as:
x_t = θ_0 x_t (θ_1 - x_t^2) t + β_t ,
where β_t is standard Brownian motion (Q_c=1).
For the experiment, we set θ_0=4.0, θ_1=1.0, x_0=1.0, t_0=0, t_1=20 and randomly observe 40 observations under the Gaussian likelihood with σ^2=0.01. To simulate the process, Euler–Maruyama is used with 0.01 step-size.
For inference, both the methods (and ) have a double-well DP as prior with θ_0=4.0, θ_1=1.0 and Q_c=1.0.
After performing a grid search over the learning rate for both the methods, we use the best performing one: ρ=0.5 for and ω=0.001 for for all discretization grid (Δ t={0.01, 0.005, 0.001}). The posterior of both methods and the convergence plot are shown in <ref>. The plot shows that struggles with convergence issues while converges faster and to a better ELBO value. With optimization tricks and more iterations, is expected to reach the same posterior as .
For learning, the same setup as in the inference experiment is used, and now the parameter θ_1 of the prior DW DP is also optimized. For both the methods, θ_1 is initialized to 0.0 (to be off from the expected optima), and Adam optimizer is used. For the learning rate of Adam optimizer, we perform a grid search and use the best-performing one: 0.1 for and 0.01 for . <ref> showcases the fast learning of θ_1 in as compared to .
Sine We experiment with the non-linear diffusion process, Sine DP, whose marginal state distributions have many modes (<ref>). It is defined as:
x_t = θ_0 sin(x_t - θ_1) t + β_t,
where β_t is standard Brownian motion (Q_c=1).
For the experiment, we set θ_0=1.0, θ_1=0.0, x_0=0, t_0=0, t_1=10 and randomly observe 40 observations under the Gaussian likelihood with σ^2=0.01. To simulate the process, Euler–Maruyama is used with 0.01 step-size.
For inference, both the models (and ) have a Sine DP as prior with θ_0=1.0, θ_1=0.0, and Q_c=1.0.
After performing a grid search over the learning rate for both the methods, we use the best performing one: ρ=1.0 for and ω=10^-5 for for all discretization grid (Δ t = {0.01, 0.005, 0.001}). The posterior and convergence plot of both methods is shown in <ref>. From the plot, it can be observed that the posterior obtained by both methods is identical. However, converges faster than . For , in the experiment, we set the maximum iterations to 1000. However, with more iterations and optimization tricks, is expected to reach the same ELBO value as . Empirically, we found that suffers from slow convergence and takes ∼7000 iterations to lead to a value closer to .
For learning, the same setup as in the inference experiment is used, and now the parameter θ_1 of the prior Sine DP is also optimized. For both methods, θ_1 is initialized to 2.0 (to be off from the expected optima), and Adam optimizer is used. For the learning rate of Adam optimizer, we perform a grid search and use the best-performing one: 0.01 for both and . While learning, after performing a grid search, the best-performing learning rate for was ρ=0.1 while for , it was the same as in inference, ω=10^-5.
Square-root We experiment with the non-linear diffusion process, Square-root DP, that has divergent fat-tailed behaviour (<ref>). It is defined as:
x_t = √(θ |x_t|) t + β_t,
where β_t is the standard Brownian motion (Q_c=1).
For the experiment, we set θ=1.0, x_0=0.0, t_0=0, t_1=10 and randomly observe 40 observations under the Gaussian likelihood with σ^2=0.01. To simulate the process, Euler–Maruyama is used with 0.01 step-size.
For inference, both the models (and ) have a Square-root DP as prior with θ=1.0 and Q_c=1.0.
After performing a grid search over the learning rate for both the methods, we use the best performing one: ρ=1.0 for and ω=10^-5 for for all discretization grid (Δ t = {0.01, 0.005, 0.001}). The posterior and convergence plot of both methods is shown in <ref>. From the plot, it can be observed that the posterior obtained by both methods is identical. However, converges faster than . For , in the experiment, we set the maximum iterations to 1000. However, with more iterations and optimization tricks, is expected to reach the same ELBO value as . Empirically, we found that suffers from slow convergence and takes ∼7000 iterations to lead to a value closer to .
For learning, the same setup as in the inference experiment is used, and now the parameter θ of the prior Sine DP is also optimized. For both methods, θ is initialized to 5.0 (to be off from the expected optima), and Adam optimizer is used. For the learning rate of Adam optimizer, we perform a grid search and use the best-performing one: 0.01 for both and . While learning, after performing a grid search, the best-performing learning rate for was ρ=0.5 while for , it was the same as in inference, ω=10^-5.
§.§ Finance data
While the main focus is on providing a better method and algorithm for the particular case defined by (and we thus benchmark primarily against it), we also seek to showcase the practical applicability of our approach. To showcase the capability of the proposed method in the real world, we experiment with a finance data set originally proposed for Student-t processes in <cit.>. The data set considers the (log) stock price of Apple Inc. and consists of 8537 trading days (setup follows <cit.>).
This experiment aims to learn the underlying process of the stock price; performance is measured in terms of negative log predictive density (NLPD) on the held-out test set. We bin the data into five bins of ∼1707 trading days each and model the log stock price.
We experiment with three models; each model has different prior information incorporated in it. Gaussian likelihood is used with σ^2=0.25 for all the models, which is not optimized.
Following <cit.>, we consider a GP baseline: sparse variational Gaussian process (SVGP) with 500 inducing points <cit.> in which prior information is incorporated in terms of the sum of kernels (Const.+Lin.++). The prior is structured as in <cit.> to capture a linear trend, a slowly moving smooth trend component, and a faster more volatile component.
The GP baseline gives NLPD 1.44 ± 0.70 / RMSE 0.91 ± 0.54. All the kernels are initialized with unit variance and lengthscale, and Adam optimizer is used with a learning rate of 0.1. The inducing variables are spread uniformly over the time grid and not optimized.
Next, we experiment with with a linear OU DP process in which prior is incorporated in terms of the OU process, which is initialized with θ=1.0, Q_c=0.1. After evaluating the data, the prior on the initial state is set to (-1, 0.1). The learning rate ρ is set to 1.0 and the prior DP parameter is optimized using Adam optimizer with a 0.01 learning rate. The model gives NLPD 1.08 ± 0.45 / RMSE 0.77 ± 0.41.
Finally, to give more flexibility, we experiment with with a neural network drift (NN) DP. The drift of the prior DP f_p is initialized to be a NN with one hidden layer with three nodes followed by a ReLU activation function, and Q_c is set to 0.1. The parameters of the NN are initialized from a unit Gaussian, and after evaluating the data, prior on the initial state is set to (-1, 0.1). The learning rate ρ is set to 0.5 and the prior DP parameter is optimized using Adam optimizer with 10^-3. The model gives NLPD 0.81 ± 0.08 / RMSE 0.51 ±0.08.
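For concreteness, a drift of the form used here (one hidden layer with three units and a ReLU nonlinearity) can be written as the small numpy function below; the random initialization is an assumption, and in the experiments the parameters are learned jointly with the variational posterior.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# one-hidden-layer neural-network drift f_p(x) with 3 hidden units and ReLU
W1, b1 = rng.normal(size=(3, 1)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def nn_drift(x):
    h = np.maximum(0.0, W1 @ np.atleast_1d(x) + b1)   # ReLU hidden layer
    return (W2 @ h + b2)[0]

print(nn_drift(0.3))
\end{verbatim}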
Of all the models, with a NN as drift results in the best NLPD / RMSE value as it is the most flexible and can adapt to the non-stationary behaviour of the data over the state range. To show the quality of the learnt drift, we simulate and plot predictions from the learnt prior DP for future years (<ref>).
§.§ GPS tracking data
[Figure: GPS tracking data with the inferred posterior path overlaid on a map background (fig/gps-data-posterior-000.png); axes in km, roughly -5 km to 5 km horizontally and -10 km to 10 km vertically.]
0.22300760194295 6.77437007403922
0.221486116850241 6.79892369851461
0.220073956668009 6.82357841941592
0.218795617605739 6.84807878926721
0.217653015826613 6.87276827302393
0.216640731476775 6.89768448224446
0.215758371762875 6.92283579390125
0.215028749414132 6.94796628936317
0.214451919886471 6.9731543826033
0.214040003901733 6.99845444796627
0.21378604976588 7.02393980010435
0.213694370171986 7.04966442307631
0.213779611139923 7.0753849008977
0.214055907182116 7.10112741672187
0.214507724728599 7.12695567143124
0.215118788253155 7.15291407023559
0.215882649020509 7.17874806808677
0.216804933383085 7.20448615247474
0.217884371282156 7.23012369488349
0.219117569224697 7.25567284152253
0.220495124769763 7.28113079883157
0.221985144290824 7.30615836662132
0.223605888696561 7.33103616717916
0.225335949048537 7.35573229339725
0.22715545776353 7.38028547446024
0.229031193597157 7.40445686003718
0.23095252378935 7.42834845176786
0.232932960922862 7.45204869800054
0.234964311802032 7.4757150634101
0.237016978013416 7.49917829028259
0.239079822937264 7.52255572717208
0.241143396878018 7.54589955293569
0.243213292318594 7.56923934698601
0.245281228198792 7.59263779642545
0.247353522555271 7.61609993680051
0.249437214794183 7.63964158570885
0.251542892432631 7.6632582212689
0.253668209928216 7.68695800152061
0.25584392355813 7.71045907384446
0.258094224723766 7.73371626790611
0.260425823896392 7.75699791650314
0.26285891534591 7.78025550984274
0.265445448586467 7.80343646846472
0.268247892864271 7.82650904125583
0.271347941179677 7.84945693848046
0.274846792519925 7.87231033015526
0.278805656207407 7.89484692024151
0.283315600740673 7.9171286702309
0.28846787963618 7.93920822733782
0.294337556380951 7.96115471226362
0.300904751769305 7.98279512363524
0.30818733035788 8.00417496988414
0.316185828975212 8.0253463857811
0.324881869083312 8.04634313046697
0.33426664650002 8.06720316897908
0.344183587764532 8.08772177763767
0.354595840993678 8.10792364057408
0.365442647450697 8.12783053809179
0.376642583818583 8.14748193521828
0.388005564904308 8.16672203068986
0.399475425491284 8.18564644539685
0.411012855541608 8.20431423221185
0.422576569355397 8.22277489434529
0.434159423034226 8.24107663040455
0.445628426190613 8.2590887337132
0.457013872919736 8.27686801690636
0.468368880153921 8.29455014867726
0.479727094646438 8.31220368776026
0.491000593435171 8.32970653180427
0.502248454338722 8.34713068913649
0.513482162637308 8.36453488843144
0.52474996087804 8.38195755764854
0.536067259655098 8.39942336580608
0.547314764280204 8.41674467544135
0.558508908650606 8.43396278994148
0.56962945055846 8.45108467500319
0.580707715208955 8.46815169301871
0.591611905416266 8.48494253555415
0.602449840100236 8.50169595645321
0.613249059826738 8.51842476728578
0.624043048124943 8.53515453357658
0.634737812946431 8.55177580554584
0.645365207358952 8.56831286007641
0.655976839336523 8.58482635561633
0.666606112923875 8.60135610851043
0.677303354466539 8.61796157709513
0.687980028256328 8.63449539552355
0.698664488868413 8.65100081295625
0.709386442573636 8.66751627965683
0.720198263401422 8.68406505722011
0.731028663286173 8.70050706560825
0.741918222840165 8.71685313315983
0.752884316532488 8.7331485398375
0.763943748910226 8.74937931986986
0.775112238614348 8.76560500313557
0.786271816944087 8.78164779736235
0.797439880945552 8.79751855623484
0.808772067723538 8.81345244229514
0.820282924168936 8.8294925809077
0.831866438375406 8.84546355349222
0.843532455321231 8.86139788070365
0.855306072752407 8.87732184122589
0.867200464941176 8.89325999626995
0.879217400414078 8.9092055104356
0.89119586422696 8.92498385285845
0.903258655944922 8.94078077986196
0.915374324485558 8.95656553869717
0.92736846488873 8.97210265157345
0.939349976836702 8.98757083019214
0.951410838369647 9.00313713121423
0.963535765643024 9.01876561141945
0.975566717167528 9.03424500552591
0.987481831130633 9.0495922433151
0.999408291609943 9.064987683414
1.01133832350289 9.08043407545139
1.02325814351262 9.095942747944
1.0351720686268 9.11153555398813
1.04709992694846 9.12721483381959
1.05888249496752 9.1427947950793
1.07051793679685 9.1583093864333
1.08194717341607 9.1737814647741
1.0931100824093 9.1891929335621
1.10395207301806 9.20448165646307
1.114266695801 9.21937888918269
1.123991325647 9.23376929070774
1.13306493085745 9.24751935250467
1.14151692932125 9.26064065558389
1.14927420825942 9.27297083078257
1.15628094580039 9.28436573804099
1.16252238109952 9.29470884842984
1.16798940918695 9.30392531735671
1.17264410241295 9.31185108600044
1.17651237312499 9.3185091924524
1.17965527727723 9.32396164768736
1.18214610962724 9.32831989866646
1.18406601076155 9.33171302203709
1.18553383393237 9.33430956208839
1.18665980950328 9.33632913452708
1.18758021246714 9.33806269337674
1.18846335073853 9.33980347702638
1.18947138725273 9.34186284750171
1.19073762718283 9.34453847863544
1.1923720226386 9.34799581394636
1.1943474126604 9.35219461603484
1.19661045537897 9.35703426946628
1.1991387716664 9.36245502650175
1.20188779207266 9.36836025099353
1.20483286643455 9.37478593989731
1.20796519991113 9.38172100290492
1.21122444985779 9.38910535838879
1.21460418725098 9.39697183797792
1.21811785438274 9.40541789392838
1.22176801436904 9.41440256639417
1.22556325718876 9.42397597347078
1.22955259158012 9.43428782252338
1.233718124071 9.44529178297404
1.23805453758413 9.45695723041195
1.24253524519117 9.46931475624081
1.2471080000479 9.48222482357303
1.25176871308635 9.49570895487184
1.25654428975792 9.50978442453328
1.26141463806327 9.52439960785905
1.26642475562719 9.53961104518039
1.27160972709937 9.55547479593773
1.27695103978993 9.57181941157511
1.28248894975764 9.58862234686194
1.28824668228027 9.60580341265003
1.29416831474994 9.62300856029535
1.30024144750213 9.64007363294936
1.3065446178305 9.65713173244562
1.31297998278541 9.67389246953854
1.31952096666318 9.69030232572504
1.32617490358923 9.70634979243177
1.33285235026856 9.72186069829758
1.33955017644153 9.73689107705803
1.34626861240374 9.75146635345188
1.35292885711744 9.76548430547192
1.35954142696394 9.77900180082621
1.36610897642522 9.7920755696629
1.37254430145245 9.80461965579017
1.3788537922458 9.81672881064926
1.38503956717262 9.82844111446471
1.3910338799712 9.83969372717712
1.39681983344544 9.85051498428896
1.40236264907223 9.86087715410513
1.40763689825929 9.87074238874658
1.41260582827403 9.8800563687697
1.41723178069248 9.88875600776655
1.42145349413504 9.89672496424694
1.425258433993 9.90392599396078
1.42863126260382 9.9103609010913
1.43153482457006 9.91591880367023
1.43398644803547 9.92062762632574
1.43604431821015 9.92459130225736
1.43775113214292 9.92785524237192
1.43919059348246 9.93059994187552
1.44046825573005 9.9330362888484
1.44171244006587 9.93539058989406
1.44307891697504 9.93792595040512
1.44467768483868 9.94085163903311
1.44654165545669 9.94420643113104
1.448632079706 9.94783141761826
1.45094756161691 9.95164716692958
1.45354031833235 9.95560652967494
1.45642586914426 9.95953460649364
1.45956911293851 9.96306538906387
1.46294448294506 9.96566429015138
1.46661416158776 9.96689248363429
1.47055698606579 9.96626512068014
1.47464847079882 9.96330442331289
1.4787391939569 9.95744197809688
1.48267960776588 9.94793952739105
1.48634482988438 9.93347680948228
1.48943269334798 9.91201279561753
1.49152930565467 9.88035588440635
1.49348675898624 9.85086254110632
1.49530522424235 9.8235260898838
1.49698486289111 9.79834032655036
1.49852582149148 9.77529955005184
1.4999282356539 9.75439854615559
1.50119222906413 9.73563256313304
1.50231791353718 9.71899737062505
1.5033053862575 9.70448917687749
1.50415473539066 9.6921047178559
1.5048660364712 9.68184117451437
1.50543935100144 9.67369622400696
1.50587472926702 9.66766802577779
1.50617221019942 9.66375522337649
1.50633181914516 9.66195690335738
1.50635356907551 9.66227267303788
1.5062374652838 9.66470261220813
1.50598349485261 9.66924726628491
1.50559163625505 9.67590765397984
1.50506185544426 9.68468529408701
1.50439410571161 9.69558217119473
1.50358832800506 9.7086007661006
1.50264445042092 9.72374401260772
1.50156239127604 9.74101533998387
1.50034205578106 9.76041867353639
1.49898333500937 9.78195839556708
1.49748610999317 9.80563939043952
1.49585025008067 9.8314670393808
1.49407560997214 9.85944718750223
1.49216203545545 9.88958615296391
1.49010935728874 9.92189077264304
1.48872677395392 9.94340975274441
1.48778622462574 9.95778321574261
1.48713256403057 9.96737496935373
1.48667445470183 9.97379063410593
1.48633633182462 9.97811549256165
1.4860789071311 9.98104732899333
1.48587620348464 9.98304831377175
1.48571126987876 9.98442313021006
1.48557338811745 9.98540433749022
1.48543981397265 9.9861072535819
1.4853050006065 9.98665076867644
1.48517968246565 9.98712682375128
1.48505967506261 9.98744702101064
1.48494166854834 9.98766552231888
1.48483917160334 9.98781928558864
1.48473527611997 9.98793432278677
1.48461284490117 9.98803009803488
1.48446816852121 9.98812280346594
1.48429386945616 9.98822813051041
1.48406119602642 9.98819497070783
1.48374825430272 9.98818662316179
1.48330342359488 9.9882016944751
1.48263684285909 9.98824272738814
1.48162206938136 9.98831666141008
1.48007522894002 9.98843601136585
1.47780711180792 9.9886209630097
1.47474034209958 9.98873386483312
1.47097904662684 9.98845596014976
1.46666116761117 9.98757130022263
1.46189878261769 9.98559238772085
1.45678010748245 9.98235337824644
1.4515324245011 9.97764424747274
1.44616389438947 9.97149615771334
1.44059680252837 9.96405166220049
1.43478662718151 9.95547046522007
1.42874766309334 9.94587205378605
1.42257187547657 9.9354403538878
1.416246226364 9.92426900881453
1.4097494454968 9.91247856844871
1.40306500328355 9.90010176996442
1.39617839172509 9.88719038353182
1.38910769852703 9.87377337607794
1.38189010632025 9.85994619929659
1.37455505308449 9.84581937343905
1.36721151977477 9.83157097645401
1.3598681692611 9.81717272384666
1.35253369607301 9.80260475312472
1.34521825765115 9.78785228147808
1.33795163836297 9.77288621617248
1.33082113509779 9.75787781891638
1.32385406488383 9.74280537953512
1.3170717290125 9.72768703297381
1.3104929308 9.71256693851808
1.3041370733858 9.69752274043235
1.29804384748563 9.68257736339132
1.29223032104733 9.66775352620827
1.28669375146832 9.6530776210926
1.28142762413178 9.63861798549571
1.2764865209193 9.62464845035969
1.27191268766886 9.61135681075974
1.26775949743503 9.59899486023856
1.2640837595645 9.58785342444382
1.26095509821387 9.57827773914219
1.25836851150993 9.57032041717296
1.25626003885112 9.56381793333238
1.25457863840671 9.55865014091173
1.25327776829873 9.55465238385623
1.25227473992041 9.55165518971308
1.25153599046611 9.54950632775203
1.25100560951671 9.54802812637631
1.25061259468136 9.54705498412665
1.25030860322566 9.54642230180015
1.25007646308991 9.54602304510202
1.24989437041865 9.54575589319735
1.2497487753427 9.54557566440596
1.24963214686489 9.54545187349827
1.249541733428 9.54534668710298
1.24947910730612 9.54524229748844
1.24945042525387 9.5451379425657
1.24945095561828 9.54501598065081
1.24944781317578 9.54488955127767
1.24944047972253 9.54473729315946
1.24942774612058 9.54456720970198
1.2494075110349 9.54438433321872
1.24937643638853 9.54419150285288
1.2493293972898 9.54402368207803
1.24925863396825 9.54386937025317
1.24916895976637 9.54370246714695
1.24907855695185 9.54352852380306
1.24898899703151 9.54331812178291
1.24890199725337 9.54306944658662
1.24880320417549 9.54274044127107
1.24870929590629 9.54230922905445
1.24860477922453 9.54173666847101
1.24847241770111 9.54095968414768
1.24827388939109 9.5398806241184
1.24792698828989 9.53833385494564
1.24734151783725 9.53609151756995
1.24643739064157 9.53296012743473
1.24514789978604 9.52876472409
1.24345817761815 9.52352204166888
1.24140274378875 9.51730814304759
1.23893930596403 9.51005170385146
1.23604070205589 9.50167396945042
1.23270690748927 9.4922105952213
1.22891557068523 9.48171629299138
1.22470075957439 9.47032480028489
1.22001020193361 9.45800117214547
1.21479558327103 9.44475553824157
1.20902106089139 9.43061097701013
1.20273978746885 9.41584393493936
1.19595429541936 9.4004904030598
1.18860043655125 9.38458856591065
1.18065219725428 9.36818517167235
1.17223281853259 9.3515800125434
1.16338782312099 9.33490360680612
1.15415847341062 9.31822372465161
1.14455560769257 9.30153999582449
1.13456142018765 9.28483448231085
1.12427490905619 9.26828830488125
1.1136809090753 9.25185612869022
1.10277947563051 9.23552879211937
1.09158589904888 9.21926419210907
1.08010025775205 9.20304771347721
1.06847228389225 9.18702495976326
1.05672931702344 9.17113776625238
1.04487972644224 9.15536780351202
1.03294726738616 9.13964900898355
1.02092553738381 9.12395775633778
1.00880989875139 9.1082581674773
0.996613192969792 9.09252987022186
0.984308512286029 9.07673073938521
0.97203908320987 9.0610262684603
0.959802271660735 9.04536145836683
0.947569501661069 9.02968803567284
0.935329346850688 9.01395626301271
0.923037843211453 8.99814029307166
0.91081071330809 8.98238591578142
0.898620976549414 8.96662967239665
0.886447821570264 8.95084159329269
0.874274816822797 8.93498635435978
0.862220751459563 8.91920305858076
0.850257513735506 8.90342337271955
0.838371977329166 8.88759647341951
0.826534158932281 8.87168045765695
0.81484229475214 8.85580413946651
0.803267022935952 8.83991726148114
0.791783372400603 8.82396774931929
0.780366653472055 8.80792673538322
0.768986724925723 8.79173299515504
0.757741518107424 8.77556971089943
0.746599508494292 8.75940565166779
0.735531359880089 8.74317566845867
0.724505077534513 8.72682032539757
0.713601089102465 8.71046170569533
0.702783157701351 8.69406932594156
0.691999565527285 8.67759005710918
0.681219096549237 8.66097301693518
0.670531453101783 8.64438046457388
0.659876742040266 8.62777612900049
0.649213683129233 8.61113862514822
0.638516732381643 8.59444094825127
0.627769044031655 8.57764591224866
0.61705089233809 8.56090260526099
0.60631623726758 8.54413234475791
0.595511280147847 8.52730255508936
0.584586432656782 8.51031994558706
0.573516248772602 8.49313291699073
0.562272080565753 8.47568907435905
0.550807780105122 8.457926393176
0.539100115087122 8.43977958587554
0.52726015297808 8.42145623459021
0.515266011973541 8.40289756401794
0.502987236578981 8.38386984632214
0.49039307027071 8.36436410500331
0.477624929384124 8.34456075700403
0.464659892058666 8.32443785722652
0.451486249521607 8.30397009515433
0.438110387106965 8.28312448598565
0.424532015165458 8.26185482355086
0.410942549046279 8.24038100624476
0.397383036182537 8.21868572314085
0.38392592805662 8.19679864150607
0.370663924974575 8.1747169713121
0.357873186195366 8.15272598273697
0.34548003603986 8.13051816566136
0.333583305048984 8.10807024509079
0.322274510228512 8.08531838013103
0.311755977086518 8.06246825868428
0.302055991575405 8.03950624290896
0.293186022010318 8.01641665917608
0.285142644028273 7.99321291440872
0.277936797522132 7.9698891256331
0.271459630670758 7.9461656965334
0.26569163220609 7.92201640118028
0.260576678229212 7.8974274438744
0.256092322178099 7.87269919827566
0.252151560397182 7.84785507541271
0.248651406054078 7.82286509560398
0.245500587659383 7.79770839198158
0.242635916703535 7.77260620264453
0.239963721602109 7.74745110378347
0.23742225268083 7.72216052624846
0.234941587601188 7.69669653592925
0.232478469131004 7.67099186539373
0.230067171329085 7.64527642879782
0.227689209511139 7.61945731218294
0.225348203291801 7.59347475767154
0.223053907785474 7.5675792281886
0.220792313858253 7.54171492619089
0.218569760635337 7.51583135025266
0.216369212688187 7.48990851234733
0.214192212540791 7.46391978852549
0.212045859093614 7.4378611980136
0.209970527569128 7.412037913197
0.207973596065787 7.38644013372844
0.206075447763093 7.36116381790464
0.20432942471226 7.33666334516352
0.202751087455974 7.31322026921977
0.201353959282751 7.29097403015185
0.200211455892191 7.27094227768432
0.199134115687565 7.24973646505748
0.197943280962688 7.22376944786084
0.196441471144597 7.18864869856489
0.195357971286119 7.15645826748952
0.194604304228205 7.12614509132411
0.194131764806442 7.09729464438556
0.193893494603255 7.06935129712129
0.193879830886847 7.04191284454238
0.194073582243724 7.01498346732402
0.194460448296953 6.98836628497136
0.195013386002236 6.96186653519386
0.195701381993796 6.93563029076265
0.196517481429868 6.90952722941226
0.197474312778056 6.88343267313195
0.198577996656221 6.85751055799234
0.199827498460718 6.83164992721517
0.20123104427196 6.80571641925489
0.202762708609652 6.77990121302615
0.204416155788337 6.75409453493559
0.206190317075077 6.72820495200198
0.208042796656846 6.70246485316264
0.209944181040387 6.67677717493318
0.211874825584675 6.65107059609737
0.213818280926977 6.62560847608538
0.215791726160153 6.60003668785114
0.217785810150066 6.57432041994666
0.219794601941953 6.54841731505547
0.221814611823581 6.5225743732092
0.223844209316099 6.49672780568257
0.225885014279649 6.47079631398497
0.227953765534919 6.44473492343151
0.230056917713695 6.4184935670632
0.232161855143288 6.39229582368124
0.234281032554918 6.36606856829408
0.236414341071086 6.33975056570859
0.238549092036495 6.31328213845448
0.240657910702319 6.28691602096326
0.242738971951005 6.26058438820223
0.24478584474707 6.23422524697541
0.246786430901458 6.20777196716249
0.248735879032198 6.18114197509849
0.250574427826492 6.15459446014983
0.252286937503283 6.12806470371804
0.253852288691679 6.10149098756452
0.255254808292699 6.07480415571623
0.25646674194002 6.04825379389843
0.257475161207577 6.02177469548685
0.258278159841206 5.99527992489486
0.258856344954246 5.96873058023163
0.259199475585549 5.94206162313424
0.259273252613507 5.91555944454944
0.259058405409184 5.88923456289271
0.258550809885592 5.86309370542152
0.257742568575408 5.83678618428198
0.256652202556439 5.81062574431391
0.255312800358915 5.7846131360445
0.253746001086906 5.75867276335424
0.251952312614493 5.73275814571306
0.249955975750593 5.70709744954006
0.247731039583003 5.68160702179475
0.245274998828037 5.65624893141055
0.242547022667988 5.63065285777555
0.239583891850682 5.60510084334545
0.23638338823989 5.57952764124358
0.23295036327552 5.55391509739514
0.229295878357518 5.52822149318632
0.225440698000143 5.50242519800954
0.221463506165114 5.476841960201
0.217392906894054 5.45146881739552
0.213195582571419 5.42596669612607
0.208893479926428 5.4003294629703
0.204567445039469 5.37479841042008
0.200234662288452 5.34931183366077
0.195894613514523 5.32386623148079
0.19156049990444 5.29844812299504
0.18724816433184 5.27303180389551
0.182975403047552 5.2475749731366
0.178751652166703 5.2220453710372
0.174579551989071 5.1963477435826
0.170516737331759 5.17074696417711
0.166577612694115 5.14518641872294
0.162765586201803 5.11959940197008
0.159075299037551 5.09396542640847
0.155506656806397 5.06818847948081
0.152094658475784 5.04248621914794
0.148838929466868 5.0168017666865
0.145748426453842 4.99109814611566
0.142816392145869 4.96531823882269
0.14007561560594 4.93971299344354
0.13752924956252 4.91427573152919
0.135091635742589 4.88869030905808
0.132725000280148 4.86295339793722
0.130466351689658 4.83735700439751
0.128337441101849 4.81189593395407
0.126321840368369 4.78625003322888
0.124405326993261 4.76038884184212
0.122591772466039 4.73456645906289
0.12087061367356 4.70872256559371
0.119231686736188 4.68281011125409
0.117663542070404 4.65677044288773
0.116134960016157 4.63055717059008
0.114636205945592 4.60441551048181
0.113149228264937 4.57828182038044
0.111656268974977 4.55212759521953
0.110138576209222 4.52585328943632
0.108619728764861 4.49972757545714
0.107076683777259 4.47365572834562
0.105498966750642 4.44758591304365
0.103868695506873 4.42144975710323
0.1021609311059 4.39515075999918
0.100351148375031 4.36893653249876
0.0984144880073686 4.34274804851598
0.0963360572081917 4.31651375332177
0.0941227207271544 4.29018811802263
0.0917742304810739 4.26371017368696
0.0893291096538891 4.23733104723396
0.0867639240149425 4.21101449109956
0.0840812648438048 4.18470107675169
0.0812794945114831 4.15833189260983
0.0783829896980753 4.13217646368614
0.0754004193427699 4.10616881788758
0.0723263416606724 4.08025109477259
0.0691706549948032 4.05436374190957
0.0659449788876816 4.02841857228121
0.0626950894796051 4.00263858394319
0.0594169851571647 3.97698752188513
0.0561172366925209 3.95140027378868
0.0528140823216365 3.92582251680277
0.0495558725452272 3.90047182515932
0.0463524826721849 3.87526631957464
0.0432228727977833 3.85013178864008
0.0401829885908505 3.82498912925595
0.0372483939684395 3.79975786828608
0.0344672073074375 3.77466351026331
0.0318214524481989 3.74963374797948
0.0293308682430138 3.72457342294396
0.0270256700675466 3.69941598509669
0.0249210525784631 3.67407845523601
0.0230500472839337 3.6487852605486
0.0214079881566085 3.6234980339236
0.0199824058840675 3.59814562998404
0.0187662225982638 3.57264588017859
0.0177869174934264 3.54722955700328
0.0170502295217798 3.52180367357232
0.0165562013572543 3.49630742141609
0.0162993516810336 3.47068497834848
0.0162677618057442 3.44487606542632
0.0164561931477637 3.4191267290347
0.0168347064078459 3.39337212245175
0.0173896302280007 3.36754650621456
0.0181060339917006 3.34162280604778
0.0189650892570959 3.31586143786605
0.0199717906224305 3.2901955191746
0.0211401987166736 3.26457431241796
0.0224654218432893 3.23893775244882
0.0239378951574645 3.21320627907939
0.0255267084420262 3.18762214706733
0.0272202436737608 3.16214844980818
0.0289479736228318 3.13671628995037
0.0307058949618147 3.11124689878901
0.0324951791587483 3.08592550243713
0.0342916863919143 3.06070896238761
0.0360875659118234 3.03552119991718
0.037875026889155 3.01027410652404
0.039660139030933 2.98490333412446
0.0414331600053372 2.95964457461691
0.0432282741744569 2.93441460094894
0.0450375663988392 2.90911817408092
0.0469012348012952 2.88371637714336
0.0488074769327574 2.85840587255581
0.0507513832031201 2.83307778771098
0.0527342894429956 2.80762028341562
0.0547182313070118 2.78223748459544
0.0566808421052063 2.75684208739166
0.0585960482543235 2.73134464842068
0.0603992379139214 2.70594253983427
0.0620685795268699 2.68054518538441
0.0635598759985523 2.65511348916722
0.0648453381815545 2.62958565702861
0.0658782996974704 2.60388363595703
0.0666161531707848 2.57823775906666
0.0669978616101014 2.55255000255898
0.066964512309694 2.52676593659829
0.066525834640832 2.50081483684244
0.0657007761509292 2.4749862708522
0.0644594707893588 2.44921887969111
0.0627947218891918 2.42347854839565
0.0607055548602773 2.39770195218335
0.0581817993645886 2.37185342045657
0.0552329139893453 2.34623986885242
0.0518436254433851 2.32083630914775
0.0479715564147016 2.29553502363858
0.043585804499375 2.27022870985421
0.0387080752404407 2.24509638850994
0.0332935795740171 2.21994108433073
0.0272697009515985 2.19454503365599
0.0205087016355459 2.16863286067005
0.0128216247366408 2.14182499002328
0.00400356922020913 2.11386061779563
-0.00490798425040914 2.08630025495759
-0.0139131501776333 2.05913765511236
-0.023012046711827 2.03236666805633
-0.0322047892922579 2.0059812240458
-0.0414914943223131 1.97997534866907
-0.0508722901846599 1.95434314718686
-0.0603471962469729 1.92907881455558
-0.0699163003278934 1.9041766250556
-0.0795797212666883 1.8796309406842
-0.0893366512023193 1.855436195691
-0.0991869759785477 1.83158691021988
-0.10913076227443 1.80807768129431
-0.119168163932862 1.78490318452542
-0.129299603701553 1.76205816858689
-0.139525165945836 1.7395374566891
-0.149845035434739 1.71733594700483
-0.160251045953564 1.69544860906288
-0.170742646066095 1.6738704845994
-0.181319904755661 1.65259668455022
-0.191982746702478 1.63162238906937
-0.202731218028498 1.61094284778269
-0.213565349587575 1.59055337479654
-0.224484882668401 1.57044934945935
-0.235490307112979 1.55062621910828
-0.246581493677559 1.53107949132855
-0.257758275266042 1.51180473934071
-0.269020403021488 1.49279759370409
-0.28036748107263 1.47405374952165
-0.291798910731748 1.45556895973964
-0.303313940882985 1.4373390390157
-0.314848379701253 1.41776159620626
-0.326416352907181 1.39729212552691
-0.338015577684038 1.37623522321762
-0.349643350120247 1.35479611790161
-0.361315474911284 1.33311538406048
-0.372904152754673 1.31156301113741
-0.384424663134762 1.29007459485194
-0.396023325791937 1.26837692969803
-0.407691191556067 1.24639792319036
-0.419311567636718 1.22437264516939
-0.430953118518446 1.20227494337207
-0.442589841106989 1.18010019873927
-0.454265511940482 1.15783076139114
-0.465912740396229 1.13568635941219
-0.477510689457501 1.1135869030079
-0.489079321594597 1.09151058388877
-0.500624201714471 1.06945640161457
-0.512127011764491 1.04739331142472
-0.523476388822093 1.02554216242166
-0.534710350497527 1.00383868487699
-0.545962340197262 0.98200708163042
-0.557234054075634 0.959986388352724
-0.568394969805871 0.938004625196012
-0.579447043263607 0.916044689700834
-0.590390183895657 0.894062761229181
-0.601221703064701 0.872009593624002
-0.611934510793043 0.849825582490539
-0.622410843398083 0.827678839680092
-0.63264626293916 0.805522771591423
-0.64276556419087 0.783028924837979
-0.652643612404025 0.760462331049479
-0.66228453309666 0.73779326291758
-0.67171308856132 0.715008384802064
-0.68095025794185 0.692048545878899
-0.690001628298396 0.668844189470203
-0.698762477204076 0.645644002613185
-0.707296136646696 0.622415741610856
-0.715612572475659 0.599050779622724
-0.72371714795727 0.57555663256377
-0.731672066805354 0.551896555443535
-0.739435309663486 0.528333643460316
-0.747041908799457 0.50483701280453
-0.754594720911285 0.481112014076514
-0.762000870475684 0.457443951835028
-0.769267256236565 0.433849763433962
-0.776500019325004 0.410050853608266
-0.78372228330742 0.386040747129365
-0.79086291017717 0.362120624807457
-0.797914386608358 0.338265505020189
-0.804839054487616 0.314306463585632
-0.811670416968473 0.290118853904565
-0.818458226596772 0.265663135752344
-0.825073369109387 0.241073148056375
-0.831574615646199 0.216303962734676
-0.838018390963843 0.191375684153835
-0.844511992530686 0.166311895997466
-0.851172778613303 0.141082026068517
-0.858035253128487 0.115937797480535
-0.865309122696189 0.0908047421371232
-0.873273328751521 0.0656104368329611
-0.880993847219444 0.0404143332143294
-0.888471526593193 0.0152236098538673
-0.895707192272181 -0.00996847220772649
-0.902701642021625 -0.035159833783751
-0.909455645877475 -0.0603432927156317
-0.915969950699962 -0.0855260979175189
-0.922245270819554 -0.110708124490118
-0.928282299114063 -0.135889341650404
-0.934081701317672 -0.16107091228837
-0.939644116813062 -0.186249408584658
-0.944970157516106 -0.211427265643977
-0.950060411905076 -0.236604240384716
-0.954915440616396 -0.261778830261548
-0.959535778666976 -0.286951069599358
-0.963921933115693 -0.312118583782912
-0.968074389190893 -0.337278371439624
-0.971993604819416 -0.362429325655848
-0.975680011477966 -0.387579772735869
-0.979134016517506 -0.412709148620691
-0.98235599958313 -0.437822485379143
-0.985346315115293 -0.462914178198842
-0.988105294202672 -0.487982252388213
-0.990633241715942 -0.513016332408821
-0.992930435411444 -0.538016083391043
-0.994997129620902 -0.56299263561145
-0.996833549056446 -0.587930201340259
-0.998439898288673 -0.612855397256173
-0.999816353781903 -0.637746622938007
-1.00096306638191 -0.662615951366304
-1.00188016478623 -0.687461626211942
-1.0042579303067 -0.712222446099052
-1.00756649809486 -0.736972902612123
-1.01134687661333 -0.761504740885429
-1.01534392281159 -0.785916460321863
-1.01938940354998 -0.810316584323641
-1.02339050890382 -0.834779870074335
-1.02728059583525 -0.859331357531751
-1.03095752278563 -0.883667495400329
-1.03440099601463 -0.907822171399803
-1.03760188986269 -0.931798870245754
-1.04057752289358 -0.955571259429981
-1.04334094490811 -0.979078776086095
-1.04588630736543 -1.00190493674988
-1.04823833712823 -1.02379882439484
-1.05038922680569 -1.04440860247088
-1.05231413597737 -1.06318404680075
-1.05393338075576 -1.07894936550539
-1.05513265309919 -1.09029244732549
-1.05611376944686 -1.0991137037092
-1.05704183914737 -1.10689104061031
-1.05807304310726 -1.11492746723179
-1.05938091434518 -1.12456940011763
-1.06118555015012 -1.13743224120781
-1.06379063951706 -1.15567102064064
-1.06763458402818 -1.18234144875583
-1.07336425648882 -1.22191186842281
-1.08194387507825 -1.28101187643656
-1.08972157155072 -1.32880784695105
-1.0969130787075 -1.3690698120671
-1.10373638885918 -1.40403756837398
-1.11038116439475 -1.43543241841466
-1.1170238402191 -1.46447756140967
-1.1237732356115 -1.49195227625832
-1.13062157991731 -1.51813824228879
-1.13757775964689 -1.54346971196476
-1.14466880628995 -1.5682712782026
-1.15189399689055 -1.59277875382264
-1.1592415444897 -1.61716192836708
-1.16663616985069 -1.64130177405545
-1.1740777823268 -1.66528973863428
-1.18157420309649 -1.68922532501067
-1.18914247096386 -1.71319925974125
-1.19671081574475 -1.73697369997787
-1.20432520142923 -1.76061237422927
-1.21200570096015 -1.78418976496825
-1.21976669872005 -1.80778683654254
-1.22755203978419 -1.83116960480164
-1.23539375049976 -1.85441992770604
-1.24328289527684 -1.87758072168785
-1.25123533447953 -1.90071340262969
-1.25926076181237 -1.9239249267394
-1.26728025221501 -1.94703396458291
-1.27533160101104 -1.97011001138488
-1.28342433085935 -1.99323378369057
-1.29162538146661 -2.01644374806139
-1.29986856585976 -2.03952481298241
-1.30819568823436 -2.06252503826162
-1.3166458607169 -2.08549568719131
-1.32524527140801 -2.10849981968763
-1.33401158600407 -2.13158935503898
-1.3428728379062 -2.15456252684811
-1.35185712016392 -2.17753306393669
-1.36101323253819 -2.20054725916293
-1.37036843767358 -2.22367546693254
-1.37984896444469 -2.24672239926648
-1.38948621652303 -2.26971364742174
-1.3993043395842 -2.29268222189071
-1.40930746253649 -2.31567408386527
-1.41937949030568 -2.33850460205331
-1.42951683117436 -2.36119660938889
-1.43972687941552 -2.38379997957808
-1.45004608775018 -2.40636648816456
-1.46049563868652 -2.42897523294228
-1.47100093050007 -2.45146114377632
-1.48159764668988 -2.47392331352866
-1.49230321935988 -2.49642336125039
-1.50315340226947 -2.51902925014436
-1.5140569253237 -2.54150843217788
-1.52504921840162 -2.56387512573173
-1.53616382987269 -2.58617495280625
-1.54745489561257 -2.60844233042739
-1.5589726085658 -2.63070624028378
-1.5706034956779 -2.65277733918369
-1.58237085041095 -2.67470197420494
-1.59425366204396 -2.69651870222422
-1.60623353369246 -2.71828150117725
-1.61815704251116 -2.73975057013179
-1.63016399468676 -2.76118963670493
-1.64225688419779 -2.78260615641471
-1.65445266501366 -2.80402055480781
-1.66663424721587 -2.8251849110566
-1.67881669648075 -2.84614440205095
-1.69103204047778 -2.86694338322087
-1.70331783634057 -2.88761606815469
-1.7157066878935 -2.90820901170594
-1.72808036047964 -2.92853765943669
-1.74046942642416 -2.94864117105972
-1.75294068274498 -2.96862148942444
-1.7655411234559 -2.9885599162746
-1.77817130328903 -3.00831299216497
-1.79088814359485 -3.02795744085866
-1.80372951276242 -3.0475517901894
-1.81660287868303 -3.06692843438559
-1.82955563625649 -3.08613453391497
-1.84264853391205 -3.10520542431644
-1.85594909160158 -3.12417054173493
-1.86950932257299 -3.14305834972408
-1.88320630899751 -3.16168336191109
-1.89707468610948 -3.18008405711786
-1.91127884424429 -3.19847908082164
-1.92582104406052 -3.21681813409382
-1.94065953451893 -3.23502478789688
-1.95593696555256 -3.25325168128472
-1.97165122312504 -3.27147059087264
-1.98758780876717 -3.28945096562755
-2.00375463717715 -3.3072070313906
-2.02019835684437 -3.32469868490316
-2.03692813736722 -3.34194202383124
-2.05396764783006 -3.3589617850702
-2.07115723421674 -3.37559450956476
-2.0884970394442 -3.39187988530975
-2.1060293065456 -3.40784965535018
-2.12382866148536 -3.42356643141194
-2.14172878538157 -3.43888294291881
-2.15976530446631 -3.45385285790358
-2.17797998490993 -3.46852202388125
-2.19641093846687 -3.48295290012664
-2.2150822159614 -3.49723501688914
-2.23380605498974 -3.51133248842745
-2.25260534628239 -3.52534598723816
-2.27146522656732 -3.53937886931673
-2.29038102390356 -3.55350423639804
-2.30912203883845 -3.56760968992585
-2.32764677975529 -3.58174699578679
-2.34592781392792 -3.59595649978036
-2.36396396884308 -3.61032414748252
-2.38176330928896 -3.62492887963241
-2.39910906164147 -3.63968836064294
-2.41599401501113 -3.65471367860302
-2.43245113448489 -3.67005996488797
-2.44850865679543 -3.68578586906464
-2.46424529536446 -3.70194665063352
-2.47951758983201 -3.7184191802785
-2.49437301399538 -3.73524980699819
-2.50882252821185 -3.75249462308543
-2.52290968698772 -3.77021211889194
-2.53649934299774 -3.7882217318231
-2.54957443643742 -3.80659283575298
-2.56211540607578 -3.8253548705764
-2.5740800660312 -3.84448552398753
-2.58548061383845 -3.86397399880939
-2.5961838545183 -3.88355119611903
-2.60620833298433 -3.90313038162317
-2.61555926828488 -3.92260840052776
-2.62421262447865 -3.94181490434263
-2.63202697140541 -3.96026606027547
-2.63895512825489 -3.97763622522483
-2.64495214496883 -3.99370339433999
-2.64974909871228 -4.00772576799184
-2.65415324465447 -4.02205264427278
-2.65890573409784 -4.03908433814403
-2.6648063377106 -4.06167432183644
-2.67284803356293 -4.09360730639287
-2.67910344212733 -4.12247866837578
-2.68401981939181 -4.14913907112735
-2.6879031800642 -4.17442049926561
-2.69093614372563 -4.19885690817733
-2.69325913018478 -4.22287417729868
-2.69494352699647 -4.24647621995305
-2.69610461995697 -4.26986537586848
-2.69687053863247 -4.29309107466478
-2.69728608074754 -4.31625910660322
-2.69743799626592 -4.33936510027068
-2.69730139740306 -4.36222679545138
-2.69681966066172 -4.38495599641936
-2.6959789859443 -4.40787680755371
-2.69485653197599 -4.43099368586429
-2.69353248171003 -4.45395869703112
-2.69210355190554 -4.47673346061524
-2.69059835455791 -4.49936496630177
-2.68908311784087 -4.52187620449221
-2.68760556565201 -4.54408576636972
-2.68618612568099 -4.56607994728631
-2.68488864464968 -4.58790895547642
-2.68373022050574 -4.60961207854908
-2.68273453407116 -4.63125775899622
-2.6819190209091 -4.65262007669978
-2.68131461334211 -4.67376062274076
-2.68090368786437 -4.69473734046257
-2.68068436274347 -4.71554721693273
-2.68065336205196 -4.73615929765304
-2.68082228835339 -4.75632523298182
-2.68123638537694 -4.77604017718471
-2.68194852128655 -4.79527397108334
-2.68297763581513 -4.81408334416806
-2.68434555149194 -4.83265491272375
-2.68604702730297 -4.85111869738869
-2.68804886019401 -4.86945266983028
-2.69028431146894 -4.88746230495134
-2.69270913940977 -4.9051499808303
-2.69529414881479 -4.92256463369809
-2.69807074010785 -4.93969270990126
-2.70091755918154 -4.95664014239374
-2.70374188849797 -4.97363333510227
-2.70649767984307 -4.99062162466228
-2.70909371688713 -5.00725202917749
-2.71159688532256 -5.02331260388378
-2.71414253541242 -5.03883092530107
-2.71685634528335 -5.05369347625828
-2.71990911357723 -5.06752612013816
-2.72349503833653 -5.08020091834939
-2.72781396270934 -5.09152993935485
-2.73297044917419 -5.10136778149615
-2.73894092774609 -5.10955370590556
-2.7456033676266 -5.11590145410634
-2.75266629500713 -5.12016804982947
-2.75983835293744 -5.12226433128014
-2.76677928806397 -5.12218976401513
-2.77300904042772 -5.12003235106125
-2.77816331852047 -5.11598338335803
-2.7819491021228 -5.11026898195456
-2.78411216042085 -5.10318798433108
-2.78446152242725 -5.09504476556029
-2.78285417342666 -5.08608298767229
-2.77925506624314 -5.07671066304596
-2.7737312199418 -5.06711627595637
-2.76642937593045 -5.05743435320234
-2.75755022729428 -5.04788525996294
-2.74716357375543 -5.03851061423977
-2.73543868717467 -5.02936451564199
-2.72280734441823 -5.02058959650917
-2.70944787614896 -5.01220643642188
-2.6953992615625 -5.00425099899406
-2.68085358274936 -4.99681416036645
-2.66585200831005 -4.98983918048603
-2.6504094368601 -4.98327970827687
-2.63466836595135 -4.97717575072343
-2.61838470690925 -4.97132539177436
-2.60147532742603 -4.96573669477017
-2.5841554125074 -4.96052856517673
-2.56621789716882 -4.95566591777002
-2.54772176137671 -4.95117155269379
-2.52891787975123 -4.94707972380617
-2.50978670950262 -4.94342512605029
-2.49030408394464 -4.94018170618391
-2.47068940354539 -4.93745930260093
-2.45093803651195 -4.93532104432038
-2.4310223537998 -4.93382743062618
-2.4111561482712 -4.93299622317341
-2.39129245947729 -4.93275515457172
-2.37138475509199 -4.9330303392335
-2.35161453399623 -4.93371713475932
-2.33195099158808 -4.93474635811119
-2.31239809395703 -4.93603920036632
-2.29297842373956 -4.93749402189891
-2.27393879263089 -4.93901956801442
-2.25533771048977 -4.94055318046376
-2.23732429936835 -4.94205030952017
-2.22041565887599 -4.94341003576446
-2.20502891006849 -4.94459217687008
-2.1885747007899 -4.9457947853417
-2.16828403897168 -4.94721934729832
-2.14074231521937 -4.94910452616531
-2.11495362695442 -4.95069418916197
-2.09037888047974 -4.95203692255616
-2.06649827207575 -4.95313993950119
-2.04307698648296 -4.95400379274044
-2.01990705284537 -4.9547229619112
-1.99704142606921 -4.95535093782234
-1.97431519209689 -4.95592593408041
-1.95162052933774 -4.95647728397721
-1.92910719242559 -4.95704711298401
-1.90666955152308 -4.95769739004706
-1.88421471770931 -4.95847006166743
-1.86191598337558 -4.95941083437884
-1.83968701873305 -4.96050982740561
-1.817453234914 -4.9618004169962
-1.79544194036959 -4.96334808239257
-1.77358154318187 -4.96512737406968
-1.75184265695158 -4.96715164535745
-1.7301827118917 -4.9694585512329
-1.70857242151936 -4.97204934061616
-1.68700767527925 -4.97495608587536
-1.66549202278861 -4.97819678941133
-1.64398682867628 -4.98179515590091
-1.6224720308312 -4.98576781334758
-1.60090913267245 -4.9901271006659
-1.57952062690005 -4.99484963510896
-1.55827245334981 -4.99995613933012
-1.53715416640146 -5.00548142061936
-1.51612673105589 -5.01142994862164
-1.49541865909256 -5.01772635924353
-1.47497585707476 -5.02435356436268
-1.45477205735852 -5.03129965421603
-1.43453531505924 -5.03863964239992
-1.41449266618583 -5.04626329381633
-1.39463472694236 -5.05417489276572
-1.37494956935235 -5.06235996731558
-1.35542069840539 -5.07081636851399
-1.33597383637175 -5.07957065182195
-1.31676713678222 -5.08849828133822
-1.29773008464678 -5.09760426806265
-1.27882071515614 -5.10689000150066
-1.25996809884929 -5.11638698545157
-1.24131266688043 -5.12599462115652
-1.22280940531714 -5.13568109303244
-1.2044052743029 -5.14542779950377
-1.1860302807372 -5.15517598248743
-1.16760251819162 -5.16488387610428
-1.14924982482427 -5.17440246771929
-1.13087723111926 -5.18375202705196
-1.11236959656914 -5.19290774989313
-1.09363951248475 -5.20184586421576
-1.07461258460534 -5.21053963376345
-1.05545035989728 -5.21885438356459
-1.03612344794926 -5.22675944168849
-1.01657474075836 -5.23417224054055
-0.996760255001284 -5.24106174089442
-0.976863727054755 -5.24734273074162
-0.956838140862732 -5.25304579160951
-0.93662989857714 -5.25818842571943
-0.916216886143402 -5.26276124627313
-0.895574484531463 -5.2667431447764
-0.874918642768918 -5.27013125788856
-0.854229282392781 -5.272923737538
-0.833497503773616 -5.27511944394224
-0.812752677916323 -5.27671775089636
-0.792270878075234 -5.27773519559535
-0.772046175504768 -5.27825848913164
-0.752163006145647 -5.2783083052178
-0.732793544501966 -5.27792648591706
-0.714166144846256 -5.27713281651623
-0.696849303752209 -5.27604556714994
-0.681386529855972 -5.27480082461089
-0.668631660178372 -5.27359203828451
-0.656438246059092 -5.27221669087169
-0.642754323265793 -5.27044435440728
-0.625277093979403 -5.26797809475215
-0.601065455068424 -5.26440471829166
-0.566044880932516 -5.2591255418439
-0.535527072875444 -5.25437153560629
-0.508266012502453 -5.25011670174761
-0.483265005846988 -5.24633492732772
-0.45970903482553 -5.24294535187224
-0.437074200979902 -5.23999982787405
-0.41532313560828 -5.23760785530457
-0.394262840044329 -5.23585442283795
-0.373815252289432 -5.2347640075254
-0.353784478036474 -5.23430466418329
-0.334047243251844 -5.23446644037899
-0.314749707491051 -5.23515918677623
-0.295853062417915 -5.23629797291523
-0.277143556487355 -5.23785584375099
-0.25871232807711 -5.23985930560246
-0.240942143245526 -5.24222527492397
-0.224034995652413 -5.24486441128449
-0.208336113934165 -5.24768287622297
-0.191242106921652 -5.25115287751949
-0.169918244829076 -5.25585577364597
-0.150411055703305 -5.26068677281199
-0.132374185043234 -5.26558427474919
-0.115550900578612 -5.27046432076974
-0.0997198973365719 -5.27527352624363
-0.0849302205671267 -5.27987963962564
-0.071143923117605 -5.28425038001579
-0.058413175614615 -5.28834753775433
-0.0468429852794063 -5.29210380771293
-0.0366848831129506 -5.29541153203118
-0.0280118454806108 -5.29825539290005
-0.020852772367146 -5.30060935626907
-0.0151667666752636 -5.30246581610594
-0.0108362209868812 -5.30386779961758
-0.00767790733738836 -5.30488270369543
-0.00544316937648596 -5.30559681640212
-0.00388369120144486 -5.30606277566125
-0.00275613179235774 -5.3063586538784
-0.00187350521082713 -5.3065340189782
-0.00108944306992319 -5.30661825331619
-0.000258651449556448 -5.30665896746703
0.000756641398120269 -5.30666298520709
0.00212480374273824 -5.30666447391261
0.00402687352936284 -5.30666368680074
0.00654903165138131 -5.30655999423309
0.00958990384077376 -5.30618527274311
0.0128895844466619 -5.30537624730404
0.0161089157942561 -5.30409787314457
0.0190022467901354 -5.30247096449146
0.0213005480367265 -5.30082593900471
0.0228347623350289 -5.2997246703462
0.0238593118950966 -5.29898265790112
0.0245441010115895 -5.29847558330686
0.0250026902698693 -5.29811849196077
0.0253111263009711 -5.29785155638093
0.0255205601851335 -5.29763005379793
0.0256657215154257 -5.29741687510676
0.0257706840269547 -5.297176306508
0.025852852286238 -5.29686803774095
0.0259258531331133 -5.29644042581349
0.0260017932401035 -5.29582182766575
0.0260932642215054 -5.29490860418272
0.0262154358751269 -5.29354775508048
0.0263885669671974 -5.29151128594012
0.0266413707771259 -5.28845800552879
0.0270157694599729 -5.28387637277719
0.0275738493392716 -5.27699878241429
0.0284081597512455 -5.26667296799581
0.0296570549895084 -5.25116895018602
0.0315276417673958 -5.22788919871109
0.0343301246971675 -5.19293344465258
0.0385292478760253 -5.14044522662536
0.0439196353317428 -5.1014614233587
0.0503405746945396 -5.07169517576031
0.057511904115775 -5.04837042383759
0.0652321372808374 -5.02943857325109
0.0733753887635721 -5.01345300960176
0.0817943922362596 -4.9993099969673
0.0902652351283456 -4.98644898743544
0.098786510825922 -4.97442372607711
0.10738059594926 -4.9629279737681
0.116157883950992 -4.95187820485418
0.125152862484791 -4.94121536852219
0.134237134390578 -4.93089498092066
0.143373388606799 -4.92081269471939
0.152532930969508 -4.91090404892595
0.161828563318834 -4.90096617911295
0.171181745965067 -4.89107606663484
0.18061634528816 -4.88118469720806
0.19013821935825 -4.87132659656164
0.199751120370456 -4.86149161324983
0.209352844375961 -4.8517404659519
0.218947118364054 -4.84206417682481
0.228519854213813 -4.83244955561421
0.238083235001456 -4.82287699032579
0.247681062071798 -4.81336742444006
0.257223554781548 -4.80400259854123
0.266779237938071 -4.79473775783181
0.276403899200368 -4.7855113943812
0.286120002757853 -4.77630195199086
0.295796029417991 -4.76719120887452
0.305416524393729 -4.7581434752457
0.314973416654617 -4.74910012537273
0.324492855385914 -4.73995301479365
0.333963293571148 -4.73062686463236
0.3432589445534 -4.7212006473028
0.352407407702815 -4.71165281626697
0.361443453416769 -4.70197495836912
0.370399892839393 -4.69215362323258
0.37923328668993 -4.68235231416771
0.388017613304473 -4.67263739434783
0.396803381945111 -4.66303944721708
0.405626387733819 -4.65355841214507
0.414378312399074 -4.64428081095552
0.423133823458019 -4.63509275190442
0.431878097952656 -4.62601259620323
0.440591877855962 -4.61700977938588
0.449254094840247 -4.60810019910924
0.457823239575316 -4.59931536968499
0.466219936071463 -4.59074120663266
0.474326319397975 -4.5824989221777
0.481903030249745 -4.57481557813018
0.488734502950191 -4.56792813178763
0.494518576650224 -4.56213988945783
0.4987744445102 -4.55792156166477
0.50221608639741 -4.5545664144589
0.505412189591565 -4.55151232763828
0.508891366265964 -4.54824762635162
0.513229309311922 -4.54422534374639
0.519143445377492 -4.53877158919704
0.527611972450741 -4.53097264113338
0.539972888725045 -4.51952187428216
0.55827958146632 -4.5025008369082
0.573123212444145 -4.48859835825087
0.585657607875816 -4.47675820097003
0.596570162445516 -4.46642039994416
0.606321815210084 -4.45712594151422
0.615199472712793 -4.44862411391946
0.623313715956433 -4.44079699890431
0.630585382925228 -4.43374022171428
0.63687719461339 -4.42757797037347
0.643227017495459 -4.42127782862595
0.650682280182831 -4.41378427189881
0.660472761120623 -4.40384183809655
0.668849664766577 -4.39547966417814
0.675902207554639 -4.38850273552184
0.681488003332516 -4.38301512049248
0.686528447573977 -4.37809742686458
0.691854913797083 -4.37292574515116
0.698346020652914 -4.36663361880392
0.707072492585769 -4.35816687078932
0.712908871035716 -4.35252210924671
0.716817882059011 -4.34875361699856
0.719444327934163 -4.34623002359578
0.721221447752108 -4.34452852982514
0.722442381235173 -4.34336406832151
0.723308523366016 -4.34254154364528
0.723962744311948 -4.34192315303393
0.724512964445548 -4.34140528915759
0.725049940828016 -4.34090118921727
0.725662251975459 -4.34032639778328
0.726450898212976 -4.339584617332
0.727545967903821 -4.33855156750766
0.729128095938234 -4.33705417469238
0.731458260150988 -4.33484156391156
0.734920825662523 -4.33154303801547
0.740086951825437 -4.32660596410469
0.747808805681251 -4.31920319116015
0.759360127759346 -4.30809446153062
0.776646338896952 -4.29141862523716
0.802518846622143 -4.26638182941218
0.841245387267226 -4.22878942777095
0.869460876950517 -4.19952793229669
0.890785831756903 -4.17609009888253
0.907522808765771 -4.15665962879861
0.921125237535141 -4.14015861497439
0.932621823139189 -4.12568170011938
0.942723582214234 -4.11252865304102
0.951896609755788 -4.10038853336272
0.960438968332538 -4.0889358598715
0.968574366079587 -4.07787658013837
0.976383543447572 -4.06709979666804
0.984015430444392 -4.05644144302956
0.991512224946774 -4.04575729536853
0.998834488561291 -4.03506630176161
1.00606567316311 -4.02438626881731
1.01320825876021 -4.01360283909793
1.02028637955472 -4.00265132981579
1.02726408002319 -3.99158964605976
1.03412180953016 -3.98037348552932
1.04086968858154 -3.96898291574071
1.04754917896086 -3.95745253205067
1.05420694364875 -3.94577675689567
1.06105093143967 -3.93404289370471
1.0681714270249 -3.92249602168156
1.07558892354707 -3.9111277969371
1.08327398575189 -3.89994305808861
1.09115887872987 -3.88882664131686
1.09914288702438 -3.87772507312042
1.10707569877489 -3.86673811458125
1.11506234200862 -3.85576698885439
1.12321672028907 -3.84474906545911
1.1315814886023 -3.83373111824976
1.14001968186157 -3.82275992763325
1.14853834077119 -3.81177285149482
1.15720723561722 -3.80080508691096
1.1659890278272 -3.7900123133899
1.17494743454987 -3.77941203589537
1.18414233214049 -3.76900375358604
1.19360666054023 -3.75896988420704
1.20340131661249 -3.74933782575452
1.21359222775128 -3.74015204801156
1.22412933193045 -3.73154854407185
1.23503615987718 -3.72350960882451
1.24633128419265 -3.71609537371367
1.25788300099802 -3.70940364348398
1.269684366559 -3.70340201378203
1.28170367184385 -3.69800621455516
1.29391218260896 -3.6931329681265
1.30609805351774 -3.68873629876105
1.31817759326477 -3.68469932891704
1.33001659975052 -3.68096544612526
1.3413587612061 -3.67754556851981
1.35199770255296 -3.67441947054553
1.36164402221012 -3.67164964494752
1.36966323442709 -3.66939177704833
1.37535029314876 -3.66780357420411
1.37964329258437 -3.66661894890724
1.38325037577904 -3.66563943200941
1.38676654048647 -3.66470091340571
1.39077178694201 -3.66364615691023
1.39592679227739 -3.66229844930086
1.40308188635924 -3.66043199624972
1.41341732165098 -3.65773409223791
1.42863795643485 -3.6537527354799
1.45125447415696 -3.64782088984967
1.48499752979943 -3.63894473952028
1.51205531568096 -3.63198530759278
1.53459946642659 -3.62639634950187
1.55423843027326 -3.62179423875273
1.57200252016573 -3.61809467766236
1.58839845974762 -3.61529758246429
1.60392161437511 -3.61343681865459
1.61890689753456 -3.61241837933614
1.63346860483047 -3.61220563751355
1.64778305666084 -3.61262895137757
1.66195281163541 -3.61357499480204
1.67589169816363 -3.61495102490594
1.68977221894419 -3.61668608402649
1.70360886889757 -3.61875261982452
1.71729350280847 -3.62109486700868
1.73103911426569 -3.62373675042508
1.74488740554422 -3.62661839786296
1.75869917805221 -3.6296866118925
1.77251055909323 -3.63290269682361
1.78635761406385 -3.63626948519483
1.80013391358974 -3.63976480563778
1.8138697339205 -3.64335452466286
1.82762164800302 -3.64702032306439
1.8413169922908 -3.65065613013671
1.85513755611838 -3.65428484946479
1.86912091711669 -3.65791144438081
1.88311718434094 -3.66152377435256
1.89712697217256 -3.66514080615455
1.91118610062258 -3.66878229780264
1.92517366295352 -3.67243860250917
1.93912181053742 -3.67608580945122
1.95307268254706 -3.67971522997974
1.96693697330269 -3.68324819761755
1.98079244729731 -3.68660663638895
1.99468244142821 -3.68971672828298
2.00865598935221 -3.69258029759999
2.0225780648732 -3.69514111449444
2.03648650566767 -3.69739272397151
2.05038392709619 -3.69931036829419
2.06430409903768 -3.70088033333569
2.07825157226539 -3.70208090655306
2.09225188610978 -3.70286198437908
2.10617443251828 -3.70313668383199
2.12004063802996 -3.7028337802261
2.13387912560616 -3.70190252368002
2.14754907823901 -3.70030414094267
2.16109620383557 -3.69802208558733
2.17451297743066 -3.69499226656508
2.18765496666968 -3.69115931185703
2.20044781714523 -3.68641704159871
-4.74549974067882 -10.7983858157505
-4.73547120541307 -10.8057483877326
-4.72555253207137 -10.813138811263
-4.71582357287713 -10.8204553001188
-4.70621111537231 -10.827751182052
-4.69664473860663 -10.8349088242581
-4.68706177099633 -10.8419549342082
-4.67743038228093 -10.8488975344367
-4.66784514186416 -10.8557273139827
-4.65832429399714 -10.8625835402284
-4.6489305509812 -10.8694424260603
-4.63978164700662 -10.8762806271883
-4.63107016652299 -10.8829038367779
-4.62286048666566 -10.8892492247992
-4.61523415453949 -10.895374913117
-4.60821954612128 -10.9011347198514
-4.60177979009848 -10.9066561539882
-4.59589067620861 -10.9118592877339
-4.59041884875035 -10.9167783722456
-4.58520026345505 -10.9215675580909
-4.58001259562329 -10.9265267326917
-4.57477325614506 -10.9316492663697
-4.56937415363494 -10.9369558996105
-4.56374757079215 -10.9423307261057
-4.5578715991033 -10.9478367559914
-4.55183371779192 -10.9533914798179
-4.54564371270881 -10.9589205593355
-4.53928576202745 -10.9645128409033
-4.53281668860763 -10.9701002781619
-4.52627461737413 -10.9757814943379
-4.51971902131456 -10.9815033433888
-4.51303892370701 -10.9872194778398
-4.50613602667829 -10.9930500883439
-4.4989754514971 -10.9991345462291
-4.49151258522239 -11.0054872547655
-4.4839378410339 -11.0120000764404
-4.47617055057515 -11.0187591662558
-4.4682153538263 -11.0258919726034
-4.4601293557304 -11.0334210334506
-4.4520990216124 -11.0412677951914
-4.44418563356534 -11.0494069240148
-4.43633560949111 -11.057862062385
-4.4285396759206 -11.0665423176537
-4.42079765522004 -11.0753944950638
-4.41318571465929 -11.0843942146518
-4.4057010093623 -11.0933743159664
-4.39834528695427 -11.1023318443027
-4.39122608440185 -11.111260069933
-4.38449074882282 -11.11997985071
-4.37813258925723 -11.1284446196236
-4.37229248830181 -11.1365650933654
-4.36701350659818 -11.1441942997632
-4.36238268579189 -11.151270451464
-4.35852886581792 -11.1574716127013
-4.35552666628581 -11.1624967473325
-4.35352673864273 -11.1658477870361
-4.3521925258651 -11.168086163157
-4.3512995022645 -11.1695868957524
-4.35069738949857 -11.1706014174871
-4.3502848567656 -11.171299694033
-4.3499924768914 -11.1717987153337
-4.34977105427425 -11.1721820902217
-4.34958332390783 -11.1725140471054
-4.34939769492885 -11.1728502077767
-4.34918292664086 -11.17324687901
-4.34890287640531 -11.1737705315514
-4.3485104164805 -11.1745088921867
-4.34793950338193 -11.175585668152
-4.3470940598407 -11.1771812574689
-4.34583181396703 -11.179562984289
-4.3439403430527 -11.1831298886042
-4.34110134297039 -11.1884795523837
-4.33683705583366 -11.1965082635819
-4.33042987325843 -11.2085611545536
-4.32080155520824 -11.2266575420058
-4.30633180633636 -11.2538292871577
-4.28458558914842 -11.2946287174854
-4.26776220490507 -11.3249044540845
-4.25405639603207 -11.3480539061675
-4.24284342685501 -11.3666155344662
-4.2333462819826 -11.3821916780729
-4.22497578184272 -11.3957169723301
-4.21738280381722 -11.4077824770861
-4.21041633201711 -11.4187346661569
-4.20392987800208 -11.4287335008357
-4.19792500672133 -11.4379466972688
-4.19248432694504 -11.4462428643453
-4.18780219795212 -11.4533369714058
-4.1830906988847 -11.4604175564307
-4.17960868114982 -11.4656559594899
-4.17677017636034 -11.4699298181211
-4.17409750486773 -11.4739551669807
-4.17114089729308 -11.4784064057892
-4.16740280400621 -11.4840293001174
-4.16225415849564 -11.491765899553
-4.15482852774959 -11.5029123926062
-4.1438762828217 -11.51933624886
-4.13606396740765 -11.5310593639681
-4.13007688185844 -11.5400458196775
-4.12490749225407 -11.5478011971791
-4.11968587029635 -11.555624823164
-4.1135332956567 -11.5648274672282
-4.10541439038571 -11.5769509320816
-4.09396285591448 -11.5940263717139
-4.08448308663552 -11.6083622997679
-4.07627110506012 -11.6205180900418
-4.06892036743698 -11.6310228365667
-4.06225336236028 -11.6401290226388
-4.05627490017087 -11.6480223200531
-4.05098794343074 -11.6548526915644
-4.04647819653669 -11.6605920205413
-4.0427603348474 -11.6653643768458
-4.03979731147818 -11.6691318464563
-4.0375109310967 -11.6720231241527
-4.03578550814738 -11.6741876315976
-4.03449885461917 -11.6758205001379
-4.03353535553495 -11.6770278020439
-4.03280013678149 -11.6780118183087
-4.03220310322321 -11.6787699066184
-4.0317110599952 -11.6794290798635
-4.03130846692436 -11.6799322738465
-4.03096121361154 -11.6803637977067
-4.03067812942817 -11.6807959492115
-4.03044521228427 -11.6811336297368
-4.0302568995872 -11.6814334163824
-4.03009831967288 -11.6817455351849
-4.02997642107183 -11.6819547793464
-4.02987068662818 -11.6820962175176
-4.02978014263038 -11.6821935260694
-4.02970637290995 -11.6822630215745
-4.02965377472196 -11.6823163427498
-4.02961350391697 -11.6823624208013
-4.02957877792985 -11.6824089842955
-4.02954375168056 -11.6824638353979
-4.02950253401424 -11.6825361506693
-4.02944818417147 -11.6826380519443
-4.02938838222827 -11.6827866132554
-4.02932987490753 -11.6830067195026
-4.0292796343778 -11.6831677635086
-4.02924602235717 -11.6832967126649
-4.0292065687464 -11.6834151738688
-4.02917145328412 -11.6835429946735
-4.02915157829556 -11.6837015966682
-4.02914359727438 -11.6839175434487
-4.02912935565616 -11.6840595214457
-4.02910645564711 -11.6841513171337
-4.02905422783678 -11.6842083115059
-4.02896387831115 -11.6842400533069
-4.02880339093203 -11.6844193505886
-4.02851211566932 -11.6847762495933
-4.02800740496202 -11.6853705446974
-4.02717068965651 -11.6863018011631
-4.02581070832299 -11.6878935382158
-4.02378268913471 -11.6902449347481
-4.02104805718094 -11.6934149506057
-4.01771840663394 -11.6972647051526
-4.01388929664463 -11.7017691897841
-4.00958903334753 -11.7068456058188
-4.00485074192363 -11.7125069657119
-3.99981881837865 -11.7186967882169
-3.994520968348 -11.7252796334386
-3.98900742657362 -11.7321859113846
-3.98332575903041 -11.7394002146103
-3.97752887384499 -11.7467912501983
-3.97173437761392 -11.7542573354304
-3.96595937782841 -11.7617093552599
-3.96020743837871 -11.769223341157
-3.95446919313421 -11.7767182036125
-3.94870394291849 -11.784277152367
-3.94295052932606 -11.7918266284541
-3.93719934166752 -11.7994589968656
-3.93145794856681 -11.8071129952027
-3.92566830582814 -11.8148985019682
-3.91991560686949 -11.8227799121137
-3.91420716604592 -11.8307377029028
-3.90852411911445 -11.8387651399074
-3.90286869675125 -11.8468671641859
-3.89719731259025 -11.855061203265
-3.89151415713401 -11.8632126148394
-3.88578780249212 -11.8713471068094
-3.87999636719077 -11.8794875439955
-3.87414065709419 -11.8876578010401
-3.86832837235574 -11.8957192491863
-3.86259044636066 -11.9036825169185
-3.85698713866799 -11.9115417971526
-3.85160136246241 -11.9192738508785
-3.84653581994554 -11.9268341233461
-3.84189665246803 -11.9341492919855
-3.83776084823943 -11.9412724500073
-3.83415556182807 -11.9480570295325
-3.83111314868524 -11.9542997488761
-3.82876067767395 -11.9595390316438
-3.82670226364941 -11.9646526665359
-3.82459150540495 -11.9704973857621
-3.82207319761782 -11.9780524083031
-3.81872354658772 -11.988583500353
-3.81397885675146 -12.0038550271847
-3.81005099095139 -12.0175482549761
-3.80656484975278 -12.030449856288
-3.80325329894271 -12.0428788888648
-3.7999122187763 -12.0550752477783
-3.79641661499695 -12.0672398369934
-3.79286774410494 -12.0794007341711
-3.78930744831009 -12.0915854162386
-3.7857588281521 -12.1036578316191
-3.78223013556867 -12.1156306245215
-3.77866570497039 -12.1276672414231
-3.77503748924128 -12.1397743305099
-3.77154215129167 -12.1518028399827
-3.76811282249297 -12.1637580671834
-3.76486191029129 -12.1754655185157
-3.76178050151629 -12.1870441852838
-3.75887137810374 -12.1984239949255
-3.75613268720534 -12.2095015451518
-3.75354080139461 -12.2202903053292
-3.75106316349339 -12.2307553393779
-3.74870326753801 -12.2409749907184
-3.74643395505094 -12.2509864886309
-3.7443105887597 -12.2606246811112
-3.74232901096392 -12.269829378211
-3.74047528341041 -12.2784677675808
-3.73874016308466 -12.286647136382
-3.73710073496536 -12.2942303796921
-3.73553337235702 -12.3013155174999
-3.73406020641446 -12.3077496100328
-3.73266877123011 -12.3137731285125
-3.7313099033308 -12.3195577854837
-3.7299230989379 -12.3252352340806
-3.7284431549111 -12.3310866935163
-3.72682282682826 -12.3374225240911
-3.72499125104266 -12.3442992438641
-3.72292609423553 -12.3516965033665
-3.72066662268315 -12.3595136541781
-3.71820258947832 -12.3677203991007
-3.7155061351279 -12.3763517132944
-3.71259437896778 -12.3851787237816
-3.70939775047718 -12.3943403236176
-3.70583237803857 -12.4038639691995
-3.70183642136614 -12.4136702759756
-3.69746057439751 -12.4235597302362
-3.6927084163758 -12.4335142213426
-3.68770519116394 -12.4433590606824
-3.68261797684329 -12.4529011771966
-3.67753245543962 -12.462064291377
-3.67251776668634 -12.4708761102506
-3.66758770445958 -12.4793054855458
-3.66277030688537 -12.4874246992049
-3.65799484734721 -12.4952540657279
-3.65331537839144 -12.5029328250396
-3.64870120232531 -12.5107424961671
-3.64404852049396 -12.5186515369564
-3.63929751044296 -12.5268125388054
-3.63445588902979 -12.5350853199165
-3.62946566818849 -12.5435159184479
-3.62429430783006 -12.5523443263783
-3.61891241519143 -12.5615421614016
-3.61338971938887 -12.5711429598496
-3.60778905918922 -12.5810802683375
-3.6021937982996 -12.5913440016554
-3.59657048168791 -12.6019787770127
-3.5909650171732 -12.6129238624555
-3.58540950582448 -12.6241705357094
-3.57987718014301 -12.6355930871493
-3.5743115425093 -12.6470952861664
-3.56870093727952 -12.6585942421956
-3.56302615024456 -12.6700065211054
-3.55730761110112 -12.6812341657557
-3.55159202548083 -12.6923157840697
-3.54590978021899 -12.7032655209428
-3.5403641315082 -12.7139079220175
-3.53498043065985 -12.7241835406878
-3.52976082781902 -12.7341389875061
-3.52471826615429 -12.7435997201697
-3.51992911997285 -12.7521408144189
-3.51541150461058 -12.759350791716
-3.51119559494108 -12.7645951347172
-3.50734551761583 -12.7670775076812
-3.50412150817519 -12.7652038471693
-3.50094375818315 -12.7638184892281
-3.49781192328931 -12.7629213901232
-3.4947256581233 -12.7625125021224
-3.49168461998039 -12.7625918036051
-3.48868847312423 -12.7631593015691
-3.48573689566236 -12.764215027902
-3.48282954915545 -12.7657590138073
-3.47996611844943 -12.7677913410479
-3.47714629307778 -12.7703120802571
-3.47436975805923 -12.7733213501779
-3.47163621152812 -12.7768192726987
-3.46894534318721 -12.7808059938481
-3.46629686325494 -12.7852816868601
-3.4636904774568 -12.7902465457004
-3.46112590028403 -12.7957007751498
-3.45860284994317 -12.8016446160261
-3.45612104874915 -12.808078306313
-3.45368021821673 -12.8150021328897
-3.45128008998391 -12.8224163830681
-3.44892039994764 -12.8303213748158
-3.44660088962097 -12.8387174431159
-3.44432130513169 -12.8476049438971
-3.44208139406329 -12.856984253411
-3.43988091050147 -12.8668557765904
-3.4377196094591 -12.8772199351015
-3.43559725183237 -12.8880771600678
-3.43351360625818 -12.8994279197677
-3.43146844231611 -12.9112726996149
-3.42946153365968 -12.9236119948215
-3.4274926645081 -12.9364463359289
-3.42618262320564 -12.9449936169933
-3.42531095238108 -12.9506858472656
-3.42473096194439 -12.9544767074865
-3.4243450500365 -12.9570013030794
-3.42408827231546 -12.9586826092684
-3.42391741936526 -12.959802309723
-3.42380374039848 -12.9605479973567
-3.42372810004041 -12.9610445975931
-3.42367776803331 -12.9613753228602
-3.42364427891852 -12.9615955756188
-3.4236219988761 -12.961742256793
-3.4236071725307 -12.9618399441713
-3.42359730592138 -12.9619049965392
-3.42359074080872 -12.9619483201129
-3.42358637344099 -12.9619771773403
-3.42358346884805 -12.9619963912087
-3.42358153416152 -12.9620091913253
-3.42358024623431 -12.962017712936
-3.42357938847385 -12.9620233888225
-3.42357881990389 -12.9620271724307
-3.42357844044677 -12.962029690942
-3.42357818323066 -12.9620313733945
-3.42357801159408 -12.962032496808
-3.42357789304451 -12.9620332508106
-3.42357780608415 -12.9620337659968
-3.42357774111922 -12.9620341192305
-3.42357768486475 -12.9620343758056
-3.42357762728818 -12.9620345732103
-3.42357755728879 -12.9620347504698
-3.42357746576399 -12.962034933667
-3.42357733670006 -12.9620351582614
-3.42357715179993 -12.9620354568281
-3.42357687203889 -12.9620358840428
-3.42357645682035 -12.9620365130209
-3.42357583218942 -12.9620374406109
-3.42357489497373 -12.9620388323174
-3.42357348867949 -12.9620409166783
-3.42357137341333 -12.9620440363952
-3.42356819767063 -12.9620487285821
-3.42356342330775 -12.9620557717247
-3.42355624604012 -12.9620663335168
-3.42354546068433 -12.9620821977095
-3.42352925179512 -12.9621060206887
-3.42350489163144 -12.9621417998577
-3.4234682794862 -12.96219552057
-3.42341325546993 -12.9622761804239
-3.42333055976299 -12.9623972981213
-3.42320627697721 -12.9625791666984
-3.42301949065689 -12.9628522566682
-3.42273876804765 -12.9632623134312
-3.42231686982125 -12.9638780467957
-3.4216827936012 -12.9648026071748
-3.42072983636965 -12.966190902975
-3.41929762530924 -12.9682755333073
-3.41714514602862 -12.9714057397038
-3.41391016778509 -12.9761059688896
-3.40904829469964 -12.983163678083
-3.40174134590195 -12.9937613244705
-3.39075967515736 -13.0096744288391
-3.37425523424647 -13.033569052558
-3.34945057768616 -13.0694484839253
-3.31217146553856 -13.1233239344058
-3.28619717075707 -13.1611750079335
-3.26789658385932 -13.1881707673974
-3.25481224334237 -13.207661578296
-3.24541495014695 -13.222075434431
-3.23872871548337 -13.2329897391985
-3.23413286801934 -13.24139558472
-3.23124079979505 -13.2480312944479
-3.22971717961607 -13.2535061151538
-3.22935605457533 -13.2582348031094
-3.23001256942409 -13.2626746086893
-3.23156175788633 -13.2668993907931
-3.23397842912717 -13.2711144704629
-3.23718156227432 -13.2753560525425
-3.24095341801512 -13.2796647837193
-3.24513831726521 -13.2840925607616
-3.24959964274968 -13.2883762235243
-3.25414639042357 -13.2923959781375
-3.25870283338127 -13.29615528908
-3.26319488226165 -13.299614018426
-3.26760487576263 -13.3028491396198
-3.27180044195476 -13.3060676708434
-3.27583174979123 -13.3093063548011
-3.27975496011885 -13.3124378027428
-3.28362486320704 -13.3154841730014
-3.28742000397843 -13.3184533629136
-3.29110635470185 -13.3213403406963
-3.29466519930164 -13.3241262868556
-3.29810683198896 -13.3267754777503
-3.30145545012918 -13.3293967510185
-3.30470277939473 -13.3319267889751
-3.30785714162312 -13.3342869940859
-3.31094483852129 -13.3365377808315
-3.31394732915581 -13.3387212613168
-3.31683172598559 -13.3408682582897
-3.31952845366381 -13.3428359863601
-3.32202044408454 -13.3446191257975
-3.32434025648336 -13.3461814310773
-3.32662601681435 -13.3477846426304
-3.3290605749943 -13.3495298689169
-3.33175091955605 -13.3513745131163
-3.33466208764127 -13.3532926312421
-3.33767855402865 -13.3552705889126
-3.34070251294832 -13.357137279838
-3.34373832961316 -13.358870448782
-3.34689326645325 -13.3605929839853
-3.35026099893967 -13.3624259748797
-3.35400464569605 -13.3643415296577
-3.35803105435911 -13.3663255847372
-3.36221056923482 -13.3683755509735
-3.36642248338279 -13.3704998808401
-3.37041700705457 -13.3728869953199
-3.37395821384356 -13.375099341586
-3.37678433992492 -13.3771725818064
-3.37903463792755 -13.3789515796924
-3.38090279874825 -13.3805668740769
-3.38250140297551 -13.3821216121968
-3.38401537838147 -13.3837087705708
-3.38556496937961 -13.385426764256
-3.38720913424952 -13.387228425697
-3.3889722989684 -13.3890806148834
-3.39076437997578 -13.390958652802
-3.39253378983682 -13.3928421919545
-3.39429239415802 -13.3947118091822
-3.39598297657413 -13.3965457360664
-3.39753686573237 -13.3981487437802
-3.39896329632621 -13.3992869087337
-3.40025005442389 -13.4001509138526
-3.40147914136748 -13.4007180125489
-3.40272285034657 -13.4010832268329
-3.40403912770754 -13.4013077408913
-3.40553175848127 -13.4014291748902
-3.40725012077777 -13.4016353620852
-3.40923112760946 -13.4017933515217
-3.41147180085075 -13.4017621141493
-3.41384468654838 -13.4013689229186
-3.4163286712905 -13.4003804141067
-3.41898860782551 -13.3986309663483
-3.42185168305742 -13.3961624665707
-3.42477746640651 -13.3932313356446
-3.42766970761044 -13.3900164827124
-3.43042651938284 -13.3868167758174
-3.43292321714136 -13.3839336300612
-3.43499134449541 -13.3815539743662
-3.43697894116205 -13.3792791357232
-3.43922048839716 -13.3767279821231
-3.44209319968589 -13.3734731075235
-3.44401196680682 -13.3713141413126
-3.44529968774779 -13.3698893640644
-3.44617306369154 -13.3689600753952
-3.44677907403284 -13.368370586197
-3.44721969809272 -13.3680221333264
-3.44756908977808 -13.3678563275096
-3.44788604441235 -13.3678454032812
-3.44822390241674 -13.367987518327
-3.44863951438852 -13.3683064950678
-3.44920282843857 -13.3688557662464
-3.45000863910975 -13.3697273520968
-3.45119254879161 -13.3710672840405
-3.45295379437694 -13.3731000539495
-3.45558876745446 -13.3761662282428
-3.45954089151791 -13.3807795147814
-3.46547524728225 -13.3877128147443
-3.47439049925136 -13.3981277326227
-3.4877869452893 -13.4137691740294
-3.49880394887731 -13.4230204704458
-3.50820236806611 -13.4281015662193
-3.5166556699062 -13.430366235604
-3.52382058267454 -13.4322038598543
-3.5297928989029 -13.434257307522
-3.53468634630892 -13.4365356125799
-3.53898806774246 -13.4390854924032
-3.54328743436576 -13.4416641630056
-3.54763526755002 -13.4440336729264
-3.55195600398506 -13.4462560085642
-3.55628724375143 -13.4483685028099
-3.56058426385807 -13.4505575798884
-3.56484703490142 -13.4530225126433
-3.56893523000334 -13.4558412739706
-3.57267913721398 -13.4589836293387
-3.57601928762154 -13.4623060586353
-3.57892916127985 -13.4656952037379
-3.58137710524174 -13.4688814093916
-3.58328736931945 -13.4715610000861
-3.58456097951468 -13.4733454236731
-3.58541226722887 -13.4745336536561
-3.58598449401984 -13.4753247528755
-3.58637395803732 -13.47585126848
-3.58664619316888 -13.476201407099
-3.58684701861153 -13.4764338384678
-3.58701022367475 -13.4765875001774
-3.58716327803663 -13.4766881295933
-3.58733194150287 -13.4767525896111
-3.58754459160868 -13.4767916903553
-3.58783701597587 -13.4768119759306
-3.58825842842317 -13.4768168441654
-3.58887974349258 -13.4768071041611
-3.58980551588271 -13.4767811348251
-3.59119154161851 -13.4767345808376
-3.59327106405252 -13.4766596384873
-3.59639404309805 -13.4765437649951
-3.6010860242734 -13.4763675324877
-3.60813659442964 -13.4761014231347
-3.61873225475125 -13.4757008468057
-3.63465609976704 -13.4750987004905
-3.64642884886687 -13.4731891138699
-3.65535897928522 -13.4703221471485
-3.66214205724111 -13.4666874548937
-3.66744868309839 -13.4628485609903
-3.67161690406641 -13.4589997838216
-3.67507908972831 -13.455333786339
-3.67813197523749 -13.451738856429
-3.68110432401062 -13.4481151961835
-3.68412635145414 -13.4443581922231
-3.68716846349838 -13.4405083804085
-3.69017080205516 -13.4367582576854
-3.69315091085284 -13.4331495107187
-3.69595441877665 -13.4297475243407
-3.69841404955692 -13.4266523122202
-3.70047283124682 -13.4238478052167
-3.70214087322705 -13.4215341211176
-3.70342980803678 -13.4196586158744
-3.70428746299851 -13.4184095613628
-3.70485816582658 -13.417577695954
-3.70523795977473 -13.4170236492821
-3.7054907574345 -13.4166545933387
-3.70565910116803 -13.4164086965637
-3.70577131829139 -13.4162447624465
-3.70584629728472 -13.4161353257304
-3.70589665248506 -13.4160620521991
-3.70593086103801 -13.4160126646972
-3.70595467575469 -13.4159788905297
-3.70597210798329 -13.4159550660205
-3.7059860894429 -13.4159372046579
-3.70599897255069 -13.4159223182539
-3.70601292785274 -13.4159079063622
-3.70603030121413 -13.4158915538243
-3.70605401890551 -13.4158705189279
-3.70608806848618 -13.4158412894724
-3.70613818205563 -13.4157989473114
-3.70621279161304 -13.4157364214292
-3.70632445585796 -13.4156432210421
-3.70649196330292 -13.4155037378172
-3.70674350499565 -13.4152945992511
-3.70712141054101 -13.4149807740897
-3.70768927740705 -13.4145096707184
-3.70854266396193 -13.4138023725361
-3.70982518701011 -13.4127403800383
-3.7117526713518 -13.4111457644426
-3.71464948768486 -13.4087513598108
-3.71900312450535 -13.4051560140423
-3.72554623320989 -13.3997573695312
-3.7353799136275 -13.3916509341924
-3.75015903013919 -13.3794785744762
-3.77237067548012 -13.3612009278945
-3.78713581912672 -13.3489979774654
-3.79693920807671 -13.3408252529873
-3.80343060215761 -13.3353135048255
-3.80770240372727 -13.3315392867694
-3.81047349377011 -13.3288702745911
-3.81221020351733 -13.3268593047827
-3.81320479160669 -13.3251694659908
-3.81362463103636 -13.323517643736
-3.81354037949088 -13.3216270867016
-3.8134087470536 -13.3203535270566
-3.8132580299539 -13.3194836047943
-3.81307968186577 -13.3188715636776
-3.81287732644068 -13.3184148695967
-3.81265054637112 -13.3180370114396
-3.81242845085323 -13.3176746677488
-3.81217365803297 -13.3174346405115
-3.81186011422478 -13.3172767170998
-3.81148549726197 -13.3171744394687
-3.81110449617144 -13.317110656697
-3.81075389553813 -13.3170746969338
-3.81047559710841 -13.3170605337841
-3.81030685839523 -13.3170658021952
-3.8102529197211 -13.3170913618351
-3.81030470199244 -13.3171415186017
-3.81048773787387 -13.3170571634302
-3.81081600892717 -13.3168241696696
-3.81132794087016 -13.3164034997711
-3.81210968589072 -13.3155571925394
-3.81325916237604 -13.3141434383674
-3.81490254234605 -13.3120928857923
-3.81713138657173 -13.3093969720603
-3.81984989310258 -13.3061065224686
-3.82297738184911 -13.3023402402253
-3.82645155005306 -13.2981371110776
-3.83018435157701 -13.2936304412156
-3.83411444271358 -13.2889026554366
-3.83818004838183 -13.283999163753
-3.84229174663727 -13.2791034016007
-3.84645195726417 -13.2742326248246
-3.8506039926506 -13.2692407783346
-3.85463934063388 -13.264296514845
-3.85848030120678 -13.2592414544187
-3.86218464168876 -13.2542336642302
-3.86577031808911 -13.249439126052
-3.86925213153348 -13.2447245424258
-3.87266104086866 -13.239970031597
-3.8761334650143 -13.2350490061122
-3.8798165029968 -13.2299744947097
-3.88385906704761 -13.2245662938675
-3.88816785321269 -13.2189233033705
-3.89269436347489 -13.2131050841228
-3.89737628196023 -13.2071418313215
-3.90212700022353 -13.2012069398526
-3.90687148277197 -13.1953110691089
-3.91156728480331 -13.1894714101924
-3.91614694394363 -13.1837145657608
-3.92059072566079 -13.1780810246762
-3.92487285153536 -13.172464427688
-3.9289907869527 -13.1669287576776
-3.93293118418679 -13.1615515513543
-3.93671808745409 -13.1564368979247
-3.94034971250545 -13.1515653699288
-3.94376450598683 -13.1469582829413
-3.94694851771592 -13.142681257513
-3.94981532038937 -13.1388552011317
-3.95227556133534 -13.1355090862836
-3.95430600509094 -13.1327522935409
-3.95587836346751 -13.130457947902
-3.95700498091017 -13.1287441453797
-3.95774090778553 -13.1274912580944
-3.95817634936655 -13.1266568630793
-3.95840140600838 -13.126101182685
-3.95848758639446 -13.1257311160407
-3.95849984738412 -13.1254846472565
-3.95847388267003 -13.1253204929087
-3.95843895868141 -13.1252111491006
-3.9584060189272 -13.1251383002088
-3.95838633594695 -13.1250897419012
-3.95837659683913 -13.1250573344545
-3.95837516717366 -13.1250356471148
-3.95838179875388 -13.1250210524839
-3.95838079188332 -13.125011106146
-3.95837197834113 -13.1250041351719
-3.95835387920625 -13.124998973774
-3.95834025857105 -13.1249947565464
-3.95832883025671 -13.1249907706018
-3.95831766795624 -13.1249863617142
-3.95830489276532 -13.1249807805907
-3.95828835693737 -13.1249730991716
-3.95826527623643 -13.1249620288968
-3.95823176622089 -13.124945714138
-3.95819900856992 -13.1249214188471
-3.95817830485998 -13.1248850743993
-3.95816617234486 -13.1248305917058
-3.95816056872968 -13.1247488546727
-3.95816055240079 -13.1246261460054
-3.95816611808527 -13.1244419189691
-3.95819502003019 -13.1241653104923
-3.95825212173969 -13.1237499767168
-3.95834703520541 -13.123293831295
-3.95851254806655 -13.1227204512887
-3.95877651464935 -13.1219337647337
-3.95920017447709 -13.1208019784209
-3.95987163881529 -13.1193029743947
-3.96092072604249 -13.1171856010769
-3.96240625794349 -13.114430112136
-3.96430914301636 -13.1110773438172
-3.96661326250434 -13.1072355690862
-3.96931956144016 -13.1029311242844
-3.97244621612555 -13.0981128276877
-3.97596442266061 -13.0928109150394
-3.97975990129974 -13.0871420896165
-3.98378186352094 -13.0809940786158
-3.98798398918732 -13.0743418301348
-3.99228301123908 -13.0674108002691
-3.99664560629002 -13.0602122623573
-4.00101551258293 -13.0528801377912
-4.00535451698233 -13.0453584904343
-4.00960238518489 -13.0377271242591
-4.01376763660944 -13.0300474590916
-4.01786170591086 -13.0223728229891
-4.02195131482601 -13.0145898907417
-4.02600153226857 -13.0067346877175
-4.03007170162618 -12.9988311320406
-4.03422452507494 -12.9908950432599
-4.03835162066379 -12.9827692940011
-4.0423234557248 -12.974432478863
-4.04615255878155 -12.965995316354
-4.04986105584993 -12.9573842348106
-4.0534507890516 -12.9486639980262
-4.05695405921611 -12.9397136112474
-4.06035498750821 -12.9307084883186
-4.06365410182724 -12.9216474007943
-4.06683480372035 -12.9126872149003
-4.06987737923437 -12.9038342294769
-4.07273887161635 -12.8951126869992
-4.0754131192145 -12.886568852466
-4.07789608986499 -12.8782787776017
-4.08016836740061 -12.8703610149226
-4.08219191410376 -12.8629965016627
-4.08390363406752 -12.8564588622672
-4.08559158966521 -12.8496527815431
-4.08753983629464 -12.841437980273
-4.0900762344497 -12.8304381625478
-4.09214768529424 -12.8213427848064
-4.09410278900831 -12.8126280242118
-4.09627055025008 -12.8028338138614
-4.09901577647308 -12.7903192376218
-4.10125323118177 -12.7790174960449
-4.10307355950929 -12.7687100688115
-4.1045476428681 -12.7591775270058
-4.10572174173392 -12.7503302698308
-4.1065916309891 -12.7421935085276
-4.10721960976993 -12.7347439899892
-4.1076945391015 -12.7280735934428
-4.10806270821174 -12.7224047412461
-4.10835243716129 -12.7179601588883
-4.10854521564721 -12.7150001823694
-4.10867348531042 -12.7130288988511
-4.108758831905 -12.7117160434846
-4.10881562076335 -12.7108416644897
-4.10885340304623 -12.7102592621917
-4.1088785387705 -12.7098712668578
-4.10889525953823 -12.7096126670863
-4.10890637577346 -12.7094401423015
-4.10891376227124 -12.7093247858727
-4.10891866332151 -12.7092472729012
-4.10892189573488 -12.7091946150652
-4.10892400956001 -12.7091579946221
-4.10892536148822 -12.7091312730791
-4.10892617939215 -12.7091099739701
-4.10892659443688 -12.7090905242187
-4.1089266832183 -12.7090696723456
-4.1089264550247 -12.709043918673
-4.10892587628024 -12.7090089587524
-4.10892485001712 -12.7089589315747
-4.10892319901035 -12.7088854471757
-4.10892065158909 -12.7087761972147
-4.1089167739778 -12.7086128851493
-4.1089109122829 -12.7083681356773
-4.10890208430101 -12.708000956973
-4.10888879784337 -12.7074498330537
-4.1088688216739 -12.7066224199025
-4.10883879836766 -12.7053800963753
-4.10879367436086 -12.7035147242898
-4.10872584818843 -12.7007137844091
-4.10862391287926 -12.6965080109252
-4.10847071009637 -12.6901927635742
-4.10824045748173 -12.6807100012222
-4.10789441381477 -12.6664709807557
-4.10737433665795 -12.6450901096379
-4.10659271202208 -12.6129852539528
-4.10526664866781 -12.5878921331677
-4.10366068575717 -12.5676166306764
-4.10190819585367 -12.550436765878
-4.10000015190202 -12.5353167126645
-4.09825452185903 -12.5217332128876
-4.0967475262846 -12.5094204691784
-4.09537692130974 -12.4981580799286
-4.09408022507347 -12.4877341079853
-4.09295875986241 -12.4784121037372
-4.09197515455843 -12.470305221845
-4.09118250805796 -12.4637302115775
-4.09064924359578 -12.4592604682181
-4.09028562088059 -12.4561471388474
-4.09003044452194 -12.4538686138202
-4.08984077371567 -12.4520431529691
-4.08968468836985 -12.4503649181774
-4.08953592656222 -12.4485527422281
-4.08936945037408 -12.4463030123041
-4.08915724433943 -12.4432388137418
-4.08886359797973 -12.4388467689935
-4.08843909450914 -12.4323910402396
-4.08781229661509 -12.4227900357035
-4.08687772546617 -12.408435216392
-4.08547810622941 -12.3869215863851
-4.08337790310911 -12.3546447666927
-4.08192224614132 -12.330484149147
-4.08079890251445 -12.3113968600549
-4.0795329391581 -12.2951900086116
-4.07791130209159 -12.2803207898774
-4.07623289572746 -12.2653030092801
-4.07461888515741 -12.2487930721017
-4.0736889811515 -12.2293648923335
-4.07286811612076 -12.2104022362309
-4.07215619712074 -12.1919042940719
-4.07155314720527 -12.1738702846888
-4.07105890000363 -12.1562994454251
-4.07067339699649 -12.1391910232242
-4.07039660255774 -12.1225442884825
-4.0702284793208 -12.106358539312
-4.07016901041741 -12.0906330791103
-4.07021819189988 -12.075367249459
-4.07037602922778 -12.0605603920095
-4.07064253723718 -12.0462118858509
-4.07101775264974 -12.0323211150432
-4.07150170818288 -12.0188874882422
-4.07209446261222 -12.005910433577
-4.07279607385825 -11.9933894004933
-4.07360662730419 -11.9813238608044
-4.07452620514881 -11.9697133001268
-4.07555491334785 -11.9585572238546
-4.07669286435601 -11.9478551566566
-4.07794018539152 -11.9376066453913
-4.07929700695606 -11.9278112490574
-4.0807634884708 -11.9184685577761
-4.08233978920041 -11.9095781721791
-4.08402608001146 -11.9011397135463
-4.08582254208944 -11.8931528199422
-4.08772937827782 -11.8856171559166
-4.08974680347377 -11.8785324068499
-4.09187503277203 -11.8718982618889
-4.09411430351095 -11.8657144382648
-4.09646486304231 -11.8599806786478
-4.09892697047696 -11.8546967398604
-4.10150089802309 -11.8498623897313
-4.10418692780557 -11.8454774325013
-4.10698535647804 -11.8415416773321
-4.10989648931432 -11.8380549507915
-4.11292065138696 -11.8350171089452
-4.11605817147956 -11.8324280241208
-4.11930940203675 -11.8302875909337
-4.12267469544918 -11.828595712013
-4.12615442554909 -11.827352317601
-4.12974897456664 -11.8265573504612
-4.13345874049081 -11.8262107893917
-4.13728413015876 -11.8263126061246
-4.14122556314667 -11.8268628030529
-4.14528347666776 -11.8278614160346
-4.14945831735108 -11.8293084836079
-4.15375054645411 -11.8312040610063
-4.15816064115741 -11.8335482398732
-4.16268907950054 -11.8363411085236
-4.16733636326315 -11.8395827912307
-4.17210300742004 -11.8432734266476
-4.17698953314773 -11.8474131704142
-4.18199647955289 -11.8520021996057
-4.1871244042812 -11.8570407016627
-4.19237386702332 -11.8625289009418
-4.19774544613778 -11.8684670233111
-4.20323973365101 -11.8748553278048
-4.20885733930139 -11.8816940775406
-4.21459887850705 -11.8889835684674
-4.22046498262671 -11.8967241152648
-4.22577771669064 -11.8991749469407
-4.23075842973253 -11.8979191524659
-4.23552214931948 -11.8942537976905
-4.2401642064681 -11.8889047736107
-4.24474263837755 -11.8824833745312
-4.24930477468366 -11.8752537429544
-4.25387838268491 -11.8675120966114
-4.25847634471538 -11.859468883401
-4.26308201164182 -11.8512840172276
-4.26771366023742 -11.8429261813564
-4.27232666662809 -11.8345025883542
-4.27695736676569 -11.8261094227493
-4.28161143058747 -11.817680471731
-4.28628164921639 -11.8091435290873
-4.29092989229979 -11.8005757936308
-4.2955479681881 -11.7920493045097
-4.3001057868362 -11.7834755102335
-4.30456312708706 -11.7749254372141
-4.30889648946501 -11.7663065829607
-4.31296060169168 -11.7578499262959
-4.3167498773522 -11.7494786200899
-4.32027974546525 -11.741130104472
-4.32358925919598 -11.7329131546321
-4.32671401215593 -11.7247910817461
-4.32972578076236 -11.7169105923121
-4.33271096915806 -11.7092913617806
-4.33568423013098 -11.7019968468948
-4.33865821152105 -11.6951448999705
-4.34146069192451 -11.689095030549
-4.34384013426775 -11.6841736161854
-4.34542335846165 -11.6808960999336
-4.34647679857013 -11.6787133768356
-4.34717773498933 -11.6772597504914
-4.34764412200432 -11.6762916752589
-4.34795444207564 -11.6756469675388
-4.3481609253288 -11.6752176106996
-4.34829831233866 -11.6749316714071
-4.34838972615526 -11.6747412415794
-4.34845055089324 -11.6746144284379
-4.3484910238931 -11.6745299709395
-4.34851795411164 -11.6744737261765
-4.34853587107112 -11.6744362651662
-4.34854779442737 -11.6744113194503
-4.34855572364477 -11.6743947084176
-4.3485610011554 -11.674383647358
-4.34856451432471 -11.6743762731463
-4.34856685102493 -11.6743713687947
-4.34856840919948 -11.6743681028199
-4.34856944230355 -11.6743659267286
-4.3485701314138 -11.6743644726472
-4.34857058992707 -11.6743635025455
-4.34857089591684 -11.6743628638682
-4.34857109820334 -11.6743624367791
-4.34857123510256 -11.6743621512584
-4.34857132714795 -11.6743619618547
-4.34857138471564 -11.674361836372
-4.34857142285762 -11.6743617516536
-4.34857144781855 -11.6743616943433
-4.34857146436692 -11.6743616549796
-4.34857147769681 -11.6743616270501
-4.3485714870716 -11.6743616069585
-4.34857149648054 -11.6743615904032
-4.34857150551933 -11.674361574577
-4.34857151552303 -11.6743615568381
-4.34857152848678 -11.6743615343343
-4.34857154641227 -11.674361503281
-4.34857157221653 -11.6743614582878
-4.34857161041961 -11.6743613875318
-4.3485716669102 -11.6743612842208
-4.34857175223852 -11.67436113116
-4.34857187599986 -11.6743609012226
-4.348572064246 -11.6743605561589
-4.34857234789129 -11.6743600402932
-4.34857277863993 -11.6743592665771
-4.34857342288935 -11.6743581022081
-4.34857438929535 -11.6743563571684
-4.34857584177348 -11.6743537365643
-4.34857802601296 -11.6743497977407
-4.34858130690555 -11.6743438845054
-4.34858623541286 -11.6743350071767
-4.34859364803022 -11.6743216799001
-4.34860478407553 -11.6743016606429
-4.34862152120999 -11.6742716051483
-4.34864668041267 -11.6742264732215
-4.34868449125438 -11.6741587053003
-4.34874131533445 -11.674056947489
-4.34882671908023 -11.673904156734
-4.34895507014639 -11.6736747220803
-4.34914796793744 -11.6733302119961
-4.34943787630098 -11.6728129034053
-4.34987358488187 -11.6720361299452
-4.3505284147838 -11.6708697547494
-4.35151256218664 -11.6691183617464
-4.35299164500783 -11.6664885191928
-4.3552145722629 -11.6625396346683
-4.35855542379355 -11.6566101106609
-4.36357642320869 -11.6477065147093
-4.37112251535172 -11.6343371509013
-4.38246359817844 -11.6142621284277
-4.39950820401904 -11.5841180834025
-4.4251246789109 -11.5388547115327
-4.46362388038162 -11.4708886298555
-4.49128050024337 -11.4232692730468
-4.51140332819147 -11.3885210196531
-4.52650422019423 -11.362162152652
-4.53843491199098 -11.3414514931818
-4.54821092544002 -11.3245941667758
-4.55650199880401 -11.3102733734936
-4.56387934281125 -11.2975972898502
-4.57054176870472 -11.2861171439117
-4.57668550068095 -11.2755845243258
-4.58236901944005 -11.265742279665
-4.58753973193228 -11.2567839098198
-4.59215964657963 -11.2487160062578
-4.59609807727418 -11.2418618426343
-4.60001781060807 -11.2350730837433
-4.60457847323982 -11.2272123383823
-4.61054755618191 -11.2169626330252
-4.61892956092863 -11.2026067316731
-4.62624117570037 -11.1902817976286
-4.63280468888953 -11.1794303791664
-4.6388669552015 -11.1692394289708
-4.64455683021308 -11.1595090291304
-4.65005823536581 -11.1499489321201
-4.65545609971014 -11.14046491741
-4.66073383252057 -11.1311430083418
-4.66588828180691 -11.1217613854278
-4.67082826091555 -11.1124232314386
-4.67552740586867 -11.1029040147885
-4.67998609045882 -11.0932838599499
-4.68418103970651 -11.083458479115
-4.68812868504536 -11.0734567036344
-4.69197201586669 -11.063277818365
-4.69571874742023 -11.0530589144718
-4.69937713969294 -11.0427628955732
-4.70299105595686 -11.0325072435948
-4.70654641705943 -11.0224161910172
-4.7100529269026 -11.0124740604737
-4.71356251781962 -11.0026901218327
-4.71706037568044 -10.9931001510307
-4.72039516503415 -10.9836049167804
-4.7235562788838 -10.9742885638148
-4.72648708827178 -10.9652652084486
-4.72914263512315 -10.9565305537726
-4.73144847368622 -10.9481286717573
-4.73328810849454 -10.9401593887878
-4.73443297226991 -10.9327950055716
-4.73465528625483 -10.9259741768286
-4.7336897554655 -10.9193916351629
-4.73113844666134 -10.912782017023
-4.72876863367639 -10.9065819489285
-4.72658005670483 -10.9007911607451
-4.72457247426559 -10.8954094028872
-4.72274566181518 -10.8904364486446
-4.7210994202938 -10.8858720866583
-4.71963356961064 -10.8817161272451
-4.71834794845419 -10.8779683973499
-4.71724241480006 -10.8746287230326
-4.71631684533996 -10.8716969734273
-4.71557113782563 -10.8691730129148
-4.71500521130881 -10.8670567476786
-4.71461900394401 -10.8653480794735
-4.71441246854744 -10.8640469368179
-4.71438559381227 -10.863153262844
-4.71453836387043 -10.8626670240436
-4.71487080347227 -10.8625881989751
-4.71538294444288 -10.8629167824729
-4.71607484720286 -10.8636527891132
-4.71694658070175 -10.8647962479957
-4.71799824923619 -10.8663472157512
-4.71922996634701 -10.8683057520051
-4.72064186612408 -10.8706719355701
-4.72223410764543 -10.8734458701096
-4.72400686030703 -10.8766276784776
-4.72596032665045 -10.8802174936864
-4.72809471662411 -10.8842154678722
-4.73041026796038 -10.8886217700674
-4.73290723762408 -10.8934365869773
-4.73558589722116 -10.8986601242284
-4.73844654241133 -10.9042926057419
-4.74030327660122 -10.9080383275404
-4.74150219776675 -10.910524840887
-4.74226187833412 -10.9121687342662
-4.74274380093128 -10.9132454292712
-4.74304588092001 -10.9139353105908
-4.74321895155668 -10.914353962488
-4.74330896252838 -10.9145715229704
-4.74334787086903 -10.9146244497705
-4.7433590454429 -10.914689102252
-4.74334436501712 -10.9147763099277
-4.74331817875873 -10.9149006902059
-4.74327608078626 -10.9150830740799
-4.74321098299782 -10.9151865176634
-4.74312875372032 -10.9152283624679
-4.74303236744237 -10.9152156119345
-4.74292241955776 -10.9151461344194
-4.74283086098258 -10.9150082880549
-4.74277592548819 -10.9147789738541
-4.74276517760917 -10.9144197815343
-4.74281362988044 -10.9138705290336
-4.74296307011074 -10.9130391910271
-4.7432554608139 -10.911786480902
-4.7437568307491 -10.9099025299956
-4.74453472909695 -10.9072392046823
-4.74560234545632 -10.9038527717588
-4.74690390643843 -10.8998458603384
-4.74842298962474 -10.8953846432555
-4.750146156121 -10.8903916767314
-4.75204385764503 -10.8850354222448
-4.75413273904497 -10.8792559781015
-4.75632707020511 -10.8732575430848
-4.7585588626786 -10.8672076176116
-4.76078326732001 -10.8610975802824
-4.76297099560852 -10.8550762393194
-4.76508658605595 -10.8491397651409
-4.76713289415297 -10.8432985442642
-4.76908430534106 -10.8375789243517
-4.77088240105277 -10.8321951249706
-4.77249343564609 -10.8272501240051
-4.7738689858965 -10.822752921247
-4.77498826865107 -10.8187875484085
-4.77583784015764 -10.8153596317193
-4.77645975865346 -10.8125648520187
-4.77689141524759 -10.8104374609316
-4.77717181702158 -10.8089560328934
-4.77734815400194 -10.8078723716536
-4.77745010378394 -10.8070049144326
-4.77751163320287 -10.8063758289535
-4.77754310483447 -10.8058797200616
-4.77756661934393 -10.8054334701852
-4.77758614007245 -10.8049623122424
-4.77760495649846 -10.8043873104158
-4.77760941300765 -10.8036121304198
-4.77758344290113 -10.8025068968128
-4.77750586221292 -10.8008864373637
-4.77734679306042 -10.7986467667165
-4.77707947015161 -10.7957476417239
-4.7767261754026 -10.7923733353622
-4.77636199405507 -10.7886284961533
-4.77612744646715 -10.7845557026787
-4.77620169405924 -10.78014259952
-4.77678221675385 -10.7754873009803
-4.77798353095976 -10.7708148386899
-4.77978916925189 -10.7663473718846
-4.78188075049082 -10.7625089035064
-4.78366847897953 -10.7596613133762
-4.78545319957075 -10.7573275266207
-4.78753525503804 -10.7551165425097
-4.79026502330776 -10.7526579263938
-4.79410187779663 -10.7495397684961
-4.79969150749393 -10.745239656559
-4.8079745589887 -10.7390371575183
-4.82034494794799 -10.7298930969637
-4.83888441506697 -10.7162754984918
-4.85458749554593 -10.7049476955807
-4.86838139075375 -10.6951843133875
-4.88088884364746 -10.6865220844936
-4.89254973758832 -10.6786822161956
-4.90367831892712 -10.6713562056765
-4.91438151987785 -10.6644891347467
-4.92479559739337 -10.6579354841294
-4.93500815145246 -10.6514347407033
-4.94500560595038 -10.6450702519896
-4.95487271786009 -10.6389481919444
-4.96458823381133 -10.6328803650691
-4.97417265053498 -10.6266876492187
-4.98355713446235 -10.6205050012252
-4.99277373461332 -10.6143015624844
-5.00182626934582 -10.6082104958371
-5.01057321128022 -10.6022162848958
-5.01897296044545 -10.5963196423041
-5.02702640939376 -10.5907051330023
-5.03462570496429 -10.585437080845
-5.04175475613467 -10.5804703653649
-5.04840241201583 -10.5758103531674
-5.05451015156799 -10.5715137935678
-5.06011358151646 -10.5675308271585
-5.0652307197798 -10.5638641333977
-5.06981455728421 -10.5605693938782
-5.07386287954269 -10.5575970947325
-5.07736744117336 -10.5549517518851
-5.08026212676997 -10.5526926597224
-5.08246227715841 -10.550776315688
-5.08396816002577 -10.5492166599405
-5.08483137483724 -10.547919879902
-5.08526446537319 -10.5468362153485
-5.08542439919461 -10.5459515995522
-5.08542217745757 -10.5451178320855
-5.0853078802008 -10.5443627172388
-5.08516317329717 -10.543727239974
-5.08504779915641 -10.5431049328951
-5.08499278993806 -10.5425590357208
-5.08505615827209 -10.5419980879562
-5.08529902302605 -10.5413281046094
-5.08579588922294 -10.5406043371037
-5.0866471844876 -10.5397055253053
-5.08799617320011 -10.5384810804908
-5.09001941476953 -10.5368933616571
-5.09283876489538 -10.5348438575403
-5.09644096577249 -10.5323241897737
-5.10067543593349 -10.5294147088174
-5.10554843314542 -10.5259629498474
-5.11108959212789 -10.5220605988917
-5.11718872184215 -10.5177238428738
-5.12377907684988 -10.5130635933228
-5.13075885937734 -10.5079690571507
-5.13792362384533 -10.50259168403
-5.14513369721455 -10.4970355308227
-5.15230748792929 -10.4912072046577
-5.15927319636776 -10.4853027129711
-5.16599219203711 -10.4791703027226
-5.17245158973303 -10.4727875395305
-5.17866209113731 -10.46609003673
-5.18462614750836 -10.4591281788257
-5.19042245737736 -10.4517405573674
-5.19611830861618 -10.4438619405466
-5.20171363665516 -10.435512317547
-5.20725872022454 -10.426632776119
-5.21277856895772 -10.4170756188855
-5.21815940684109 -10.4069146157549
-5.2232136162788 -10.3961223629544
-5.22779951090352 -10.384900698418
-5.23176387379684 -10.373379509176
-5.23508432147483 -10.3614710108709
-5.23774784409259 -10.3491900225221
-5.23981587243775 -10.3368239563822
-5.24143461641274 -10.3243109647814
-5.24272512259333 -10.3117320902375
-5.24382047833149 -10.2991573350758
-5.24483775264275 -10.2866573913387
-5.24584722426791 -10.2743154879541
-5.24683377965587 -10.2624088206943
-5.24776163378385 -10.2512875100727
-5.2485851214439 -10.2412657595481
-5.24927464722123 -10.2328419908686
-5.24974444384114 -10.2271173445261
-5.25007356314022 -10.2231327150785
-5.25031739587757 -10.2202205189479
-5.25051697069631 -10.2178928538838
-5.25070588018747 -10.2157597420163
-5.25091591064535 -10.2134638030755
-5.25118240323002 -10.2106203733648
-5.25155021466825 -10.2067530729473
-5.25208123417165 -10.2012139802512
-5.25286482729834 -10.1930750710827
-5.25403285542866 -10.1809727682742
-5.25578188484675 -10.1628794533624
-5.2584062465931 -10.1357637907191
-5.26234758345912 -10.0950828387668
-5.26741147350292 -10.0624953891333
-5.27302061084846 -10.0360591882171
-5.2789248804006 -10.0140250868231
-5.28480612689593 -9.99497946499324
-5.29059457417663 -9.97800938940639
-5.29633936615056 -9.9625329065147
-5.30201503583789 -9.94805081484811
-5.30758447796885 -9.93436450200824
-5.31305998822621 -9.92120767690692
-5.31848850409711 -9.9084027690383
-5.32385860363774 -9.89578091924276
-5.32913221352157 -9.88322067574162
-5.33423820859484 -9.87076166889197
-5.33917815779295 -9.85812548826386
-5.34389205692398 -9.84525529074628
-5.34823140184933 -9.83222251196666
-5.3521023849425 -9.81897086756347
-5.35546691664053 -9.80549113815409
-5.35830259181722 -9.79178614511517
-5.36059889408935 -9.77798846774054
-5.36238910374504 -9.76408116239285
-5.36377266379055 -9.75004565924755
-5.36484787122406 -9.73597592117494
-5.36572839806893 -9.72182615903035
-5.36651196591764 -9.70752044155311
-5.36724635762588 -9.69319120554642
-5.36793743560296 -9.678832937877
-5.36858377438472 -9.66440176983258
-5.36917642190699 -9.65005936701039
-5.3697142060643 -9.63579801768443
-5.37028762575418 -9.62162358504956
-5.3709259155284 -9.60762350071088
-5.37163557581397 -9.59386415354555
-5.3724519502285 -9.58041876375273
-5.37336106661649 -9.56754715209352
-5.37433091917097 -9.55542102623936
-5.37528927464241 -9.54432023812629
-5.3760610633334 -9.53486391920574
-5.37677616605706 -9.5254677636094
-5.37755492581522 -9.5145575562091
-5.37852838408779 -9.50030539849828
-5.3798603744874 -9.48032350161142
-5.38085007627729 -9.46277113339741
-5.3816808729068 -9.44698553670189
-5.38249256471083 -9.43032201193964
-5.38291722710766 -9.41462842048314
-5.38311041093605 -9.39941943045479
-5.38307099204037 -9.38450864134889
-5.38279233502537 -9.3697596159647
-5.38246299502555 -9.355180257837
-5.38202754478455 -9.34094091033523
-5.38147996909799 -9.32700087406884
-5.38087948539967 -9.31331935202282
-5.38024276589971 -9.29984860914882
-5.37963082372078 -9.28665996516638
-5.37904159282254 -9.27353702371521
-5.37845999639894 -9.26042513625339
-5.37788907071471 -9.24730499990975
-5.37738409018766 -9.23437367695743
-5.37696097682436 -9.22164212723542
-5.37663261792826 -9.20908777127349
-5.37642784081328 -9.19673447424135
-5.37639627523991 -9.18470677981107
-5.37663351030529 -9.17304978313894
-5.37728037525984 -9.16177019179074
-5.37847937080663 -9.15087095020958
-5.38041544396183 -9.14033496745182
-5.38309486834438 -9.13025626991566
-5.38631267779609 -9.12063799265614
-5.38978632337472 -9.11151016694982
-5.39329313286695 -9.10280074702606
-5.39664963885092 -9.0946417822249
-5.39976481613302 -9.08709004676218
-5.40252383145754 -9.08025379840582
-5.40485283548356 -9.07422717439091
-5.40674014098823 -9.06907246069404
-5.40818381626369 -9.06488077063498
-5.40924182027773 -9.06163656799151
-5.40999130240887 -9.05926531331666
-5.41050792654742 -9.0576209714683
-5.41086181946723 -9.05647829682333
-5.41112935547033 -9.0556458532741
-5.41135555072916 -9.05498417043016
-5.41156165582072 -9.05438239224192
-5.41176554378073 -9.05373969501058
-5.41196788031356 -9.05294840152733
-5.41216908690309 -9.05187594077756
-5.41235256899516 -9.05034263513801
-5.41249875264015 -9.04812508880801
-5.41256497045138 -9.04503603012416
-5.41252872750243 -9.04102690611186
-5.4123334763854 -9.03617977102004
-5.41196316862103 -9.03055351711073
-5.41142276544527 -9.02411001376323
-5.41078949832724 -9.01697568912872
-5.41009133583465 -9.00926174873055
-5.40931169386216 -9.00113301932527
-5.40840344967715 -8.99266809191937
-5.40728103546281 -8.9838389852777
-5.40582283982179 -8.97450645087418
-5.40393482573384 -8.96466464121687
-5.40156834747083 -8.95437313098197
-5.39871196456725 -8.94364965393055
-5.39542315119587 -8.93262382583588
-5.39175387164907 -8.92135784974452
-5.3877593487439 -8.90985693777883
-5.38347369480078 -8.898120454649
-5.37899975796505 -8.88629253181983
-5.3744086953802 -8.87430099573113
-5.36978559166684 -8.86196250939241
-5.36521013535807 -8.84913609808651
-5.3607027696369 -8.8357665486254
-5.3562617624799 -8.82209288657402
-5.3517620023919 -8.80821944114447
-5.34711894620265 -8.79428406776268
-5.34229120869367 -8.78038074316891
-5.33723995826172 -8.76654182354507
-5.33198965088006 -8.75292769687047
-5.32649761549048 -8.73950191723223
-5.32073095175271 -8.72622609607317
-5.31466099389192 -8.71301997136182
-5.3083257610085 -8.69998245184346
-5.30171862733875 -8.68703970466104
-5.29480403477133 -8.67413376808358
-5.28757876169037 -8.66121284615261
-5.28002094675614 -8.64828963920822
-5.27215366810378 -8.63537646007454
-5.26399838358295 -8.62245380346799
-5.2555785275495 -8.6094838239483
-5.24690664793445 -8.59640399935919
-5.23808742517132 -8.58331765265025
-5.22909983647251 -8.57019301596658
-5.21994498473055 -8.55702539815158
-5.21064626275411 -8.54381967078111
-5.20135421877695 -8.53067480633628
-5.19211962497783 -8.51758273376081
-5.18285155174872 -8.50441022634255
-5.17355435437483 -8.49116132624827
-5.16434519360188 -8.47794453447116
-5.15523833245065 -8.46475646825838
-5.14611389463299 -8.45141432144289
-5.1369667648437 -8.43789371894311
-5.1278720945788 -8.42432463466917
-5.11879614339976 -8.41062791879059
-5.10970831904807 -8.39678678530368
-5.10060967139359 -8.38274351731538
-5.09148260296789 -8.36845676085046
-5.08242245943694 -8.35401187997291
-5.07338449799342 -8.33936724393647
-5.06432770416511 -8.32443100243275
-5.05517425559641 -8.30912945271122
-5.04598142429548 -8.29351193637983
-5.03669894885799 -8.27747435397825
-5.02729512214173 -8.26097622880759
-5.01770099314176 -8.24393342942613
-5.0078996756134 -8.22618731166421
-4.99800758557548 -8.20788014442552
-4.98799133329865 -8.18902669907493
-4.97788071235899 -8.16963396993949
-4.9678409981716 -8.1499703450132
-4.95791485695114 -8.13007458587182
-4.94811361559691 -8.10992979725675
-4.93855371239145 -8.08977887678463
-4.92920719504523 -8.0695286806698
-4.92004839443556 -8.04911969670806
-4.91116732027977 -8.02871704686264
-4.90254935807807 -8.00825241999349
-4.89417371688048 -7.987630376588
-4.88612766156109 -7.96703059041622
-4.87840255931041 -7.94640197678074
-4.87099333617307 -7.92570536231735
-4.86391444478597 -7.90485668739486
-4.8572527315635 -7.88403116000579
-4.85103073411097 -7.8631733690654
-4.84526087478535 -7.84227274366801
-4.83996440717158 -7.82131154628803
-4.8350740681782 -7.79997714027283
-4.83065821507986 -7.77848059851757
-4.82671369601337 -7.75677134053975
-4.82323259283097 -7.73479663879607
-4.82023451216999 -7.71279429840331
-4.81770263068542 -7.69067924103053
-4.81563130644603 -7.66838100327135
-4.81400831676242 -7.64611667996119
-4.81288007443314 -7.62385779965823
-4.81224169559035 -7.60161030125288
-4.81210347655513 -7.57956628318485
-4.8124926117607 -7.55763368707891
-4.81337368046812 -7.53572236782617
-4.81476041623547 -7.51372900504242
-4.81660028836648 -7.49183801322628
-4.8189001977818 -7.46998298331276
-4.82166039124857 -7.44812026077226
Figure: Approximate posterior process of the model with NN drift on GPS data, along with the observations.
To further showcase the applicability of the method in real-world settings, we experiment with a GPS data set of a moving vehicle (data also from <cit.>). The aim is to model the 2D trajectory of the vehicle recorded from GPS coordinates over time. The data set is 106 minutes long and consists of 6373 observation points. We experiment with two models; both use two independent DPs learnt jointly (one for the latitude and one for the longitude direction). Similar to the setup in <cit.>, we split the data into chunks of 30 s and perform 10-fold cross-validation. For all models, a Gaussian likelihood is used with σ^2=0.01, which is not optimized.
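As a rough illustration of this evaluation protocol (ours, not the authors' code), the sketch below shows one way the 30 s chunking, the fold assignment and the Gaussian NLPD/RMSE metrics could be computed; the round-robin fold assignment and all function names are assumptions.

import numpy as np

def chunked_folds(t, n_folds=10, chunk_len=30.0):
    # Assign each observation to a 30 s chunk, then chunks to folds
    # round-robin (the exact assignment rule used in the paper is unknown).
    chunk_id = np.floor((t - t.min()) / chunk_len).astype(int)
    return [chunk_id % n_folds == k for k in range(n_folds)]

def nlpd_rmse(y, mu, var, noise_var=0.01):
    # Negative log predictive density and RMSE for Gaussian predictions
    # with the fixed (non-optimized) observation noise sigma^2 = 0.01.
    total_var = var + noise_var
    nlpd = 0.5 * np.mean(np.log(2.0 * np.pi * total_var)
                         + (y - mu) ** 2 / total_var)
    rmse = np.sqrt(np.mean((y - mu) ** 2))
    return nlpd, rmse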
First, we experiment with the model using a linear OU DP, in which the prior is incorporated in terms of an OU process initialized with θ=1.0, Q_c=0.1. After evaluating the data, the prior on the initial state is set to (0, 0.1). For optimizing the sites, the learning rate ρ is set to 0.5, and the prior DP parameter θ is optimized using the Adam optimizer with a 0.01 learning rate. The model gives NLPD -0.67 ± 0.19 / RMSE 0.13 ± 0.04.
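For concreteness, a minimal sketch of this linear OU prior, dx = -θ x dt + sqrt(Q_c) dW, and its exact one-step discretization is given below; the step size and the sampling helper are illustrative and not part of the paper's implementation.

import numpy as np

def ou_transition(theta=1.0, q_c=0.1, dt=1.0):
    # Exact discretization of dx = -theta * x dt + sqrt(q_c) dW:
    # x_{k+1} = a * x_k + w_k, with w_k ~ N(0, q).
    a = np.exp(-theta * dt)
    q = q_c / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * dt))
    return a, q

def sample_ou(n_steps, x0_mean=0.0, x0_var=0.1, **kwargs):
    # Draw one sample path, starting from the (0, 0.1) initial-state prior.
    a, q = ou_transition(**kwargs)
    x = np.empty(n_steps)
    x[0] = np.random.normal(x0_mean, np.sqrt(x0_var))
    for k in range(1, n_steps):
        x[k] = a * x[k - 1] + np.random.normal(0.0, np.sqrt(q))
    return x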
Next, similar to the finance example (<ref>), to give more flexibility, we experiment with the model using a neural network (NN) drift DP f_p. The drift of the DP is initialized to be an NN with one hidden layer of three nodes followed by a ReLU activation function, and Q_c is set to 0.1. The parameters of the NN are initialized from a unit Gaussian, and after evaluating the data, the prior on the initial state is set to (0, 0.1). The learning rate ρ is set to 0.5 and the prior DP parameters are optimized using the Adam optimizer with a 10^-2 learning rate. The model gives NLPD -0.82 ± 0.43 / RMSE 0.06 ± 0.03.
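A minimal sketch of such a drift network for a scalar state (one hidden layer with three ReLU units, parameters drawn from a unit Gaussian) is shown below; it is only meant to make the parameterization concrete and is not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
# One hidden layer with three units and a ReLU; all parameters ~ N(0, 1).
W1, b1 = rng.standard_normal((3, 1)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((1, 3)), rng.standard_normal(1)

def nn_drift(x):
    # Drift f_p(x) of the diffusion process, evaluated at a scalar state x.
    h = np.maximum(0.0, W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2).item()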
The model with an NN drift gives better NLPD and RMSE values, primarily because of the flexibility of the DP to model the trajectory and the possibility to adapt to the non-stationary behaviour in the state space. The posterior for the NN-drift model is shown in <ref>. The behaviour of the vehicle is different at different points in the input space (perhaps due to faster driving on highways and slower driving on smaller streets), which the model is able to capture by learning the parameters.
§.§ Comparison with NeuralSDEs
We also compare against the recently popular class of NeuralSDE methods <cit.>. These methods are variational inference algorithms with a broader scope than ours: the posterior process q is not restricted to be a linear DP but is characterized by a neural network drift. However, they rely on sample-based simulation methods for estimating the ELBO gradient and thus incur a large computational cost. Also, the convergence of optimization via stochastic gradient descent is often slow.
Empirically, we find that these methods need a delicate learning-rate scheduler to converge. In comparison, our method has a deterministic objective and intrinsically has an adaptive learning rate (a setup similar to various natural-gradient descent algorithms). Thus, it is fast and does not need fine-tuning. <ref> showcases the posterior of NeuralSDE along with those of the other two methods (similar to <ref>). From the figure, it can be noted that in fewer iterations the NeuralSDE posterior is similar to one of them, and with more iterations it gets closer to the other. For the experiment, we use 1000 training samples for each iteration step and the Adam optimizer with a 0.1 learning rate and an exponential learning-rate scheduler. The implementation is based on the code-base of <cit.>.
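Schematically, the optimization setup described here corresponds to a training loop of the following form (a sketch only, assuming a generic model object and a Monte Carlo ELBO estimator; the decay factor gamma and the iteration count are assumptions, and the actual implementation follows the cited code-base).

import torch

def train_neural_sde(model, elbo_fn, n_iters=2000, n_samples=1000):
    # Adam with a 0.1 learning rate and an exponential learning-rate
    # scheduler, as stated in the text; gamma and n_iters are illustrative.
    opt = torch.optim.Adam(model.parameters(), lr=0.1)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.999)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = -elbo_fn(model, n_samples)  # Monte Carlo estimate of the ELBO
        loss.backward()
        opt.step()
        sched.step()
    return model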
<ref> shows that while the NeuralSDE approach is general and eventually converges to the same optima as our method, in the particular case where the problem can be formalized in our setting, using our method for inference and learning is orders of magnitude faster.
§ AUTHOR CONTRIBUTIONS
The initial idea and motivation of this work were conceived by VA in discussion with PV. PV had the main responsibility of implementing the method and conducting the experiments with help from AS and VA. The first draft was written by VA and PV. All authors contributed to finalizing the manuscript.
|
http://arxiv.org/abs/2306.17763v1
|
20230630161336
|
The dynamics of crack front waves in 3D material failure
|
[
"Sanhita Das",
"Yuri Lubomirsky",
"Eran Bouchbinder"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.soft",
"nlin.PS",
"physics.class-ph",
"physics.comp-ph"
] |
Contributed equally
Contributed equally
[email protected]
Chemical and Biological Physics Department, Weizmann Institute of Science, Rehovot 7610001, Israel
Crack front waves (FWs) are dynamic objects that propagate along moving crack fronts in 3D materials. We study FW dynamics within a 3D phase-field framework that features a rate-dependent fracture energy Γ(v) (v is the crack propagation velocity) and intrinsic lengthscales, and quantitatively reproduces the high-speed oscillatory instability in the quasi-2D limit. We show that in-plane FWs feature a rather weak time dependence, with a decay rate that increases with dΓ(v)/dv>0, and largely retain their properties upon FW-FW interactions, similarly to a related experimentally-observed solitonic behavior. Driving in-plane FWs into the nonlinear regime, we find that they propagate slower than predicted by a linear perturbation theory. Finally, by introducing small out-of-plane symmetry-breaking perturbations, coupled in- and out-of-plane FWs are excited, but the out-of-plane component decays under pure tensile loading. Yet, including a small anti-plane loading component gives rise to persistent coupled in- and out-of-plane FWs.
The dynamics of crack front waves in 3D material failure
Eran Bouchbinder
July 31, 2023
========================================================
Introduction.—Material failure is a highly complex phenomenon, involving multiple scales, strong spatial localization and nonlinear dissipation. It is mediated by the propagation of cracks, which feature nearly singular stresses near their edges <cit.>. In brittle materials, they reach velocities comparable to elastic wave-speeds, hence also experience strong inertial effects. In thin, quasi-2D samples, a crack is viewed as a nearly singular point that propagates in a 2D plane and leaves behind it a broken line. In thick, fully-3D samples, a crack is a nearly singular front (line) that evolves in a 3D space and leaves behind it a broken surface. While significant recent progress has been made in understanding dynamic fracture in 2D <cit.>, our general understanding of dynamic fracture in 3D remains incomplete <cit.>.
A qualitative feature that distinguishes 2D from 3D material failure is the emergence of crack front waves (FWs) in the latter. FWs are compact objects that persistently propagate along crack fronts <cit.>. In the most general case, FWs feature both a component in the main crack plane and an out-of-plane component <cit.>. A linear perturbation theory of singular tensile cracks, featuring no intrinsic lengthscales and rate-independent fracture-related dissipation, predicts the existence of non-dispersive in-plane FWs, whose velocity is close to the Rayleigh wave-speed c__ R <cit.>. An extended linear perturbation theory also predicts the existence of non-dispersive out-of-plane FWs in the same velocity range <cit.>, albeit to linear order the in- and out-of-plane components are decoupled.
Here, we study FWs in a 3D theoretical-computational framework that has recently quantitatively predicted the high-speed oscillatory instability in 2D <cit.>. It is based on a phase-field approach to fracture <cit.>, where large scale elastic deformations — described by an elastic energy density e( u) (here u( x,t) is the displacement field) — are coupled on smaller scales near the crack edge to an auxiliary scalar field — the phase-field ϕ( x,t) — that mathematically mimics material breakage. The main merit of the approach is that the dissipative dynamics of ϕ( x,t) spontaneously generate the traction-free boundary conditions defining a crack, and consequently select its trajectory and velocity v. Moreover, it also incorporates intrinsic lengthscales near the crack edge — most notably a dissipation length ξ (sometimes termed the “process zone” size <cit.>) and possibly a nonlinear elastic length ℓ_ nl (embodied in e( u) <cit.>) — absent in singular crack models, and a rate-dependent fracture energy Γ(v) that accompanies the regularization of the edge singularity.
The theoretical-computational framework and the quasi-2D limit.— We consider a homogeneous elastic material in 3D, where L_z is the thickness in the z direction, L_y is the height in the tensile loading y direction and x is the crack propagation direction (we employ a treadmill procedure to obtain very long propagation distances using a finite simulation box length L_x <cit.>). We use a constitutively-linear energy density e( u) = (1/2)λ tr^2( E) + μ tr( E^2), with Lamé coefficients λ and μ (shear modulus), and where E = (1/2)[∇ u+(∇ u)^ T+(∇ u)^ T∇ u] is the Green-Lagrange metric strain tensor. The latter ensures rotational invariance, yet it introduces geometric nonlinearities (last term on the right-hand-side). However, the associated nonlinear elastic lengthscale ℓ_ nl remains small (unless otherwise stated <cit.>), such that we essentially consider a linear elastic material and the dissipation length ξ is the only relevant intrinsic lengthscale. The latter emerges once e( u) is coupled to the phase-field ϕ( x,t) <cit.>.
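To make the constitutive choice explicit, the following small numerical sketch (ours, not part of the original work) evaluates the Green-Lagrange strain and the constitutively-linear energy density from a displacement gradient; it assumes the second invariant enters as tr(E^2), the standard Hookean form.

import numpy as np

def green_lagrange_strain(grad_u):
    # E = 1/2 [grad u + (grad u)^T + (grad u)^T grad u]
    return 0.5 * (grad_u + grad_u.T + grad_u.T @ grad_u)

def energy_density(grad_u, lam, mu):
    # e = 1/2 * lam * tr(E)^2 + mu * tr(E^2)
    E = green_lagrange_strain(grad_u)
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)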
Applying this framework in 2D, L_z=0, the high-speed oscillatory instability — upon which a straight crack loses stability in favor of an oscillatory crack when surpassing a critical velocity close to c__ R — was predicted, in quantitative agreement with thin-sample experiments <cit.>. In Fig. <ref>a, we present a high-speed oscillatory instability in a thin 3D material, L_z>0, where all quantities — including the wavelength of oscillations — agree with their 2D counterparts. These results support the validity of the 3D framework as it features the correct quasi-2D limit.
Next, we aim at exciting FWs and studying their dynamics. We consider thick systems (with L_z/ξ≫1 and periodic boundary conditions along z), see Fig. <ref>b. Loading boundary conditions u_i(x,y=0,z) and u_i(x,y=L_y,z) are applied. In most, but not all, cases (see below), we apply tensile boundary conditions u_y(x,y=0,z)=-u_y(x,y=L_y,z)=δ/2, resulting in mode I cracks initially located at the y=L_y/2 plane. The tensile strain δ/L_y translates into a crack driving force G (energy release rate) <cit.>, which is balanced by a rate-dependent fracture energy Γ(v). The latter features dΓ(v)/dv>0, whose magnitude depends on the relaxation/dissipation timescale τ of the phase-field ϕ <cit.>, through the dimensionless parameter β≡τ c_ s/ξ (where c_ s is the shear wave-speed). The entire theoretical-computational framework depends on two dimensionless parameters, β and e_ c/μ, where e_ c is the onset of dissipation energy density <cit.>.
FWs are excited by allowing a steady-state crack front to interact with tough spherical asperities (one or more), see Fig. <ref>b. Each spherical asperity is characterized by a radius R and a dimensionless fracture energy contrast δΓ≡ΔΓ/Γ_0>0, where Γ_0≡Γ(v→0). The position of the asperities with respect to the crack plane, y=L_y/2, determines the type of perturbation induced, i.e. in-plane or coupled in- and out-of-plane perturbations. The resulting perturbed crack front is then described by an evolving line f(z,t)=(f_x(z,t),f_y(z,t)) parameterized by the z coordinate and time t (assuming no topological changes take place). Here, f_x(z,t) is the in-plane component and f_y(z,t) is the out-of-plane component, and an unperturbed tensile crack corresponds to f(z,t)=(vt,0).
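One simple way such an asperity could be encoded, sketched below purely for illustration (the actual phase-field implementation may introduce the toughness contrast through different model fields), is as a local enhancement of the fracture energy inside a sphere of radius R.

import numpy as np

def local_fracture_energy(x, y, z, center, R, gamma0, d_gamma):
    # Gamma = gamma0 * (1 + d_gamma) inside the spherical asperity and
    # gamma0 elsewhere, with d_gamma = DeltaGamma / Gamma_0 > 0.
    r2 = (x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2
    return np.where(r2 <= R**2, gamma0 * (1.0 + d_gamma), gamma0)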
The dynamics of in-plane FWs.—In-plane FWs are excited by placing a single asperity whose center coincides with the crack plane, y=L_y/2 (cf. Fig. <ref>b). The tough asperity locally retards the crack front, leading to a local increase in the front curvature and G <cit.>. The front then breaks the asperity (cf. Fig. <ref>b), leading to a subsequent velocity overshoot Δv_ os(t) ahead of the asperity (cf. Fig. <ref>a). To quantify in-plane FW dynamics, we employ v_x(z,t)≡∂_t f_x(z,t), typically with respect to ⟨ v_x(z,t)⟩≈v, where ⟨·⟩ corresponds to an average along z (unless otherwise stated). Strictly speaking, the physically relevant quantity is the normal front velocity, v__⊥(z,t)=v_x(z,t)/√(1+(∂_z f_x(z,t))^2). However, for our purposes here v_x(z,t) itself is sufficient.
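For completeness, the normal front velocity can be evaluated from a sampled front by finite differences, as in the following illustrative sketch (not the simulation code).

import numpy as np

def normal_front_velocity(v_x, f_x, dz):
    # v_perp = v_x / sqrt(1 + (d f_x / d z)^2), with the slope estimated
    # by centered finite differences along z.
    slope = np.gradient(f_x, dz)
    return v_x / np.sqrt(1.0 + slope ** 2)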
After Δv_ os(t) reaches a maximum, it decays to zero (cf. Fig. <ref>b) and a pair of in-plane FWs is generated. Each FW features an amplitude Δv_x(t) (defined as the crest-to-trough difference), a width Δz(t) (the corresponding crest-to-trough z distance) and a propagation velocity c__ FW (in the laboratory frame of reference), all marked in Fig. <ref>a. The dimensionless FW amplitude Δv_x(t)/⟨ v_x(z,t)⟩ is plotted in Fig. <ref>b. The FW inherits its scale from R, as shown in <cit.>.
A linear perturbation theory <cit.>, developed to leading order in |∂_z f_x(z,t)|≪1, predicted the existence of non-dispersive in-plane FWs, in the absence of intrinsic lengthscales (ξ→0) and for a rate-independent fracture energy (dΓ(v)/dv = 0). The theory predicts 0.94<c__ FW(v)/c__ R<1 (when v varies between 0 and c__ R). These predictions have been subsequently supported by boundary-integral method simulations of a rate-independent cohesive crack model <cit.>. In <cit.>, an effective crack propagation equation of motion has been conjectured for the dΓ(v)/dv≠0 case, suggesting that for dΓ(v)/dv>0 in-plane FWs undergo some form of attenuation during propagation.
As materials feature a rate-dependent fracture energy Γ(v), it is important to shed light on this physical issue. Our framework naturally enables it as dΓ(v)/dv is directly controlled by β. The evolution of the FW amplitude Δv_x(t)/⟨ v_x(z,t)⟩ presented in Fig. <ref> corresponds to very weak rate dependence, shown in Fig. <ref>a for β = 0.28. Such a flat Γ(v) is characteristic of nearly ideally brittle materials such as silica glass (cf. the experimental data in Fig. 2b of <cit.>). Δv_x(t)/⟨ v_x(z,t)⟩ in this case, presented again in the inset of Fig. <ref>, reveals a weak linear attenuation proportional to 1-(t-t_0)/T, where c_ sT/ξ≃1210. However, while our system width L_z is large enough to resolve FW propagation distances several times larger than their characteristic width Δz (cf. Fig. <ref>a), the overall propagation time Δt prior to FW-FW interaction (through the periodic boundary condition, to be discussed below) is Δt∼ O(100) (cf. Fig. <ref>b), implying Δt≪T. Consequently, the presented results cannot tell apart an exponential decay from a linear one as exp[-Δt/T]≃1-Δt/T for Δt≪T.
To address this point, and more generally the effect of the magnitude of dΓ(v)/dv on in-plane FW dynamics, we increased β by an order of magnitude, setting it to β = 2.8. The resulting Γ(v), shown in Fig. <ref> (previously reported for our model in 2D <cit.>), indeed reveals a significantly larger dΓ(v)/dv, nearly a factor 5 larger than that for β = 0.28. The emerging dΓ(v)/dv is similar to the one observed in brittle polymers (e.g., PMMA, cf. Fig. 2a in <cit.>) and in brittle elastomers (e.g., polyacrylamide, cf. Fig. 2B in <cit.>). The corresponding Δv_x(t)/⟨ v_x(z,t)⟩ is shown in the inset of Fig. <ref>, again following a linear attenuation proportional to 1-(t-t_0)/T, this time with c_ sT/ξ≃208. Since in this case Δt is comparable to T, the results support a linear decay, in turn implying that in-plane FWs may propagate many times their characteristic width Δz even in materials with a finite dΓ(v)/dv. Moreover, we note that the decay rate 1/T varies between the two β values by a factor that is comparable to the corresponding variability in dΓ(v)/dv, indeed suggesting a relation between these two physical quantities <cit.>.
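The attenuation times quoted above can be extracted by fitting the measured amplitude to the linear form 1-(t-t_0)/T; a minimal sketch of such a fit (with the intercept left free for robustness) is:

```python
# Sketch: estimating the attenuation time T from a fit of the FW amplitude to
# A(t)/A(t0) ~ 1 - (t - t0)/T. The intercept is left free rather than pinned to 1.
import numpy as np

def fit_attenuation_time(t, amplitude, t0):
    a0 = np.interp(t0, t, amplitude)                      # amplitude at t0
    slope, _ = np.polyfit(t - t0, amplitude / a0, 1)      # linear fit
    return -1.0 / slope                                   # T from the slope
```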
We next consider the FW velocity c__ FW and the possible effect of Δv_x(t)/⟨ v_x(z,t)⟩ on it. As explained above, the linear perturbation theory of <cit.> predicts 0.94<c__ FW/c__ R<1. Consequently, we expect our excited in-plane FWs to feature c__ FW/c__ R within this range when Δv_x(t)/⟨ v_x(z,t)⟩ is small. This is indeed the case in Fig. <ref>, where the dimensionless FW amplitude is controlled by systematically varying v, and the asperity parameters R and δΓ (in fact, we find that the amplitude varies linearly with δΓ for fixed v and R <cit.>). However, when the amplitude is no longer small, apparently beyond the linear perturbation regime, we find that c__ FW/c__ R decreases below 0.94, indicating that nonlinear effects tend to slow down in-plane FWs.
Finally, we take advantage of the z-periodic boundary conditions to study FW-FW interactions. In Fig. <ref>a, we present the interaction dynamics between the in-plane FWs previously shown in Fig. <ref>a. It is observed that the FWs retain their overall shape after the interaction, yet during the interaction they do not feature a linear superposition. This behavior is quantified in Fig. <ref>b, where Δv_x(t)/⟨ v_x(z,t)⟩ is plotted before, during and after FW-FW interaction (before and after the interaction it is identical for the two non-interacting FWs). In this case, it is observed that before and after the FW-FW interaction, each FW follows the very same weak linear decay previously presented in Fig. <ref>b (see superimposed dashed line) and nearly drops to zero during the interaction. This soliton-like behavior is reminiscent of similar experimental observations made in relation to coupled in- and out-of-plane FWs <cit.>, which are discussed next.
Coupled in- and out-of-plane FWs.—Experimentally, FWs have been observed through their fractographic signature on postmortem fracture surfaces <cit.>, i.e. the observed FWs featured nonlinearly coupled in- and out-of-plane components, where both f_x(z,t) and f_y(z,t) are non-zero and apparently propagate at the same c__ FW. FWs in the experiments were excited by huge perturbations, 3-4 orders of magnitude larger than the out-of-plane component of the generated FWs <cit.>, which in itself was comparable to the fracture dissipation length ξ. For example, asperity sizes of 100-1000μm gave rise to FWs with an out-of-plane component of 0.1μm in silica glass <cit.>, whose fracture dissipation (process zone) size is estimated to be in the tens of nanometers range <cit.>. Coupled in- and out-of-plane FWs are also spontaneously triggered by micro-branching events <cit.>, likely to be “large perturbations” as well.
Due to computational limitations — most notably on the magnitude of L_y — we are not able to resolve this huge span in scales between the triggering perturbation and the resulting out-of-plane component. Consequently, the out-of-plane perturbations accessible to us are rather small. In particular, we perturbed the initially planar crack by a pair of adjacent asperities, one slightly shifted above the crack plane and one below, breaking the up-down symmetry. Such perturbations excite both in- and out-of-plane crack front components, but the latter decays after a short transient (while the former persists <cit.>).
To understand if the latter observation is exclusively due to computational limitations (in resolving finite perturbations and the associated scale separation) or whether other physical factors are at play, we considered the recent experiments of <cit.>. It was shown therein that out-of-plane crack surface structures — most notably surface steps <cit.> — might crucially depend on the existence of small, weakly experimentally controlled, anti-plane loading component (mode III, anti-symmetric loading in the z direction, e.g., due to small misalignment between the crack plane and the tensile axis). To test the possibility that a small amount of mode-mixity (mode III/I) might play a role in generating persistent coupled in- and out-of-plane FWs, we introduced a mode-mixity level of 3%, i.e. u_z(x,y = 0,z) = -u_z(x,y = L_y,z) = 0.03 |u_y(x,y = L_y,z)|, into the above-described calculations. The results are presented in Fig. <ref>, revealing persistent propagation of a pair of coupled in- and out-of-plane FWs, featuring non-zero f_x(z,t) and f_y(z,t) that propagate at c__ FW = 0.961 c__ R.
The amplitude of f_y(z,t) is tiny, a small fraction of ξ (yet it varies systematically with mode-mixity <cit.>). Moreover, it is an order of magnitude smaller than that of f_x(z,t) (notice the two y axis labels in Fig. <ref>). Interestingly, this observation is consistent with experimental estimates <cit.> that suggest that ∂_t f_y(z,t) is much smaller than ∂_t f_x(z,t) (estimated using real-time measurements of in-plane crack velocity fluctuations at z = 0 and z = L_z <cit.>). Overall, the observed coupled in- and out-of-plane FWs propagating at c__ FW = 0.961 c__ R with a small out-of-plane component, which also persist through FW-FW interactions, are reminiscent of several key experimental findings <cit.>. It remains to be seen whether a small mode-mixity, which is physically realistic, is an essential ingredient. One manifestation of it, which can be tested experimentally, is that the out-of-plane amplitude of the pair of FWs has opposite signs, see Fig. <ref>.
Summary and outlook.—Our results demonstrate that the same framework that quantitatively predicts the high-speed oscillatory instability in thin materials, also provides deep insight into FW dynamics in thick, fully 3D materials. The effect of realistic rate-dependent fracture energy dΓ(v)/dv>0 on the propagation of in-plane FWs is elucidated, as well as their solitonic nature and the effect of nonlinear amplitudes on their velocity. Persistent coupled in- and out-of-plane FWs, similar to experimental observations, are demonstrated once a small anti-plane (mode III) loading component is added to the dominant tensile (mode I) loading component.
Our findings give rise to pressing questions and subsequent investigation directions, most notably in relation to out-of-plane crack structures such as micro-branching events and surface faceting <cit.>. The roles of mode-mixity fluctuations in nominally tensile failure and of realistic material disorder/heterogeneity (we focused on homogeneous materials, discrete asperities were just introduced to generate FWs) should be particularly considered. In addition, improved computational capabilities (e.g. based on multi-GPU implementations) should be developed in order to obtain better scale separation, which in turn may allow us to understand the effect of finite out-of-plane perturbations on 3D crack dynamics.
Acknowledgements This work has been supported by the United States-Israel Binational Science Foundation (BSF, grant no. 2018603). E.B. acknowledges support from the Ben May Center for Chemical Theory and Computation, and the Harold Perlman Family.
Supplemental Materials for:
“The dynamics of crack front waves in 3D material failure”
The goal here is to provide some technical details regarding the 3D computational framework employed in the manuscript and to offer some additional supporting data.
§.§ The 3D phase-field model and its numerical implementation
The 3D theoretical-computational framework we employed is identical to the 2D phase-field model presented in great detail in <cit.>, extended to 3D. To the best of our knowledge, this framework is the only one that quantitatively predicted the high-speed oscillatory and tip-splitting instabilities in 2D dynamic fracture <cit.>, and hence should serve as a basis for a 3D theory of material failure. For completeness, we briefly write down here the model's defining equations, and provide some details about the employed boundary conditions and numerical implementation in 3D.
The starting point is the Lagrangian L = T-U, where the potential energy U and kinetic energy T are given as
U = ∫[1/2κ(∇ϕ)^2+ g(ϕ) e( u) + w(ϕ) e_ c]dV ,
T = ∫1/2f(ϕ) ρ(∂_t u)^2 dV ,
in terms of the displacement vector field u( x,t) and the scalar phase-field ϕ( x,t). dV is a volume differential and the integration extends over the entire system. An intact/unbroken material corresponds to ϕ = 1, for which g(1) = f(1) = 1 and w(1) = 0. It describes a non-dissipative, elastic response characterized by an energy density e( u) on large lengthscales away from a crack edge (we use in this document `crack edge', which includes both `crack tip' in 2D and `crack front' in 3D).
The crack edge is accompanied by a large concentration of elastic energy, eventually leading to material failure, i.e. to the loss of load-bearing capacity. This process is mathematically accounted for in the phase-field approach by the field ϕ( x,t), which smoothly varies from ϕ = 1 (intact/unbroken material) to ϕ = 0 (fully broken material), and by the degradation functions g(ϕ), f(ϕ) and w(ϕ) that depend on it. The onset of dissipation is related to the strain energy density threshold e_ c in Eq. (<ref>). As ϕ decreases from unity, g(ϕ) is chosen such that it decreases towards zero and w(ϕ) is chosen such that it increases towards unity. This process mimics the conversion of elastic strain energy into fracture energy, where the broken ϕ = 0 phase/state becomes energetically favorable from the perspective of minimizing U in Eq. (<ref>). Throughout this work, we operationally define the crack faces, and hence also the crack front, based on the ϕ( x,t)=1/2 iso-surface.
For ϕ = 0, the material lost its load-bearing capacity and traction-free boundary conditions are achieved. This process is associated with a lengthscale, which emerges from the combination of the energetic penalty of developing ϕ gradients, as accounted for by the first contribution to U in Eq. (<ref>) that is proportional to κ, and the ϕ-dependent elastic energy density threshold for failure (1-w(ϕ))e_ c. Consequently, the characteristic length scale is ξ≡√(κ/2e_ c), setting the size of the dissipation zone near the crack edge. The degradation functions we employed, following <cit.>, are f(ϕ) = g(ϕ) = ϕ^4 and w(ϕ) = 1-ϕ. Note that the choice f(ϕ) = g(ϕ), where f(ϕ) appears in the kinetic energy of Eq. (<ref>), ensures that elastic wave-speeds inside the dissipation zone remain constant, as extensively discussed in <cit.>.
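A compact restatement of these ingredients, following the definitions just given, is:

```python
# The degradation functions and the dissipation length used above.
import numpy as np

def g(phi):  # degrades the elastic energy density
    return phi ** 4

def f(phi):  # degrades the kinetic term; f = g keeps wave-speeds unchanged in the dissipation zone
    return phi ** 4

def w(phi):  # switches on the failure energy density term
    return 1.0 - phi

def dissipation_length(kappa, e_c):
    return np.sqrt(kappa / (2.0 * e_c))   # xi = sqrt(kappa / 2 e_c)
```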
To account for fracture-related dissipation, the Lagrangian L = T-U of Eqs. (<ref>)-(<ref>) is supplemented with the following dissipation function (directly related to the phase-field ϕ( x,t))
D ≡1/2χ∫(∂_t ϕ)^2dV ,
where χ is a dissipation rate coefficient that determines the rate-dependence of the fracture energy Γ(v). The quasi-static fracture energy, Γ_0 = Γ(v→0), is proportional to e_ cξ <cit.>. The evolution of ϕ( x,t) and u( x,t) is derived from Lagrange's equations
∂/∂ t[δ L/δ(∂ψ/∂ t)] - δ L/δψ + δ D/δ(∂ψ/∂ t) = 0 ,
where ψ = (ϕ,u_x,u_y,u_z), i.e. (u_x,u_y,u_z) are the components of the displacement vector field u.
As explained in the manuscript, we employed the following constitutively-linear elastic energy density
e( u)=1/2λ tr^2( E) + μ tr( E^2) ,
where E = 1/2[∇ u+(∇ u)^ T+(∇ u)^ T∇ u] is the Green-Lagrange metric strain tensor, and λ and μ (shear modulus) are the Lamé coefficients. We set λ = 2μ in all of our calculations. Using Eqs. (<ref>)-(<ref>), with Eq. (<ref>), inside Eq. (<ref>) fully defines our field equations in 3D (that should be solved in a given 3D domain, and supplemented with proper initial and boundary conditions, as described below). The resulting equations are nondimensionalized by expressing length in units of ξ, time in units of ξ/c_ s, energy density in units of μ and the mass density ρ in units of μ/c_ s^2 (c_ s is the shear wave-speed). Once done, the dimensionless set of equations depends on two dimensionless parameters: e_ c/μ (the ratio between the dissipation onset threshold e_ c and a characteristic elastic modulus) and on β = τ c_ s/ξ (where we defined τ≡(2χ e_ c)^-1), which controls the v-dependence of the fracture energy, Γ(v), as discussed in the manuscript.
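The nondimensionalization can be summarised in a few lines; the sketch below assumes the standard shear wave-speed c_s = sqrt(μ/ρ).

```python
# Sketch: the two dimensionless groups controlling the model, assuming
# c_s = sqrt(mu / rho) for the shear wave-speed.
import numpy as np

def dimensionless_groups(kappa, e_c, chi, mu, rho):
    xi = np.sqrt(kappa / (2.0 * e_c))   # dissipation length
    tau = 1.0 / (2.0 * chi * e_c)       # phase-field relaxation time, tau = (2 chi e_c)^-1
    c_s = np.sqrt(mu / rho)             # shear wave-speed
    beta = tau * c_s / xi               # controls dGamma(v)/dv
    return beta, e_c / mu
```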
As discussed extensively in <cit.>, near crack edge elastic nonlinearity — embodied in Eq. (<ref>) in the Green-Lagrange strain tensor E — gives rise to a nonlinear elastic lengthscale ℓ_ nl that scales as ℓ_ nl/ξ∼e_ c/μ. In the calculations in the context of the high-speed oscillatory instability, cf. Fig. 1a in the manuscript, we set e_ c/μ = 0.02. The latter leads to a sizable nonlinear elastic lengthscale ℓ_ nl in the ultra-high crack propagation velocities regime considered therein (v→c__ R), which controls the wavelength of oscillations (note, though, that it was shown <cit.> that the high-speed oscillatory instability persists also in the limit ℓ_ nl/ξ→0, where the wavelength is controlled by ξ). In the rest of our calculations, where the dynamics of crack front waves (FWs) were of interest, we focused on a linear elastic behavior, where ℓ_ nl is negligibly small. The latter is ensured by setting e_ c/μ = 0.005 and considering v≤0.7c_ s. Consequently, as stated in the manuscript, in all of our FW-related calculations, the material is essentially linear elastic and the only relevant intrinsic lengthscale is the dissipation length ξ. The rate of dissipation parameter β was varied between β = 0.28 and β = 2.8, as discussed in the manuscript.
Our calculations were performed in boxes of length L_x in the crack propagation direction x, height L_y in the loading direction y and L_z in the thickness direction z. In all of our calculations, we set L_x=150ξ. However, we employed a treadmill procedure (as explained in <cit.>), which allows us to simulate very large crack propagation distances. Consequently, our system is effectively infinite in the crack propagation direction. In Fig. 2a in the manuscript, where our focus was on testing the reproducibility of the high-speed oscillatory instability in the thin, quasi-2D limit, we used L_z = 6ξ and a large L_y. This calculation also employed traction-free boundary conditions at z = 0 and z = L_z. In the rest of our calculations, which focused on FW dynamics, we were interested in thick systems. To that aim, we used L_z = 350ξ (note that in the illustrative Fig. 1b in the manuscript, we showed a smaller L_z for visual clarity) and periodic boundary conditions in z. Due to the enormous computational cost involved in our large-scale calculations, employing such a large L_z implies that L_y is rather constrained. In all of the FW calculations we used L_y = 150ξ. The loading conditions at y = 0 and y = L_y are discussed in the manuscript. Note that the crack propagation velocity v is set by controlling the crack driving force G (through the loading conditions), following energy balance Γ(v) = G.
The resulting field equations corresponding to Eqs. (<ref>), cf. Eqs. (A.1)-(A.3) in <cit.>, are spatially discretized in 3D on a cubic grid with a discretization size Δx = Δy = Δz = 0.25ξ, following the same spatial discretization scheme described in <cit.>, straightforwardly extended from 2D to 3D. The temporal discretization (at any spatial grid point) involves different schemes for the scalar phase-field ϕ and the vectorial displacement field u. For the former, we employ a simple forward Euler scheme ϕ_n+1 = ϕ_n+ϕ̇_nΔt as in <cit.>, where the subscript n refers to the current time step, t_n = nΔt, with Δt being the discrete time step size.
For u, we developed a specifically-adapted Velocity Verlet scheme. As in the conventional Velocity Verlet scheme <cit.>, the displacement u_n+1 is given to second order in Δt as u_n+1 = u_n+ v_nΔt+1/2 a_nΔt^2, in terms of u_n, the velocity v_n and the acceleration a_n. The appearance of the degradation function f(ϕ) in the kinetic energy in Eq. (<ref>) implies that a_n+1 depends on v_n+1 itself (cf. Eq. (A.3) in <cit.>), and hence the conventional Velocity Verlet <cit.> expression for v_n+1, i.e. v_n+1 = v_n+1/2( a_n+ a_n+1)Δt, cannot be used (since, as explained, a_n+1 depends on v_n+1). Instead, we defined an auxiliary acceleration ã_n+1 that was estimated using an auxiliary velocity ṽ_n+1 = v_n+ a_nΔt, from which we estimated v_n+1 according to v_n+1 = v_n+1/2( a_n+ã_n+1)Δt.
This specifically-adapted Velocity Verlet scheme involved the estimation of the auxiliary acceleration ã_n+1, which entails the computation of the divergence of the stress tensor (cf. Eq. (A.3) in <cit.>). The latter, whose computation is a serious bottleneck, was reused to evaluate a_n+1 at the next time step. This reuse of the divergence of the stress gives rise to more than a two-fold speedup in run-times compared to the temporal discretization scheme used in <cit.>, which is essential for the very demanding 3D computations. Finally, the time step size Δt is set according to the β parameter, taking into account the associated stability condition of the diffusion-like ϕ̇ equation (Δt of course also satisfies the CFL condition, which is less stringent in our case).
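A schematic, single-degree-of-freedom version of this adapted scheme is sketched below; accel(u, v) stands in for the full field computation (divergence of the stress, degradation functions), and in the actual code only that expensive stress-divergence evaluation is reused between steps, not the full acceleration.

```python
# Schematic sketch of the adapted Velocity-Verlet step for an acceleration that
# depends on the velocity itself, a = accel(u, v); not the production C++/CUDA code.
def adapted_velocity_verlet_step(u, v, a, accel, dt):
    u_new = u + v * dt + 0.5 * a * dt ** 2   # u_{n+1}, second order in dt
    v_aux = v + a * dt                       # auxiliary velocity estimate
    a_aux = accel(u_new, v_aux)              # auxiliary acceleration (expensive evaluation)
    v_new = v + 0.5 * (a + a_aux) * dt       # v_{n+1} from the auxiliary acceleration
    a_new = accel(u_new, v_new)              # a_{n+1}; in practice the stress divergence
                                             # computed for a_aux is reused here
    return u_new, v_new, a_new
```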
All of our calculations are performed on a single GPU (NVIDIA TeslaV100_SXM2, QuadroRTX8000 or QuadroRTX6000) available on WEXAC (Weizmann EXAscale Cluster), which is a large-scale supercomputing resource at Weizmann Institute of Science. Our computations are very demanding in terms of memory, typically involving ∼40GB of memory per simulation. Consequently, all data analysis has to be performed on the fly, as it is simply not practical to save snapshots of the fields. To that end, we used Matlab's C++ engine, which enables executing Matlab scripts at run-time. In order to maximize performance, our computational platform is entirely implemented using C/C++ and CUDA, with typical simulation times of a few days per simulation, depending on the parameters.
§.§.§ FW generation and discrete heterogeneities/asperities
As explained in the manuscript, FWs generation involves 3 parameters, the steady-state crack front velocity v, the asperity radius R and its dimensionless fracture energy contrast δΓ≡ΔΓ/Γ_0. To obtain a steadily propagating crack, we first introduced a planar crack and iteratively relaxed the elastic fields until reaching a mechanical equilibrium state under a prescribed loading. The latter corresponds to a given crack driving force G. Then, the crack was allowed to propagate until reaching a steady-state according to energy balance Γ(v)Ḡ, as explained above.
FWs are excited by allowing the steadily propagating planar crack to interact with discrete heterogeneities in the form of tough spherical asperities. To generate asperities, we introduce an auxiliary static (quenched) “noise field” ζ( x), which can be coupled to any physical parameter in the fracture problem. This coupling is achieved by transforming an originally spatially uniform parameter α_0 into a field of the form α( x) = α_0[1+α__ζζ( x)], where α__ζ is a coupling coefficient.
We applied this formulation to the fracture energy, whose quasi-static value scales as Γ_0∼e_ cξ∼√(κ e_ c), by simultaneously coupling κ, e_ c and χ to ζ( x), while keeping ξ∼√(κ e_ c) and τ∼(χ e_ c)^-1 fixed. This choice ensures that β = τ c_ s/ξ∼(χ e_ cξ)^-1 remains fixed, i.e. the asperities feature an overall dimensionless fracture energy contrast δΓ≡ΔΓ/Γ_0 (controlled by κ e_ c) compared to the homogeneous surrounding material, but the very same fracture rate dependence dΓ(v)/dv (controlled by β).
Finally, discrete spherical asperities are obtained by choosing ζ( x) with a compact support in the form ζ( x) = (1-| x- x_0|/R)^5 for | x- x_0|≤R and ζ( x) = 0 elsewhere. Here x_0 is the location of the center of the asperity and R is its radius, as defined in the manuscript. Asperities are allowed to overlap by simply summing the contributions of the individual asperities to the noise field.
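For illustration, a minimal sketch of this construction on a regular grid (array and variable names are illustrative) is:

```python
# Sketch: quenched noise field zeta(x) built from compact-support spherical asperities,
# and the heterogeneous parameter field alpha(x) = alpha_0 [1 + alpha_zeta * zeta(x)].
import numpy as np

def asperity_noise_field(X, Y, Z, centers, R):
    """zeta = (1 - |x - x0| / R)^5 inside each asperity, zero outside; overlaps are summed."""
    zeta = np.zeros_like(X)
    for x0, y0, z0 in centers:
        r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2)
        zeta += np.where(r <= R, (1.0 - r / R) ** 5, 0.0)
    return zeta

def heterogeneous_parameter(alpha_0, alpha_zeta, zeta):
    return alpha_0 * (1.0 + alpha_zeta * zeta)
```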
§.§ Additional supporting results
In this section, we provide additional supporting results that are referred to in the manuscript. First, in Fig. <ref> we show that in-plane FWs approximately inherit their scale, both amplitude and width, from the asperity size R. This is similar to experimental findings reported in relation to the out-of-plane component of FWs <cit.>.
In Fig. 2 in the manuscript, we showed that FW generation is accompanied by an initial velocity overshoot Δv__ os(t) that develops ahead of the asperity, after the latter is broken. We found that the maximal velocity overshoot, max[Δv__ os], controls the amplitude Δv_x of the generated FW. We also found that Δv__ os varies approximately linearly with δΓ for fixed v and R (not shown). In Fig. <ref>, we show that Δv_x varies predominantly linearly with max[Δv__ os], when the latter is varied by varying δΓ for fixed v and R.
§.§ Supporting movies
A major merit of the employed 3D computational framework is that it enables tracking crack evolution in 3D in real (computer) time. Consequently, we supplement the results presented in the manuscript with movies of the corresponding 3D dynamics. The Supplemental Materials include 6 movies, which can be downloaded from this link: https://www.weizmann.ac.il/chembiophys/bouchbinder/sites/chemphys.bouchbinder/files/uploads/SupMat/front_wave_vids/Movies_SM.rarDownload Supplementary Movies, described as follows:
* MovieS1: A movie that shows FW generation and propagation prior to FW-FW interaction, following Fig. 2a in the manuscript. In the latter, equal time interval snapshots were presented. The snapshots therein were shifted according to 0.006 c_ st/ξ to demonstrate FW propagation.
* MovieS2: The same calculation as in MovieS1 and Fig. 2 in the manuscript, here showing the phase-field ϕ( x,t)=1/2 iso-surface. Note the different scales of the axes.
* MovieS3: A movie that corresponds to the FW-FW interaction shown in Fig. 5a in the manuscript. In the latter, equal time interval snapshots were presented. The snapshots therein were shifted according to 0.004c_ s(t-t_0)/ξ to demonstrate FW propagation.
* MovieS4: A movie that corresponds the coupled in- and out-of-plane perturbation induced by two asperities as in Fig. 6 in the manuscript, albeit under pure mode I (no mode III). The movie shows that coupled in- and out-of-plane components are generated by the perturbation, but that the out-of-plane component decays, while the in-plane persistently propagates.
* MovieS5: A movie that corresponds to Fig. 6 in the manuscript, i.e. it is identical to MovieS4, but with a mode-mixity (mode III/I) of 3%. Note that in Fig. 6 in the manuscript, snapshots corresponding to the left y axis were shifted according to 0.05×0.4 c_ s(t-t_0)/ξ, while those corresponding to the right y axis were shifted according to 0.4 c_ s(t-t_0)/ξ.
* MovieS6: The same as MovieS5, but with a mode-mixity (mode III/I) of 5%. The resulting coupled in- and out-of-plane FW features an out-of-plane component that approximately scales with the level of mode-mixity.
[Freund(1990)] L. B. Freund, Dynamic Fracture Mechanics (Cambridge University Press, Cambridge, 1990).
[Broberg(1999)] K. R. Broberg, Cracks and Fracture (Academic Press, New York, 1999).
[Bouchbinder et al.(2014)] E. Bouchbinder, T. Goldman, and J. Fineberg, The dynamics of rapid fracture: instabilities, nonlinearities and length scales, Rep. Prog. Phys. 77, 046501 (2014).
[Chen et al.(2017)] C.-H. Chen, E. Bouchbinder, and A. Karma, Instability in dynamic fracture and the failure of the classical theory of cracks, Nat. Phys. 13, 1186 (2017).
[Lubomirsky et al.(2018)] Y. Lubomirsky, C.-H. Chen, A. Karma, and E. Bouchbinder, Universality and stability phase diagram of two-dimensional brittle fracture, Phys. Rev. Lett. 121, 134301 (2018).
[Vasudevan et al.(2021)] A. Vasudevan, Y. Lubomirsky, C.-H. Chen, E. Bouchbinder, and A. Karma, Oscillatory and tip-splitting instabilities in 2D dynamic fracture: The roles of intrinsic material length and time scales, J. Mech. Phys. Solids 151, 104372 (2021).
[Rice(1985)] J. Rice, First-order variation in elastic fields due to variation in location of a planar crack front, J. Appl. Mech. 52, 571 (1985).
[Willis and Movchan(1997)] J. Willis and A. Movchan, Three-dimensional dynamic perturbation of a propagating crack, J. Mech. Phys. Solids 45, 591 (1997).
[Ramanathan and Fisher(1997)] S. Ramanathan and D. S. Fisher, Dynamics and instabilities of planar tensile cracks in heterogeneous media, Phys. Rev. Lett. 79, 877 (1997).
[Morrissey and Rice(1998)] J. W. Morrissey and J. R. Rice, Crack front waves, J. Mech. Phys. Solids 46, 467 (1998).
[Morrissey and Rice(2000)] J. W. Morrissey and J. R. Rice, Perturbative simulations of crack front waves, J. Mech. Phys. Solids 48, 1229 (2000).
[Sharon et al.(2001)] E. Sharon, G. Cohen, and J. Fineberg, Propagating solitary waves along a rapidly moving crack front, Nature 410, 68 (2001).
[Sharon et al.(2002)] E. Sharon, G. Cohen, and J. Fineberg, Crack front waves and the dynamics of a rapidly moving crack, Phys. Rev. Lett. 88, 085503 (2002).
[Fineberg et al.(2003)] J. Fineberg, E. Sharon, and G. Cohen, Crack front waves in dynamic fracture, Int. J. Fract. 121, 55 (2003).
[Livne et al.(2005)] A. Livne, G. Cohen, and J. Fineberg, Universality and hysteretic dynamics in rapid fracture, Phys. Rev. Lett. 94, 224301 (2005).
[Ravi-Chandar(1998)] K. Ravi-Chandar, Dynamic fracture of nominally brittle materials, Int. J. Fract. 90, 83 (1998).
[Fineberg and Marder(1999)] J. Fineberg and M. Marder, Instability in dynamic fracture, Phys. Rep. 313, 1 (1999).
[Bonamy and Ravi-Chandar(2003)] D. Bonamy and K. Ravi-Chandar, Interaction of shear waves and propagating cracks, Phys. Rev. Lett. 91, 235502 (2003).
[Bonamy and Ravi-Chandar(2005)] D. Bonamy and K. Ravi-Chandar, Dynamic crack response to a localized shear pulse perturbation in brittle amorphous materials: on crack surface roughening, Int. J. Fract. 134, 1 (2005).
[Baumberger et al.(2008)] T. Baumberger, C. Caroli, D. Martina, and O. Ronsin, Magic angles and cross-hatching instability in hydrogel fracture, Phys. Rev. Lett. 100, 178303 (2008).
[Henry(2010)] H. Henry, Study of three-dimensional crack fronts under plane stress using a phase field model, EPL 92, 46002 (2010).
[Pons and Karma(2010)] A. Pons and A. Karma, Helical crack-front instability in mixed-mode fracture, Nature 464, 85 (2010).
[Henry and Adda-Bedia(2013)] H. Henry and M. Adda-Bedia, Fractographic aspects of crack branching instability using a phase-field model, Phys. Rev. E 88, 060401 (2013).
[Willis(2013)] J. Willis, Crack front perturbations revisited, Int. J. Fract. 184, 17 (2013).
[Adda-Bedia et al.(2013)] M. Adda-Bedia, R. E. Arias, E. Bouchbinder, and E. Katzav, Dynamic stability of crack fronts: Out-of-plane corrugations, Phys. Rev. Lett. 110, 014302 (2013).
[Chen et al.(2015)] C.-H. Chen, T. Cambonie, V. Lazarus, M. Nicoli, A. J. Pons, and A. Karma, Crack front segmentation and facet coarsening in mixed-mode fracture, Phys. Rev. Lett. 115, 265503 (2015).
[Kolvin et al.(2015)] I. Kolvin, G. Cohen, and J. Fineberg, Crack front dynamics: the interplay of singular geometry and crack instabilities, Phys. Rev. Lett. 114, 175501 (2015).
[Bleyer et al.(2017)] J. Bleyer, C. Roux-Langlois, and J.-F. Molinari, Dynamic crack propagation with a variational phase-field model: limiting speed, crack branching and velocity-toughening mechanisms, Int. J. Fract. 204, 79 (2017).
[Bleyer and Molinari(2017)] J. Bleyer and J.-F. Molinari, Microbranching instability in phase-field modelling of dynamic brittle fracture, Appl. Phys. Lett. 110, 151903 (2017).
[Kolvin et al.(2017)] I. Kolvin, J. Fineberg, and M. Adda-Bedia, Nonlinear focusing in dynamic crack fronts and the microbranching transition, Phys. Rev. Lett. 119, 215505 (2017).
[Kolvin et al.(2018)] I. Kolvin, G. Cohen, and J. Fineberg, Topological defects govern crack front motion and facet formation on broken surfaces, Nat. Mater. 17, 140 (2018).
[Fekak et al.(2020)] F. Fekak, F. Barras, A. Dubois, D. Spielmann, D. Bonamy, P. Geubelle, and J. Molinari, Crack front waves: A 3D dynamic response to a local perturbation of tensile and shear cracks, J. Mech. Phys. Solids 135, 103806 (2020).
[Roch et al.(2022)] T. Roch, M. Lebihain, and J.-F. Molinari, Dynamic crack front deformations in cohesive materials, arXiv preprint arXiv:2206.04588 (2022).
[Steinhardt and Rubinstein(2022)] W. Steinhardt and S. M. Rubinstein, How material heterogeneity creates rough fractures, Phys. Rev. Lett. 129, 128001 (2022).
[Wang et al.(2022)] M. Wang, M. Adda-Bedia, J. M. Kolinski, and J. Fineberg, How hidden 3D structure within crack fronts reveals energy balance, J. Mech. Phys. Solids 161, 104795 (2022).
[Wang et al.(2023)] M. Wang, M. Adda-Bedia, and J. Fineberg, Dynamics of three-dimensional stepped cracks, bistability, and their transition to simple cracks, Phys. Rev. Res. 5, L012001 (2023).
[Karma et al.(2001)] A. Karma, D. Kessler, and H. Levine, Phase-field model of mode III dynamic fracture, Phys. Rev. Lett. 87, 45501 (2001).
[Karma and Lobkovsky(2004)] A. Karma and A. E. Lobkovsky, Unsteady crack motion and branching in a phase-field model of brittle fracture, Phys. Rev. Lett. 92, 245510 (2004).
[Henry and Levine(2004)] H. Henry and H. Levine, Dynamic instabilities of fracture under biaxial strain using a phase field model, Phys. Rev. Lett. 93, 105504 (2004).
[Hakim and Karma(2005)] V. Hakim and A. Karma, Crack path prediction in anisotropic brittle materials, Phys. Rev. Lett. 95, 235501 (2005).
[Henry(2008)] H. Henry, Study of the branching instability using a phase field model of inplane crack propagation, EPL 83, 16004 (2008).
[Hakim and Karma(2009)] V. Hakim and A. Karma, Laws of crack motion and phase-field models of fracture, J. Mech. Phys. Solids 57, 342 (2009).
[Aranson et al.(2000)] I. Aranson, V. Kalatsky, and V. Vinokur, Continuum field description of crack propagation, Phys. Rev. Lett. 85, 118 (2000).
[Eastgate et al.(2002)] L. Eastgate, J. Sethna, M. Rauscher, T. Cretegny, C. Chen, and C. Myers, Fracture in mode I using a conserved phase-field model, Phys. Rev. E 65, 036117 (2002).
[SM()] See Supplemental Materials in this document (pages 6-10). The Supplementary Movies can be downloaded from this link: https://www.weizmann.ac.il/chembiophys/bouchbinder/sites/chemphys.bouchbinder/files/uploads/SupMat/front_wave_vids/Movies_SM.rar
[Livne et al.(2007)] A. Livne, O. Ben-David, and J. Fineberg, Oscillations in rapid fracture, Phys. Rev. Lett. 98, 124301 (2007).
[Bouchbinder(2009)] E. Bouchbinder, Dynamic crack tip equation of motion: High-speed oscillatory instability, Phys. Rev. Lett. 103, 164301 (2009).
[Goldman et al.(2012)] T. Goldman, R. Harpaz, E. Bouchbinder, and J. Fineberg, Intrinsic nonlinear scale governs oscillations in rapid fracture, Phys. Rev. Lett. 108, 104303 (2012).
[Sharon and Fineberg(1999)] E. Sharon and J. Fineberg, Confirming the continuum theory of dynamic brittle fracture for fast cracks, Nature 397, 333 (1999).
[Livne et al.(2010)] A. Livne, E. Bouchbinder, I. Svetlizky, and J. Fineberg, The near-tip fields of fast cracks, Science 327, 1359 (2010).
[Célarié et al.(2003)] F. Célarié, S. Prades, D. Bonamy, L. Ferrero, E. Bouchaud, C. Guillot, and C. Marliere, Glass breaks like metal, but at the nanometer scale, Phys. Rev. Lett. 90, 075504 (2003).
[Verlet(1967)] L. Verlet, Computer "experiments" on classical fluids. I. Thermodynamical properties of Lennard-Jones molecules, Phys. Rev. 159, 98 (1967).
|
http://arxiv.org/abs/2306.11553v1
|
20230620141519
|
Polytope: An Algorithm for Efficient Feature Extraction on Hypercubes
|
[
"Mathilde Leuridan",
"James Hawkes",
"Simon Smart",
"Emanuele Danovaro",
"Tiago Quintino"
] |
cs.IR
|
[
"cs.IR",
"cs.CG",
"68P20",
"E.1; H.3.1; H.3.3; F.2.2; G.4; J.2; J.3"
] |
Polytope: An Algorithm for Efficient Feature Extraction on Hypercubes
======================================================================
§ INTRODUCTION
In the past century, fields in science and technology have entered a new era - the era of “big data”. From weather forecasting to medicine, scientific advances have led to a surge in the quantity of data produced daily.
Indeed, scientific data has been steadily growing in the past decades and in recent years especially, it has experienced exponential growth.
Whilst this new era holds many promises for major scientific developments in the years to come, the question arises of how to efficiently use this wealth of data.
The scientific data collected nowadays often depends on a number of different variables and can thus be represented as a multidimensional array, or datacube <cit.>. Organising data inside such datacubes has attracted a lot of interest in the past few years, with many tools now available to work on such data representations. Most modern software architectures provide support to handle such data structures, from Matlab <cit.> to Python <cit.> and C++ <cit.>.
However, in each of these software packages, data in a datacube can only be accessed “orthogonally” along its axes, by selecting specific values or ranges along given dimensions <cit.>.
Such limited data access mechanisms in the form of bounding boxes are non-optimal for a wide range of applications.
Consider for example the case where a user wants to access temperature data over a country. For this particular example, the bounding box data extraction approach proves to be quite inconvenient as country shapes are not well-represented by bounding boxes.
This then not only implies that much more data than is necessary is read and returned from the datacube, thereby consuming more I/O resources, but it also places the additional burden of post-processing on the user after retrieval.
To address this issue, we introduce a new alternative way of accessing datacubes. Our extraction algorithm, Polytope[<https://github.com/ecmwf/polytope>], enables users to efficiently query arbitrary high-dimensional shapes from a datacube, slicing non-orthogonally along the datacube's axes. This is much less restrictive than the popular bounding box approach described above and constitutes a major improvement compared to existing data extraction methods.
Indeed, as discussed above in the country slicing example, traditional bounding-box extraction methods are insufficient for handling such complicated requests. The Polytope algorithm however was designed especially with these requests in mind and is able to directly extract such shapes from very large data hypercubes. Because our algorithm computes the exact bytes that users are interested in and only reads those from the datacube, it scales well to large high-dimensional request shapes unlike the traditional bounding-box extraction techniques which scale with the tensor product of each dimension. The Polytope algorithm will thus enable scientists to efficiently make use of their ever increasing data, whilst improving the efficiency of their I/O system.
In this paper, we first introduce the idea behind the Polytope algorithm, before describing its inner mechanism in detail. We then expose some of its possible applications in different scientific fields before finally performing a first analysis of the algorithm's performance.
§ CONCEPT
Before diving into a technical description of our software, let us first explain in more detail the conceptual approach we take.
With the Polytope algorithm, we developed a data extraction algorithm which supports the retrieval of arbitrary high-dimensional request shapes, called features, from arbitrary data hypercubes. Our algorithm is not restricted to any particular request shapes or application field and is in fact intended to be generic and work seamlessly in any scientific application involving datacubes.
Rather than pre-defining a set of shapes which can be extracted from the datacube, the Polytope algorithm takes n-dimensional polytope shapes as input, giving the algorithm its name. In computational geometry, a polytope is defined as the convex hull of a given point set 𝒫 = {p_1, …, p_n} <cit.>.
Note that polytopes are convex by definition. Polytopes can in fact be thought of as high-dimensional convex polygons.
In the Polytope software, we use polytopes because any arbitrary high-dimensional shape, even a concave shape, can be either approximated by or decomposed into simpler convex polytopes. Indeed, polytopes can be seen as the building blocks of high-dimensional geometry. They form the basis of most modern meshing software and are used daily in computer graphics to model intricate objects <cit.>. We thus see that by formulating data requests as polytopes, users will in theory be able to request almost any feature of interest to them from a datacube.
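As a concrete illustration of this definition, a polytope can be obtained directly as the convex hull of a point set, for example with SciPy's QuickHull-based implementation:

```python
# A polytope as the convex hull of a point set P = {p_1, ..., p_n},
# built here with SciPy's QuickHull-based ConvexHull.
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.default_rng(0).random((20, 3))  # a 3D point set P
hull = ConvexHull(points)                          # the polytope conv(P)
vertices = points[hull.vertices]                   # vertices that define the polytope
print(len(vertices), "defining vertices")
```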
The underlying idea behind the Polytope algorithm can be visualised in Figure <ref>.
§ POLYTOPE EXTRACTION ALGORITHM
We now introduce the Polytope feature extraction algorithm, highlighting in particular the way in which it achieves polytope-based feature extraction on datacubes.
Note that the Polytope algorithm only works on a subset of datacubes which possess particular properties. We thus first discuss some of these datacube properties before describing the complete mechanism behind the feature extraction algorithm.
§.§ Datacube
Datacubes can be thought of as multi-dimensional arrays.
In particular, they store data points along different datacube dimensions. Each datacube dimension has an associated “axis” metadata with a discrete set of indices stored on it. A data point is then located at each of these indices, forming a datacube.
Each datacube structure is unique however and the Polytope datacube component specifies querying mechanisms on each of these different structures. Moreover, it also describes essential features of the underlying datacube, such as its axes. This helps construct a common framework for treating various types of datacube structures.
§.§.§ Axes
Axes in a datacube refer to the dimensions along which the data is stored. Values along these axes are called indices. In the Polytope extraction algorithm, we differentiate between two main types of axes, the ordered and unordered categorical axes. These two types of axes cannot be treated in the same way within the slicing step of the algorithm, which leads to their distinction here.
*Ordered Axes These axes only accept sets of comparable indices which can be ordered. In particular here, this means that values on ordered axes need to be comparable to each other, such that they must meaningfully support comparison operators (==, <, ≤, >, ≥). This property then directly implies an ordering between indices on ordered axes. Importantly, note that indices on ordered axes do not have to be integers, but can in fact be any countable type that supports a comparison operation, such as time entities, floating point numbers and of course integers. For such axes, it is possible to query ranges of indices as well as individual axis values.
*Categorical Axes The other type of axes which can be handled by our algorithm are categorical axes. These axes only support distinct indices which are not comparable to each other, such as string indices for example. In this case, unlike for ordered axes, it does not make sense to query ranges of indices. Instead, the only possible queries on categorical axes are specific index selections.
Note that, in practice, indices on a datacube will always have some gap between them, even if it is just a small tolerance. This implies that the set of indices on a datacube axis will always be discrete. All ordered axes are thus countable axes, for which indices can be ordered and numbered using natural numbers.
Note also that the indices on ordered axes do not have to be uniformly spaced. In particular, the datacube axes can be irregular and sparse in their indices.
Lastly, observe that ordered axes can exhibit special behaviours, such as cyclicity along their indices. We thus further subdivide the ordered axis class with as many special subclasses as required to capture all possible axis behaviours.
All axes within either of these axis classes can be treated in the same fashion. This allows us to take a common approach towards extracting indices on those axes and thus facilitates the data extraction algorithm.
§.§.§ Datacube Structure
The datacube can be viewed as a possibly non-regular imbalanced tree. This can be seen in Figure <ref> and we now explain each of these two datacube properties, non-regularity and imbalance, in more detail with the help of an example.
Note that the datacube does not necessarily have the same dimensionality in all directions. On some axes, it is possible to have axis indices which give rise to different subsequent axes or axis values. Consider for example the datacube in Figure <ref> with the ax2 axis with indices val4 and val5. If we pick index val5, the other axes in the datacube are u and v, whereas if we pick index val4 instead, the other axes in the datacube are x, y and z. This phenomenon can be viewed as a non-regular branching of the datacube axes. This is an important feature of the datacube, which we should take into consideration when thinking about the datacube structure.
In particular, this suggests that there is a natural ordering of the axes, which we should follow when extracting data.
The imbalance in the datacube tree comes from the fact that some datacube axes can have many more indices than others. In our example datacube above, imagine for instance that the u and v axes each only have 2 index values, whereas the x, y and z axes each have 10 index values. This implies that the val4 index has many more children than the val5 index and makes this particular datacube very imbalanced. Again, this is a feature of the datacube which is important to remember in order to understand the complete datacube structure.
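The example above can be written down directly as a nested mapping, which makes the non-regular branching and the imbalance explicit:

```python
# The example datacube as a nested mapping: index val5 of axis ax2 branches into
# axes (u, v) with 2 indices each, while val4 branches into (x, y, z) with 10 each.
datacube = {
    "ax2": {
        "val4": {"x": list(range(10)), "y": list(range(10)), "z": list(range(10))},
        "val5": {"u": [0, 1], "v": [0, 1]},
    }
}
```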
§.§ Slicer
The core of the Polytope feature extraction algorithm is the slicer, which contains a novel slicing step on the datacube indices. The slicing algorithm introduced here is of particular relevance, as it supports non-orthogonal slicing across arbitrary ordered axes.
This is in contrast to most state of practice data extraction techniques, which only support range selections on individual axes <cit.>. Indeed, current state of practice data extraction techniques often only cut boxes of data, whereas our slicing algorithm has the capability of cutting polytopes of data. It is also important to note that the slicing algorithm introduced here works on all ordered axes, without any specific constraints about the type of indices stored on these axes. Moreover, as the algorithm is able to handle shapes of arbitrary dimensions, it can be used to extract various low- and high-dimensional queries, making it a highly versatile technique.
§.§.§ Concept
The slicing algorithm used in Polytope differs from others as it is capable of slicing non-orthogonally along datacube axes. By leveraging results in the field of computational geometry, it can extract any convex polytope from the original datacube.
The underlying concept is that we successively slice the requested polytope along each axis in the natural axis ordering using hyperplanes, reducing the dimensionality of the polytope at each step until we are left with a list of all points contained in this polytope.
§.§.§ Ordered vs Categorical Axes
As mentioned earlier, the slicer handles ordered and categorical axes slightly differently.
In particular, categorical axes do not support range queries and thus we can only ask for specific values on these axes, instead of polytopes. For categorical axes, the algorithm therefore only has to check whether the queried indices exist in the datacube, as would happen in every other traditional extraction algorithm.
The true innovation of the Polytope extraction technique is its ability to handle arbitrary polytope requests, which it achieves by introducing a new slicing step along the ordered axes.
Note however that this slicing technique only works on ordered axes for two reasons.
Firstly, since it is only possible to define and request ranges on ordered axes, it also only makes sense to define polytopes along such axes. Secondly, the slicing step introduced below only works on indices which can be interpolated. As we now explain, these are in fact precisely the ordered axes' indices. Indeed, note that, for the purposes of our algorithm, we assume that all of the ordered axes are measurable and linear axes, which can have continuous index values. We make this assumption even for ordered axes which are only truly countable with gaps between their indices. Because all ordered axes have some comparison operation, this is a valid assumption. This then implies that we can perform interpolation on all of the ordered axes' indices. The slicing step thus works on all ordered axes, but not on the categorical axes.
§.§.§ Slicing Step
The actual slicing step is quite straightforward: the slicing mechanism merely consists of finding the intersection of a polytope with a hyperplane along a datacube axis. We first separate the vertices of the polytope into two groups, one on each side of the hyperplane. We then linearly interpolate between each pair of vertices with one vertex taken from each group, which yields the interpolated point lying on the slice plane. Once we have done this for all such pairs, we obtain a lower-dimensional polytope on the slice plane, which is in fact just the intersection of the original polytope with the slice plane, as wanted. This can be seen in Figure <ref> for some 2D and 3D examples.
As the original polytope is convex, this new intersection polytope is trivially also convex. As an optimisation step, we can thus take the convex hull of the intersection points at the end, using the QuickHull algorithm <cit.> for example. This does not change the lower-dimensional polytope because it is convex, but removes all interior vertices in its definition. As we slice high-dimensional polytopes, this can lead to major performance improvements. Indeed, without this last step, the number of vertex points in the polytope definition grows quadratically with each slice, which would considerably slow down the algorithm.
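A minimal sketch of one such slicing step is given below (illustrative, not the package's actual implementation): the polytope is given as an array of vertices, the hyperplane is axis-aligned, and the optional convex-hull pass removes interior points of the lower-dimensional result.

```python
# Sketch of one slicing step: intersect a convex polytope (rows = vertices) with
# the axis-aligned hyperplane {x[axis] = value}, via linear interpolation between
# vertex pairs on opposite sides, then prune interior points with a convex hull.
import numpy as np
from scipy.spatial import ConvexHull

def slice_polytope(vertices, axis, value, tol=1e-12):
    vertices = np.asarray(vertices, dtype=float)
    below = vertices[vertices[:, axis] < value - tol]
    above = vertices[vertices[:, axis] > value + tol]
    on_plane = [v for v in vertices if abs(v[axis] - value) <= tol]
    crossings = [p + (value - p[axis]) / (q[axis] - p[axis]) * (q - p)
                 for p in below for q in above]
    points = np.array(on_plane + crossings)
    if points.size == 0:
        return points                              # the hyperplane misses the polytope
    lower = np.delete(points, axis, axis=1)        # the result lives on the slice plane
    if len(lower) > lower.shape[1] + 1:
        try:
            lower = lower[ConvexHull(lower).vertices]   # drop interior vertices (QuickHull)
        except Exception:
            pass                                   # degenerate (flat) slice: keep all points
    return lower
```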
§.§.§ Index Tree Construction
To ensure that we slice through all the requested polytopes defined on different axes of the datacube, we need to carefully keep track of which step in the extraction we are in. The way we achieve this in the Polytope extraction technique is to iteratively build an index tree.
We build the index tree by slicing along successive axes on the datacube one after another. For each axis of the datacube, we first find the polytopes defined on that axis. We then find the discrete indices on that axis contained within the extents of those polytopes and add them as children to the index tree. Next, we slice the necessary polytopes along each of the discrete datacube indices to obtain lower-dimensional polytopes. As shown in Figure <ref>, these lower-dimensional polytopes are the intersection of the higher-dimensional polytopes with each of the axis indices slice hyperplanes. These new polytopes are the next polytopes we would like to now extract from the datacube. The algorithm therefore continues as before on these lower-dimensional polytopes if they exist. This process is re-iterated in Algorithm <ref>.
Note that this works well on ordered axes. On categorical axes however, the slicing step is ill-defined as interpolation between indices is not possible. Nevertheless, recall that polytopes defined on categorical axes are in fact 1D points and thus instead of slicing, we only need to check whether those points exist in the datacube. Indeed, slicing does not matter in this case as the points are 1 dimensional and slicing, if it were well-defined, would therefore not produce any lower-dimensional polytopes anyway. We thus conclude that the process for constructing index trees presented in Algorithm <ref> does in fact work well for categorical axes as well.
Algorithm <ref> implies that we construct the index tree breadth-first (layer by layer), instead of depth-first (constructing branches one after the other). This approach ensures that the algorithm does not lose track of which values inside the requested polytopes have already been found. It thus ensures that users get back all the points contained in the shape they requested.
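As a rough illustration of this breadth-first construction, the sketch below (reusing slice_polytope from the previous snippet) handles the simplified case of a single polytope defined over all ordered axes; the dictionary-based tree and the helper names are illustrative assumptions, not the library's data structures.

def build_index_tree(polytope, axes, datacube_indices):
    """Breadth-first index-tree construction (simplified: one polytope, ordered axes only).
    axes: ordered axis names; datacube_indices: {axis: sorted discrete index values}."""
    root = {"index": None, "children": []}
    frontier = [(root, polytope)]                  # current layer of (tree node, polytope)
    for depth, axis in enumerate(axes):
        next_frontier = []
        for node, poly in frontier:
            if len(poly) == 0:
                continue
            lo = min(v[depth] for v in poly)       # polytope extent on this axis
            hi = max(v[depth] for v in poly)
            for idx in datacube_indices[axis]:
                if lo <= idx <= hi:                # discrete index inside the extent
                    child = {"index": (axis, idx), "children": []}
                    node["children"].append(child)
                    # Lower-dimensional polytope to continue with on the next axis.
                    next_frontier.append((child, slice_polytope(poly, depth, idx)))
        frontier = next_frontier                   # move on layer by layer (breadth-first)
    return root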
Algorithm: Polytope Slicing Algorithm
§ APPLICATIONS
The Polytope data extraction algorithm has a wide range of interesting applications, from meteorology to healthcare. In this section, we first introduce the different Polytope interface levels before discussing some Polytope applications, describing specific examples and how Polytope has improved access to data in those cases.
§.§ Interface
To facilitate interaction with the Polytope feature extraction algorithm, which only accepts polytopes as input, different interfaces can be implemented.
These interfaces serve as the platforms through which users interact with the extraction algorithm: users submit their request shapes to them and, after the algorithm has run, retrieve their desired data from them.
To accommodate different types of users, several interface levels exist, which let users request a wide range of request shapes, from the low-level generic convex polytope to higher-level specialised requests. The two in-built low- and high-level Polytope interfaces are shown in Figure <ref>. It is also possible to build domain-specific interfaces on top of these built-in interfaces, also shown in Figure <ref>. Each level is built on top of another with the domain-specific interfaces using shapes from the high-level interface, which itself depends on the low-level interface.
These distinct interface levels are useful because depending on specific needs and familiarity with the Polytope extraction technique, users might want to request different types of shapes from our algorithm.
Through the domain-specific interfaces, users can request domain-specific functions. For example, a meteorological interface could be built to facilitate access to time-series, trajectories or country extraction, similar to the OGC EDR<cit.> standard.
Through the built-in high-level interface, users can request primitive shapes, such as disks or boxes, and then use constructive geometry operations, such as taking unions or sweeping along a path, to build more complicated shapes. Finally, through the low-level interface, users can directly provide a list of convex n-dimensional polytopes, specified by a list of their vertices.
Each level is built on top of its lower-level counterpart, so that shapes in a higher level are always defined by shapes in one of the lower levels. This implies that shapes in any of the interface levels are in fact always defined as a combination of convex low-level polytopes. These low-level polytopes are the building blocks of all possible Polytope requests. The interface is responsible for decomposing all user request shapes into these base convex polytopes. In the rest of the software, we can then work only on these convex polytopes and take a unified approach towards slicing any user request shape.
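The following sketch illustrates this decomposition idea with a hypothetical high-level interface; the class names and methods are invented for illustration and are not the actual Polytope API.

class Box:
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
    def polytopes(self):
        # A 2D box decomposes into a single convex polytope given by its corners.
        (x0, y0), (x1, y1) = self.lower, self.upper
        return [[(x0, y0), (x1, y0), (x1, y1), (x0, y1)]]

class Union:
    def __init__(self, *shapes):
        self.shapes = shapes
    def polytopes(self):
        # A union simply concatenates the decompositions of its sub-shapes.
        return [p for s in self.shapes for p in s.polytopes()]

request = Union(Box((0, 0), (1, 1)), Box((1, 0), (2, 1)))
convex_polytopes = request.polytopes()   # what the extraction algorithm actually consumes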
§.§ Polytope Use Cases
We now describe some examples of how the Polytope feature extraction algorithm can be used in the fields of meteorology and healthcare.
§.§.§ Meteorology
At the European Center for Medium-Range Weather Forecasts (ECMWF), about 300 TiB of numerical weather prediction (NWP) data are produced daily. This data is very high-dimensional and is usually represented as a datacube of 7 or 8 dimensions depending on the forecast type.
Over the next few years, following the pioneering work of the Destination Earth initiative <cit.> with planned resolution increases in the weather model, data production will grow to about a petabyte of data a day.
The current data extraction mechanism implemented at ECMWF is one of the traditional bounding box approaches. When a user wants to extract data on a country for example, they would need to send a request for a bounding box around that country. Moreover, the current extraction technique requires either full data fields or at very best bounding boxes of data fields to be read from the system even when users only request a smaller portion of data. With future petabyte-scale datacubes, this approach will become impractical, especially when trying to accommodate for thousands of users.
The Polytope extraction technique helps alleviate many of the challenges faced by the system in this case.
It makes returning data to users much more efficient because only the required bytes are read from the I/O system.
Below, we provide a few practical examples and use cases where Polytope might help meteorological data users extract data more efficiently.
*Timeseries
Imagine a user interested in extracting the temperature over Italy for the next two weeks.
She would currently have to transpose the temperature fields along the time axis to be able to then individually extract each temperature field at a given timestep. For each timestep, she would have to cut the shape of Italy from the bounding box she retrieved before finally getting the exact data she wanted. With the Polytope extraction technique, she can instead directly request the timeseries over Italy and get back only the precise bytes she is interested in, as shown in Figure <ref>. Note that compared to the 3D bounding box the user would currently retrieve, we see a data reduction of more than 73% when using Polytope. Furthermore, note that meteorological data users are usually more interested in extracting data over particular cities or specific points in space rather than whole regions. However, since users currently first have to transpose their data and then retrieve bounding boxes around their locations of interest, in most cases they directly extract data over broader regions than just the specific locations they would like to access. With the Polytope algorithm, complicated pre-processing manipulations before extraction are not needed anymore and users only retrieve the relevant timeseries data from the datacube.
*Flight Path
Now, imagine a user interested in the flight conditions over his plane journey from Paris to New York. Using the current extraction technique, he would get back a 4-dimensional box over 3D space and time, containing much more data than what he is interested in. With the Polytope extraction technique, he will instead get back only the specific datacube points he is interested in, without any need for post-processing, as shown in Figure <ref>b. Note that compared to the 4D box the user would currently get back, with Polytope, we experience a data reduction of more than 99.99%.
In both cases, the requested shapes are not axis-aligned and are therefore also not well approximated by bounding boxes. We thus see a significant data reduction when using the Polytope extraction technique compared to the traditional bounding box approach. Importantly, we observe that I/O is reduced when using the Polytope extraction algorithm. Moreover, using the Polytope algorithm is particularly useful for the users, who do not need to do any post-processing to their data in order to get their requested shape.
§.§.§ Healthcare
Similarly to the weather forecasting industry, the healthcare industry faces complex data handling challenges. Already in 2019, <cit.> estimated hospitals to generate tens of petabytes of data a year. As discussed in the previous example, working on this amount of data is extremely difficult and a tool like the Polytope algorithm could significantly help alleviate much of the difficulty involved.
A particular example of how Polytope can be used in the healthcare field is provided below.
*MRI Blood Vessel Detection
A clinically relevant application of MRI is the detection and characterization of plaque formation in (potential) stroke patients. This requires high-resolution scans using multiple MRI contrast weightings to comprehensively characterize the size and composition of plaque components. Using current extraction techniques, a clinician would have to download multiple entire MRI scans and then manually extract and compare the relevant data of interest from each of those scans. With the Polytope extraction technique however, it is possible to directly extract the required multi-contrast blood vessel data without further delay or expensive post-processing work, as is shown in Figure <ref> for a single high resolution black-blood vessel wall MRI dataset <cit.>.
§ PERFORMANCE AND SCALABILITY
Polytope is predicted to considerably decrease the computational cost of extracting non-orthogonal data from datacubes. In this section, we justify this claim by first analysing the performance of the Polytope algorithm and then investigating the data reductions achieved on practical use cases when using the Polytope algorithm instead of traditional extraction methods. We conclude this section by discussing these results and their significance.
§.§ Performance
There are many factors impacting the performance of the Polytope algorithm. To characterise it better, we identified some of the key features affecting how long the Polytope algorithm takes to extract points from datacubes: the number of extracted points, the dimension of the input shape, its geometry, and how the input shape was constructed by the user.
Consider two new time quantities, the total slicing time and the algorithm run time. The slicing time is the total accumulated time spent just slicing, without constructing the index tree. The total algorithm run time however is the time it takes to perform all of Algorithm <ref>, including both the slicing time as well as the time spent constructing the whole index tree.
In Figure <ref>, we have plotted both of these time quantities in different settings. In each subplot, we have varied one of the previously identified features and kept all others constant. This lets us gain an understanding of how each individual feature influences the performance of the Polytope algorithm. Note also that we have not included the time taken in I/O to fetch the data from storage, as this depends on the storage medium we use and is not strictly part of the Polytope algorithm.
In Figure <ref>, we first plot both the slicing time as well as the total algorithm run time for request shapes of different dimensions. We observe that the dimension of the shape does not significantly impact the algorithm's performance. This is due to the fact that, even when slicing higher-dimensional shapes, the Polytope algorithm spends most of its time slicing lower-dimensional polytopes. In fact, note that the number of polytopes to process at each step in the algorithm quickly grows every time we slice to a lower dimension.
For example, imagine we slice a 4D box which contains two indices on each dimension. We first have to perform 2 4D slices. This then gives us 2 3D boxes, which we now have to slice. Each of these 3D boxes still has two indices on each dimension along which we need to slice. For each 3D box, we thus have to perform 2 3D slices. Considering there are 2 3D boxes, this implies we have to perform 4 3D slices in total. If we continue this logic, at the end of the algorithm, we will have performed 2 4D slices, 4 3D slices, 8 2D slices and 16 1D slices. This illustrates why the lower-dimensional slices do in fact take up most of the algorithm's slicing time.
Furthermore, we note in Figure <ref> that the slicing time is much lower than the total algorithm run time.
This is because Polytope is currently using the XArray <cit.> library for datacube implementation, and relies on XArray to look up discrete axis indices on the datacube. This is a step we believe still needs to be optimised by developing more efficient datacube look-up mechanisms and alternative datacube implementations. Meanwhile, in Figures <ref>-<ref>, we use the slicing time rather than the total algorithm run time to estimate Polytope's performance, thus excluding this dependency.
Figure <ref> shows the behaviour of the slicing time in more detail. In particular, we notice that like the total algorithm run time in Figure <ref>, the slicing time does not seem to depend on the dimension of the input shape. We also observe that the slicing time grows linearly with the number of datacube points the algorithm finds in the input shape. As discussed before, this is due to the fact that most of the slicing time is spent performing 1D slices. Indeed, increasing the number of points contained in the shape is effectively equivalent to increasing the number of 1D slices to perform to find those points. As it is those slices that make up most of the slicing time anyway, it is thus natural that the performance of the algorithm grows linearly with the number of points contained in the shape.
In Figures <ref> and <ref>, we now investigate the impact of the input shape's geometry and how it was constructed by the user.
In Figure <ref>, we first study how constructing a shape by taking a union of smaller sub-shapes affects the performance of the algorithm compared to directly specifying the input shape as one single object in 2 dimensions. We see that the performance when the shape is constructed using unions is worse than when the shape is specified as one single object. This is due to the fact that when we request shapes as unions of sub-shapes, we first slice each sub-shape individually in the algorithm before combining the results of these steps into one single output. Because we first slice each sub-shape individually, we in fact slice along all the sub-shapes edges. As the sub-shapes touch along their edges, we thus end up slicing along the edges several times, which increases the slicing time compared to when this does not happen in the non-union shape case. This is relevant where the input geometry has been produced via triangulation or mesh generation.
In Figure <ref>, we finally analyse how Polytope's different 2-dimensional high-level API primitive shapes, namely the box, disk and polygon shapes, influence the algorithm's performance. Here, we observe that the box and polygon shapes perform similarly, while the disk shape has a slightly worse performance than the other two. Because the polygon shape we inputted is actually a square, we can conclude from this observation that the algorithm's performance is mostly impacted by the number of vertices of the shape, as well as by how “non-orthogonal” it is. In particular, if the shape has many edges cutting across some axes, then it is less likely that there will be duplicates when we compute the intersection points in the slicing step. We will thus then have to perform more slicing later on in the algorithm. The same logic holds if there are more vertices in the shape, which also increases the number of intersection points computed in the slicing step.
§.§ Bound on Number of Slices
As we just discussed, the time quantity of interest to us in evaluating the performance of the Polytope algorithm is the slicing time. The slicing time largely depends on a related quantity which is the number of slices performed during the algorithm. In this subsection, we quickly determine a theoretical upper bound to this related quantity.
Suppose we query an m dimensional shape using Polytope and let n_i be the maximal number of discrete indices stored along each of the i=1, …, m axes of the datacube which are contained within the requested shape. Since we do not know a priori exactly how convex or non-box-like the requested shape is, we have to assume the worst-case scenario that the shape is in fact a box.
To find the datacube points within that worst-case box shape, the Polytope algorithm now first has to slice n_1 times along the first axis dimension. This produces n_1 (m-1)-dimensional box shapes. We then have to slice each of these lower-dimensional box shapes n_2 times along the second dimension. This creates an additional n_1 × n_2 slices.
If we continue this process up to the last 1D slices, we finally see that the number of slices performed during the Polytope algorithm, N_slices, is bounded by
N_slices ≤ n_1 + n_1 × n_2 + … + n_1 × … × n_m = ∑_i=1^m ∏_j=1^i n_j.
We see that, as expected, this upper bound is dominated by the number ∏_i=1^m n_i of 1D slices, which will take up most of the slicing time.
As we saw in the previous subsection however, 1D slices are relatively inexpensive to perform and in all of the examples in Figure <ref>, the slicing time remains under a second.
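A small script makes this bound concrete; the helper name below is ours, and the example reproduces the slice counts of the 4D box discussed earlier (2 + 4 + 8 + 16 = 30 slices in total).

import math

def max_slices(n):                       # n = [n_1, ..., n_m], indices contained per axis
    return sum(math.prod(n[:i]) for i in range(1, len(n) + 1))

# Worst-case 4D box with two indices on each dimension:
# 2 (4D) + 4 (3D) + 8 (2D) + 16 (1D) slices.
print(max_slices([2, 2, 2, 2]))          # 30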
§.§ Data Reductions
Although the Polytope algorithm represents an additional step to perform before extracting the data and it might thus at first glance seem like it has a much higher time complexity than traditional extraction approaches, it is important to remember the true purpose of the Polytope algorithm.
Polytope is a tool which computes the precise bytes of data a user wants to access. Using this tool therefore implies that users only extract exactly the data points they need, which significantly reduces the number of points to be read from the I/O system compared to the alternative "bounding box" extraction techniques. The exact data reduction statistics for the examples mentioned in the previous section are shown in Table <ref>.
In Table <ref>, the first 3 columns show the number of bytes retrieved when using different extraction techniques. In particular, note the clear distinction between the first two columns which differentiate the bounding box approach described earlier from the state of practice extraction methods taken in the fields of meteorology and healthcare respectively, which are even less optimal than the bounding box approach. Indeed, it is important to note here that one of the widely used extraction approaches in the field of meteorology for example is to extract whole fields, which are 2D arrays of latitude and longitude around the whole globe, from datacubes. Similarly, in the field of healthcare, MRI scans are currently stored as 3D images. The bounding box approach is thus already a clear improvement compared to these approaches. As we see in Table <ref> however, Polytope performs even better than the bounding box approach. This can be clearly observed in the fifth column, where we provide the reduction factor of the data retrieved when using the Polytope algorithm compared to the bounding box approach. The sixth column shows the total reduction of the data retrieved when using the Polytope algorithm compared to the state of practice extraction methods taken in the meteorology and healthcare fields. The final two columns then show the two slicing and total algorithm run times discussed above for each of our example shapes.
Along the rows, we differentiate between different types of shapes. On the first 3 rows, we first test the Polytope algorithm on shapes that are defined orthogonally along their axis and which could be directly extracted using the bounding box approach. For these 3 rows, as we see in the fifth column, using the Polytope algorithm instead of the bounding box approach does not reduce the size of the retrieved data further. Note however that running the Polytope algorithm in these three examples does not take significant time. In the latter 4 rows, we then experiment using the Polytope algorithm to retrieve more complicated non-orthogonal or axis-aligned shapes. Already for country shapes in 2D, we see that there is a significant data reduction when using the Polytope algorithm compared to the bounding box approach, with a reduction factor of up to 6 times in some cases. When considering higher dimensional shapes, and especially “path"-like shapes such as flight paths, we experience an even higher reduction factor. Indeed, in the 4D case of the flight path from Paris to New York mentioned above, about 350 times less data is returned to the users when using the Polytope algorithm instead of the bounding box approach. Again, note here that in most examples, the Polytope algorithm takes below a second to run whilst reducing the retrieved data size by a factor of at least 1000 compared to the traditional approaches.
Importantly here, note that Polytope is able to perform the exact same orthogonal extractions as the bounding box approach in minimal time, whilst significantly outperforming the bounding box approach when extracting more complicated shapes. This suggests that the Polytope algorithm performs at least as well as the bounding box approach and thus makes it a strong competitor to this approach.
§.§ Discussion
Note that in subsection 5.1 above, in Figure <ref>, we did not include the time spent extracting data from the datacube. This is because this data extraction time is very dependent on the storage medium on which the algorithm is run. We expect that the cost of performing the Polytope algorithm will be significantly less than the savings made by retrieving less data. In particular, we suspect that hardware that supports high performance random-read, such as flash based devices for example, will benefit massively from the Polytope algorithm.
Compared to traditional methods, the Polytope algorithm is an additional step in the extraction process which takes time to run. However, as we saw in this section, and in Figure <ref> especially, the Polytope algorithm is efficient and scalable, being able to locate more than a million points in less than half a second. Moreover, as already mentioned, when discussing the performance of the algorithm, it is especially important to also consider its wider role in the total extraction process. As we saw in Table <ref>, the Polytope algorithm allows users to extract much less data than they would have done using a more traditional approach. As reading and returning data is usually a costly operation, this implies that, when incorporated in a complete extraction pipeline, the Polytope algorithm will make data extraction more efficient than the current state of practice. The slightly more expensive slicing mechanism inside the Polytope algorithm will thus be outweighed by the actual performance improvement of the whole data extraction pipeline.
Before being able to quantify the true performance and benefits of the Polytope algorithm, we will therefore need to perform a more in-depth analysis of its behaviour within a complete data extraction framework. This more detailed analysis is the subject of ongoing work and material for a future communication.
§ CONCLUSION
In this paper, we introduced a new data extraction algorithm called Polytope, which has the capability of extracting arbitrary geometrical shapes from a datacube. This new technique allows users to directly compute the precise bytes of interest to them before requesting these bytes from a datacube. This approach leads to many benefits, both for the users and the data providers. For data providers, much less I/O is needed whereas for users, the need for further post-processing after extraction is alleviated. We described the structure of this novel extraction algorithm and explained in more detail some of its key features. We then showed a few use cases of the Polytope extraction technique before finally analysing the performance of this method.
Future steps include performing a more in-depth analysis of the algorithm performance, as well as a rigorous discussion of Polytope's use cases in different scientific fields.
§ ACKNOWLEDGMENTS
This work is an important contribution to, and is funded by, the EU’s Destination Earth initiative. The authors would also like to thank Matthijs de Buck and the Nuffield Department of Clinical Neurosciences at the University of Oxford for providing data for the MRI Blood Vessel Detection use case.
|
http://arxiv.org/abs/2307.00104v1
|
20230630194543
|
Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems
|
[
"Uma Meleti",
"Abolfazl Razi"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] |
Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems
Uma Meleti
School of Computing
Clemson University, Clemson, SC, USA
[email protected]
Abolfazl Razi
School of Computing
Clemson University,Clemson, SC, USA
[email protected]
============================================================================================================================================================================================
This research paper addresses the challenge of detecting obscured wildfires (when the fire flames are covered by trees, smoke, clouds, and other natural barriers) in real time using drones equipped only with RGB cameras. We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences. Our approach utilizes an encoder-decoder deep convolutional neural network with a pre-trained CNN encoder and 3D convolutions for decoding, while using sequential stacking of features to exploit temporal variations. The predicted fire locations can assist drones in effectively combating forest fires and pinpoint fire-retardant chemical drops on exact flame locations.
We applied our method to a curated dataset derived from the FLAME2 dataset that includes RGB video along with IR video to determine the ground truth.
Our proposed method has the unique ability to detect obscured fire and achieves a Dice score of 85.88%, a high precision of 92.47%, and a classification accuracy of 90.67% on test data, showing promising results when inspected visually.
Indeed, our method outperforms other methods by a significant margin in terms of video-level fire classification as we obtained about 100% accuracy using MobileNet+CBAM as the encoder backbone.
Wildfire Monitoring, Obscured Fire Detection, Unmanned Aerial Vehicles, Temporal Video Analysis
§ INTRODUCTION
Wildfires have become prevalent and destructive in many parts of the world. Regardless of the cause, wildfires can have severe consequences, including the loss of human lives, destruction of property, disruption of wildlife, food production, and crop supply chain as well as significant environmental damage. Once a wildfire starts, there are various ways to monitor and control these wildfires, including observation towers, direct human intervention, satellite imaging, and manned aircraft. Using drones is one of the most efficient ways of fire monitoring for its low operation cost, customizable sensing and imaging, flexible operation, and ease of use in harsh environments by advanced flight control features and partial autonomy (e.g., safe auto-landing).
One of the most significant advantages of using drones in wildfire fighting is the ability to gather real-time data about the fire’s behavior. Equipped with cameras and other sensors, drones can fly over the fire and capture relevant information. Drones can also be equipped with water tanks or fire extinguishers containing fire-retardant chemicals, such as carbon dioxide (CO2), potassium bicarbonate (KHCO3), or evaporating agents like bromochlorodifluoromethane (CF2ClBr), to be dropped on hot spots to create fire breaks. The efficient utilization of drones will significantly advance the control of these fires.
Targeting places where actual burning is happening, instead of blindly spraying fire-retardant gases everywhere, will help put these fires out quickly and efficiently. Locating these burning places in real time is very difficult, especially when fire flames are obscured by thick smoke. An infrared camera can help find these hidden fires, but IR cameras are expensive.
We devised a methodology that uses an RGB camera feed, analyzes the video frames sequentially, and detects the obscured fires using temporal features, such as smoke patterns, extracted from video frames. We show that such features can be indicative of
the fire's exact location and temporal behavior. We have structured the problem as semantic segmentation of the obscured
fire by analyzing the sequence of frames in the video. A deep convolutional neural network (CNN) is designed that uses a pre-trained CNN architecture as an encoder to extract features of video frames and passes sequentially stacked features to the decoding stage, which uses 3D convolutions <cit.> to analyze these features and predict burning locations. UAVs and drones can use this predicted information for more guided and informed fire monitoring and control.
Specifically, we propose a novel method for obscured fire detection that can be used for other applications beyond forest fire management.
To this end, we curated a dataset from the existing FLAME2 <cit.> dataset
by selecting part of video frames from the original data where the drone is stationary and there is high synchronization with the IR images to avoid misalignment errors. We also use the corresponding IR videos to extract ground truth for our task by performing a series of Image Processing operations. We have visually verified the correspondence of RGB video with the processed IR video for synchronization.
We highlight the unique features of our method compared to previous methods, then proceed by elaborating on the details of the generated dataset and its preparation method. We then elucidate the details of the proposed deep learning (DL) architecture and analyze the obtained results.
§ RELATED WORK
The previous works on fire detection are mainly based on image-based techniques such as classification, object detection, and semantic segmentation of visible fire imagery. Wonjae Lee et al. proposed a wildfire detection system that classifies the presence of fire in images and evaluated the performance of AlexNet, GoogLeNet, and VGG, along with their modified variants <cit.>. Zhentian et al. have trained YOLOV3 for fire detection and reported a recognition rate of 91% <cit.>. An ensemble-based object detection method using YOLOV5 EfficientDet is proposed in <cit.> for detection and EfficientNet to capture global information about the fire. Their study showed a decrease in false positive rate by 51.3% on three public datasets. Yo Zhao et al. have proposed a deep learning architecture, called Fire-Net by stacking convolutional and pooling layers for fast localization and segmentation of fire in aerial images with an accuracy of 98 % on standard 'UAVFire' dataset <cit.>. A similar method is proposed in <cit.> and <cit.> by using a deep learning architecture to classify frames into "fire with smoke", "fire with no smoke", and "no fire" using FLAME datasets <cit.>. However, most of these methods are limited to images, where real-time data for fire detection will be mostly in terms of video feed. Further, video feeds contain temporal patterns of smoke that facilitate locating the origin of smoke which is the fire location, and distinguishing them from clouds and other white patterns. This concept is used as a key idea in our method to detect obscured fire positions.
A few works take a slightly different approach and analyze fire images patch by patch instead of a one-shot analysis of the entire image. For instance, a CNN-based deep learning method is proposed in <cit.> which classifies the image first and then performs a patch-level analysis to offer more detailed information. They applied their method to video frames to perform patch-wise detection and reported a 97% detection accuracy on their own dataset. Still, this method does not consider the temporal relationship for fire classification since it treats video frames as still images. In a similar work, Gwangsu Kim et al. proposed an algorithm that collects features of video frames using a pre-trained VGG and stacks them together to be passed through a series of fully connected layers to classify the presence of fire in the video clips. However, this method is restricted to classifying fire only when it is visible, and is therefore ill-suited to capturing obscured fire.
Anshuman et al. <cit.> have proposed SmokeyNet, a deep learning algorithm that stacks a CNN, an LSTM, and a Vision Transformer to detect smoke in video feeds captured by stand-alone cameras. However, this method is not directly applicable to aerial imagery.
Some other research works take advantage of Infrared (IR) cameras for more accurate fire positioning.
For instance, Chi Yuan et al. proposed an algorithm that uses brightness and motion clues with histogram-based segmentation and optical flow to segment fire in IR images <cit.>. Another example is Norsuzial et al.'s work that offers an Image processing-based approach to convert IR images to YCbCr color and use a wavelet analyzer to detect fire <cit.>. Also, a DL architecture is proposed in <cit.> that analyzes dual-feed imagery captured by side-by-side RGB and IR cameras for precise fire positioning.
Although these methods yield high accuracy by taking advantage of the thermal information captured by IR cameras, they incur an extra monitoring cost due to their reliance on expensive IR cameras. Also, they are not suitable for processing existing drone-based and satellite-based datasets that include only RGB imagery.
In contrast to all the above, our method uses only RGB videos for detecting both visible and obscured fire flames in an economical way.
§ DATA PREPARATION
In this study, we have used the publicly available FLAME2 dataset <cit.>, which consists of 7 video pairs of RGB and corresponding infrared heat maps. Out of those, we have employed five relevant videos in our simulations because these videos contain both visible fire and obscured fire, appropriate for our test. The videos were taken in a planned burning region and capture forest burning with smoke. The drone moves around the area, covering different parts of the woods. For experimentation purposes, we have carefully cropped the parts of the video where the camera is relatively stationary and there is a high alignment between the RGB and IR camera viewpoints. The selected video segments are split into
clips of 20 consecutive frames to train our deep neural network, where each clip is considered a training sample.
§ METHOD
§.§ Data Pre-Processing: Using IR to Label RGB images
The IR images consist of heat maps corresponding to the temperature of different regions on the image. The place where the fire is present generally has a high temperature, and the pixel values in the heat map are closer to the maximum. We have extracted the ground truth for training data where the fire is present by processing the IR image with a series of hand-crafted image processing methods. The set of operations performed on the IR image is shown below.
IR Image → Smooth Image (5 × 5) → Hard Thresholding → Dilation (5 × 5, 2 times) → Fill (flood fill) → Erosion (5 × 5, 1 time) → Remove small objects (200 px) → Ground Truth.
We initially smooth the image using a low-pass filter to remove noise, then use hard thresholding to select the regions of high temperature values corresponding to fire. The resulting image is dilated to fill small gaps and to make the fire boundary smooth; the spaces not filled by the previous operations are filled using flood fill, which results in a complete blob at the fire location. The image is then eroded to reverse the effect of the dilation applied earlier. Finally, small blobs that are likely to represent noise are removed.
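A possible implementation of this pipeline using SciPy and scikit-image is sketched below; the kernel sizes and the 200-pixel blob threshold follow the description above, while the hard-threshold value is a free parameter that we leave as an assumption.

import numpy as np
from scipy import ndimage
from skimage import morphology

def ir_to_ground_truth(ir_frame, t=0.9):
    """ir_frame: 2D float array in [0, 1]; returns a binary fire mask."""
    smoothed = ndimage.uniform_filter(ir_frame, size=5)          # 5x5 smoothing
    mask = smoothed > t                                          # hard thresholding
    kernel = np.ones((5, 5), dtype=bool)
    mask = ndimage.binary_dilation(mask, kernel, iterations=2)   # dilation (5x5, 2 times)
    mask = ndimage.binary_fill_holes(mask)                       # flood-fill the interior
    mask = ndimage.binary_erosion(mask, kernel, iterations=1)    # erosion (5x5, 1 time)
    mask = morphology.remove_small_objects(mask, min_size=200)   # remove small blobs (200 px)
    return mask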
Note that we use IR images to identify ground truth fire locations and train the model, but in runtime (new monitoring tasks), we only use RGB images, so our method does not require expensive IR cameras on site.
§.§ Ground Truth Approximation
The main goal is to take a sequence of input frames and predict where the fire hides. Since we obtain ground truth from the IR camera feed, every video frame has a pixel-wise map representing the ground truth. However, we need to define a single ground truth for each sample (i.e. 20-frame clip). To this end, we have approximated the ground truth by applying majority voting to the pixel labels obtained from the 20 IR video frames.
More specifically, we have
L_i,j^* = majority_class(L_i,j^1, L_i,j^2, …, L_i,j^seq_len),
where (i,j) determines the pixel location, L_i,j^* is the final label, and the superscript is the frame number within the clip. In our case, the label is binary, so the class is either 0 or 1.
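For binary labels, the majority vote reduces to a per-pixel count, as in the following sketch (the array layout and the tie-break towards the background class are our assumptions).

import numpy as np

def clip_ground_truth(frame_labels):
    """frame_labels: array of shape (seq_len, H, W) with values in {0, 1}."""
    votes = frame_labels.sum(axis=0)                  # number of "fire" votes per pixel
    # Strict majority; a tie (e.g. 10 of 20 frames) is resolved towards background here.
    return (votes * 2 > frame_labels.shape[0]).astype(np.uint8)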
§.§ Network Architecture
We have presented the overview of our architecture in Fig 3. The architecture consists of a pre-trained encoder that encodes features from the video frames along with a 3D decoder that decodes information from the volume of features.
We have used VGG16 <cit.> as the encoder: we pass a sequence of video frames through it, collect each frame's features at different resolutions of the encoder, and stack them to pass to the 3D Decoder, which processes these volumes of features to predict the segmentation map of the hidden fire.
The Decoder consists of two parts; the first part decodes information in both the image and time axis, but more emphasis is put on summarizing the semantic information of the image that will be used by the second part. This enforces the decoder to focus more on capturing the relationship between the semantic features between the frames in the time axis.
§.§ Decoder: Part-1
The design of the first part of the Decoder is inspired by the U-Net architecture <cit.>, where features from multiple resolutions are merged into the Decoder. We extracted features of each frame at resolutions of (HxW)/2, (HxW)/4, (HxW)/8, (HxW)/16, and (HxW)/32, where H, W are the height and width of the input frame. The extracted features are stacked for a sequence of frames. At each resolution, the volume of features is processed by a 3x3x3 convolution block followed by an attention block, which learns the most informative feature representations while reducing the feature space. This architecture retains the dimension of the input
in both the feature domain and time axis.
A deconvolution layer is applied to the bottleneck of the encoder with a (1x2x2) Transposed convolution, which upsamples the feature map and increases the resolution by a factor of 2. The upsampled feature map is then concatenated and fed into the convolution, attention, and deconvolution layers. This process is repeated until the output resolution becomes exactly equal to the input resolution. The ultimate output dimension of this block is (batchsize, nclasses, seqlength, H x W).
§.§ Decoder: Part 2
The Decoder2 consists of a series of Time blocks and a final convolution layer. The Time block consists of consecutive operations of 3D Convolution, Batch Normalization<cit.>, and ReLU Activation. We have chosen a kernel size of (4x1x1) for the convolution so that every block captures information of 4 consecutive frames.
Here we have used a 1x1 kernel size in the feature space and a size of 4 in time-space; the idea is to give more emphasis in time-space than feature space.
In our experiments, we have considered a sequence length of 20 frames as input to predict the output; we added 6 Time blocks that will reduce the feature space in the time dimension and reach a resolution of (2 x classes x H x W) and a final convolution layer is added to reduce to the final resolution of 1 x classes x H x W.
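A minimal PyTorch sketch of one Time block and the second decoder part is given below; the channel widths and the final kernel size are illustrative assumptions chosen so that the time axis shrinks from 20 frames to 1, as described above.

import torch.nn as nn

class TimeBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(4, 1, 1)),  # 4 consecutive frames, 1x1 spatially
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class DecoderPart2(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Six Time blocks shrink the time axis from 20 to 20 - 6*3 = 2 frames ...
        self.time_blocks = nn.Sequential(*[TimeBlock(n_classes, n_classes) for _ in range(6)])
        # ... and a final convolution reduces it to a single frame (assumed kernel size).
        self.final = nn.Conv3d(n_classes, n_classes, kernel_size=(2, 1, 1))
    def forward(self, x):                        # x: (batch, n_classes, 20, H, W)
        return self.final(self.time_blocks(x))  # (batch, n_classes, 1, H, W)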
§.§ Loss Function
We have used Dice loss to measure the alignment between the ground truth and the detected fire regions by the architecture to train the network. The Dice coefficient measures the alignment (similarity)
between two corresponding segments by computing the overlap coefficient. The overlap coefficient ranges from 0 to 1, with 1 indicating a perfect match between the two segments. The Dice loss function is defined as one minus the Dice coefficient, with the objective of minimizing the loss during training. More specifically, we have
DiceLoss = 1 - (2 × Intersection)/(Predicted + Ground Truth) = 1 - (2 ∑_i=1^n p_i g_i + ϵ)/(∑_i=1^n p_i^2 + ∑_i=1^n g_i^2 + ϵ),
where p_i and g_i are the predicted and ground truth segmentation masks, respectively, for the i-th pixel in the image. The summations are taken over all n pixels in the image. The ϵ term is a small constant added to the numerator and denominator to prevent division by zero.
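In PyTorch, the loss above can be written compactly as follows (a minimal sketch; pred holds per-pixel fire probabilities and target the binary ground truth).

import torch

def dice_loss(pred, target, eps=1e-6):
    pred, target = pred.reshape(-1), target.reshape(-1)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.pow(2).sum() + target.pow(2).sum() + eps)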
§ EXPERIMENTS
In this section, we present the simulation results using the FLAME2 dataset, which consists of RGB and IR images; we have trained with pre-processed videos as explained in the Data Preparation step. We have used 354 videos for training and 155 videos for testing. The training and testing videos are independent and contain non-overlapping frames.
§.§ Training
Our model is implemented in PyTorch <cit.> using a Linux machine with Tesla A-100 40 GB GPU.
The models were tuned for the best hyperparameters. We used a step learning-rate schedule with an initial learning rate of 1e-2 and the Adam optimizer <cit.>. The models were trained for 300 epochs with a batch size of 5.
§.§ Inference
Inference in real time, where videos are lengthy, is performed by taking a window of 20 frames and sliding it over the video with a stride of one.
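A sketch of this sliding-window inference is shown below; the model call signature and the frame tensor layout are assumptions, since they depend on how the network is wrapped.

def predict_video(model, frames, window=20, stride=1):
    # frames: tensor of shape (T, C, H, W); the exact clip layout expected by the
    # model (here assumed to be (1, window, C, H, W)) depends on the wrapper.
    masks = []
    for start in range(0, frames.shape[0] - window + 1, stride):
        clip = frames[start:start + window].unsqueeze(0)
        masks.append(model(clip))
    return masks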
§.§ Evaluation Metrics
We are using a set of metrics to evaluate the quality of fire detection. We used the Dice score (presented in (<ref>)) to assess the alignment quality between ground truth and the detected fire region.
This assessment is particularly important when part of the fire is obscured by thick smoke to demonstrate to what extent our model is capable of detecting such fire regions.
Another metric we used is blob-wise precision, to ensure the correctness of our predictions; it is calculated by taking each blob in the ground truth and prediction and, if the intersection area is greater than 30%, considering it as a true positive, else a false positive. Precision is calculated using the formula
Precision = True Positives/(True Positives + False Positives).
We also calculated clip-level classification accuracy to evaluate whether a video containing fire is classified as fire or not. This is computed by counting the number of fire spots in the video and comparing it with the prediction; if more than 30% of the spots are predicted, we classify the video as fire, otherwise as non-fire.
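The snippet below sketches one possible reading of the blob-wise precision, iterating over predicted blobs and counting a blob as a true positive when more than 30% of its area overlaps the ground truth; the exact matching convention is our assumption.

from scipy import ndimage

def blob_precision(pred_mask, gt_mask, overlap=0.30):
    gt = gt_mask.astype(bool)
    labels, n_blobs = ndimage.label(pred_mask)      # connected components of the prediction
    tp = fp = 0
    for b in range(1, n_blobs + 1):
        blob = labels == b
        if (blob & gt).sum() / blob.sum() > overlap:  # >30% of the blob overlaps ground truth
            tp += 1
        else:
            fp += 1
    return tp / (tp + fp) if (tp + fp) else 0.0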
§.§ Quantitative Results
The quantitative results are shown in Table <ref>. We examined four different types of backbones (VGG16 <cit.>, ResNet <cit.>, EfficientNet <cit.>, MobileNet <cit.>) and two different types of attention modules: Spatial and Channel Squeeze & Excitation Blocks (ScSE) <cit.> and the Convolutional Block Attention Module (CBAM) <cit.>. Across all backbones, VGG16 has shown the highest performance in terms of fire region detection alignment (Dice), but EfficientNet-b0 with ScSE has shown a similar Dice score and also superior performance in blob-wise precision. ResNet18+CBAM and MobileNet+CBAM have shown 100% classification accuracy. This shows that our architecture is flexible and different types of pre-trained architectures can be employed as the encoder backbone.
§.§ Qualitative Results
Fig. <ref> is a sample output of our model applied on consecutive video frames (left to right); the top row presents the IR images from which our ground truth is extracted (annotated in white), the middle row corresponds to the prediction (annotated in red), and the last row is zoomed in at the annotated region (yellow). At T=1, fire is slowly starting under the tree and the volume of smoke grows gradually in the next frames. Initially, the model does not detect the obscured fire. However, as time passes, the temporal analysis of the growing fire flames enables the model to detect the obscured flame (shown by red colors).
Fig. <ref> is the output snapped at a particular frame: the left image corresponds to the RGB input, the middle to the IR, and the right includes the ground truth fire region (green line) and the predicted fire region (red line). This image demonstrates that our model detects both visible fire and obscured fire with near-accurate boundaries.
§ DISCUSSION
The quantitative and qualitative performance of our model yields promising results across various backbones, showing that the proposed architecture is flexible in adopting various existing and future pre-trained backbones.
We used IR images offered in the Flame2 dataset to determine the ground truth fire regions. It is noteworthy that the temperature values of IR images are calibrated within the frame values and do not reflect the exact temperature, which should be taken into account in the labeling process.
This study can trigger multiple future works. For instance, further research can focus on refining and expanding our methodology,
considering other environmental factors that may affect fire behavior. The application of our approach in practical scenarios, developing onboard processing software, and integration with existing wildfire management systems can provide valuable insights for future developments.
§ CONCLUSION
In this paper, a novel approach is proposed for detecting obscured fires in real-time using video feeds captured by drones
equipped only with RGB cameras. The key idea was training a model that treats a video clip as a single sample and processes its video frames sequentially to identify temporal smoke patterns that can be indicative of obscured fires. To this end, we introduced a new deep-learning architecture that leverages pre-trained CNN architectures and 3D convolutions to create a temporal feature map and use attention modules to predict fire regions by the sequential analysis of video frames.
We evaluated our method on a curated FLAME2 dataset where the IR videos are used to discover the ground truth fire regions and showed that our method not only improves the fire detection accuracy compared to the state-of-the-art (by achieving near 100% accuracy), but also demonstrates great success in detecting invisible and covered fire region borders (about 85% in Dice score) even when they are obscured by trees and smoke patterns.
This methodology allows utilizing firefighter drones to combat wildfires more efficiently by targeting visible and invisible fire hotspots.
Also, our method helps detect fire regions precisely without the need for IR cameras (in the test phase) which significantly reduces fire monitoring costs.
|
http://arxiv.org/abs/2306.03180v1
|
20230605184118
|
A presentation of symplectic Steinberg modules and cohomology of $\operatorname{Sp}_{2n}(\mathbb{Z})$
|
[
"Benjamin Brück",
"Peter Patzt",
"Robin J. Sroka"
] |
math.AT
|
[
"math.AT",
"math.AG",
"math.GR",
"math.GT",
"math.NT",
"11F75, 20E42, 55U10"
] |
Borel–Serre proved that the integral symplectic group Sp_2n(ℤ) is a virtual duality group of dimension n^2 and that the symplectic Steinberg module St^ω_n(ℚ) is its dualising module. This module is the top-dimensional homology of the Tits building associated to Sp_2n(ℚ).
We find a presentation of this Steinberg module and use it to show that the codimension-1 rational cohomology of Sp_2n(ℤ) vanishes for n ≥ 2, H^n^2-1(Sp_2n(ℤ); ℚ) ≅ 0.
Equivalently, the rational cohomology of the moduli stack 𝒜_n of principally polarised abelian varieties of dimension n vanishes in the same degree.
Our findings suggest a vanishing pattern for high-dimensional cohomology in degree n^2-i, similar to the one conjectured by Church–Farb–Putman for special linear groups.
§ INTRODUCTION
Let R be a commutative ring and (e⃗_1,f⃗_1, …, e⃗_n,f⃗_n) an ordered basis of R^2n. The (standard) symplectic form ω on R^2n is the antisymmetric bilinear form given by
ω(e⃗_i,e⃗_j) = ω(f⃗_i,f⃗_j) = 0 and ω(e⃗_i,f⃗_j) = δ_ij,
where δ_ij is the Kronecker delta. The group of invertible 2n× 2n matrices A with entries in R with the property
ω(Av⃗,Aw⃗) = ω(v⃗,w⃗)
for all v⃗,w⃗∈ R^2n is called the symplectic group of R and is denoted by Sp_2n(R).
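As a quick sanity check of this definition, the following sketch builds the Gram matrix J of ω in the ordered basis (e⃗_1, f⃗_1, …, e⃗_n, f⃗_n) and tests the condition ω(Av⃗, Aw⃗) = ω(v⃗, w⃗), which is equivalent to A^T J A = J; the numerical tolerance is an implementation choice.

import numpy as np

def gram_matrix(n):
    # Block-diagonal matrix with 2x2 blocks [[0, 1], [-1, 0]], one per pair (e_i, f_i).
    J = np.zeros((2 * n, 2 * n))
    for i in range(n):
        J[2 * i, 2 * i + 1], J[2 * i + 1, 2 * i] = 1, -1
    return J

def is_symplectic(A, tol=1e-9):
    n = A.shape[0] // 2
    J = gram_matrix(n)
    return np.allclose(A.T @ J @ A, J, atol=tol)

print(is_symplectic(np.eye(4)))                        # True
print(is_symplectic(np.diag([2.0, 0.5, 1.0, 1.0])))    # True: rescaling a hyperbolic pair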
These groups show up in many areas of mathematics. In particular, the simple Lie group Sp_2n(ℝ) and its lattice Sp_2n(ℤ) are ubiquitous in geometry and topology. In this paper, we consider the group cohomology of Sp_2n(ℤ) with trivial rational coefficients.
Borel–Serre <cit.> proved that Sp_2n(ℤ) has virtual cohomological dimension vcd(Sp_2n(ℤ)) = n^2, which implies that
H^j(Sp_2n(ℤ); ℚ) ≅ 0 for j > n^2.
It follows from results of Gunnells <cit.> (see Brück–Patzt–Sroka <cit.> and Brück–Santos Rego–Sroka <cit.>) that the “top-dimensional” rational cohomology of Sp_2n(ℤ) is trivial,
H^n^2(Sp_2n(ℤ); ℚ) ≅ 0 for n ≥ 1.
In this work, we show a vanishing result in “codimension-1”:
The rational cohomology of Sp_2n(ℤ) vanishes in degree n^2-1,
H^n^2-1(Sp_2n(ℤ); ℚ) ≅ 0, if n ≥ 2.
Previously, this was only known for n≤ 4 by work of Igusa <cit.>, Hain <cit.> and Hulek-Tommasi <cit.> (see <ref>); our proof applies for n≥ 3.
Our findings suggest a more general vanishing pattern in the high-dimensional rational cohomology of symplectic groups, see <ref>.
Since the rational cohomology of Sp_2n(ℤ) is isomorphic to that of the moduli stack 𝒜_n of principally polarised abelian varieties of dimension n, <ref> furthermore implies that
H^n^2-1(𝒜_n; ℚ) ≅ H^n^2-1(Sp_2n(ℤ); ℚ) ≅ 0 if n ≥ 2.
This connection to algebraic geometry and number theory is discussed in <ref>. The next two subsections explain how <ref> is related to a question of Putman <cit.>, which our main result answers (see <ref>).
§.§ Borel–Serre duality
The method we use to get a handle on this high-dimensional cohomology of Sp_2n(ℤ) is Borel–Serre duality.
Borel–Serre <cit.> proved that
H^n^2-i(Sp_2n(ℤ); M) ≅ H_i(Sp_2n(ℤ); St^ω_n(ℚ) ⊗ M)
for all rational coefficient modules M and all codimensions i. Here St^ω_n(ℚ) is the
symplectic Steinberg module that is defined in
the following way.
Let T^ω_n = T^ω_n(ℚ) be the simplicial complex whose vertices are the nonzero isotropic subspaces V of ℚ^2n (the subspaces on which the form ω vanishes), and whose k-simplices are flags V_0 ⊊ V_1 ⊊ … ⊊ V_k.
This is the rational Tits building of type 𝙲_n.
It was proved to be (n-1)-spherical by Solomon–Tits <cit.>,
so in particular, its reduced homology is concentrated in dimension n-1. This homology group is what we call the symplectic Steinberg module
St^ω_n = St^ω_n(ℚ) := H̃_n-1(T^ω_n; ℚ).
The result of Borel–Serre allows one to compute the codimension-1 cohomology of Sp_2n(ℤ) by using the isomorphism
H^n^2-1(Sp_2n(ℤ); ℚ) ≅ H_1(Sp_2n(ℤ); St^ω_n ⊗ ℚ).
In order to compute the right hand side of this equation, one needs a good understanding of the Sp_2n(ℤ)-module St^ω_n. We obtain this by finding a presentation of this module.
§.§ Presentation of St^ω
Let (v⃗_1, v⃗_1̅, …, v⃗_n, v⃗_n̅) be an ordered symplectic basis of ℚ^2n, i.e. the sequence of columns of a matrix in Sp_2n(ℚ).[This is equivalent to saying that v⃗_1, v⃗_1̅, …, v⃗_n, v⃗_n̅ form a basis of ℚ^2n and ω(v⃗_i, v⃗_i̅) = 1 for all i and all other pairings are 0.]
Consider the full subcomplex of T^ω_n on the isotropic subspaces
spanned by nonempty subsets of {v⃗_1,v⃗_1̅, …, v⃗_n,v⃗_n̅} as illustrated in <ref> for n = 2.
This subcomplex is isomorphic to the barycentric subdivision of the boundary of an n-dimensional cross-polytope and called a (symplectic) apartment of the building T^ω_n.
It induces a nonzero homology class in St^ω_n = H̃_n-1(T^ω_n; ℚ) that we denote by [v_1, v_1̅, …, v_n, v_n̅] and call a symplectic apartment class. Solomon–Tits <cit.> proved that the apartment classes generate St^ω_n.
We call an apartment that comes from an integral basis (v⃗_1, v⃗_1̅, …, v⃗_n, v⃗_n̅), i.e. the columns of a matrix in Sp_2n(ℤ), an integral symplectic apartment. Gunnells <cit.> showed that St^ω_n is generated by its integral symplectic apartment classes[In fact, Gunnells shows more generally that St^ω_n(K) is generated by integral apartments, where K is a number field with Euclidean ring of integers 𝒪_K.]; see <cit.> for an alternative proof.
Our following <ref> gives a presentation of St^ω_n in terms of this generating set.
To state it, consider the index set {1, 1̅, …, n, n̅}, where the bar is an involution (a̅̅ = a). We denote by W_n the group of all bijections π of this set such that π(a̅) = π(a)̅. This is the group of signed permutations, the Weyl group associated to Sp_2n. It comes with a standard generating set S of simple reflections and we write len_S(π) for the word length of π ∈ W_n with respect to S (for more details, see <ref> et seq.).
The symplectic Steinberg module St^ω_n(ℚ) has the following presentation as an abelian group:
Generators: Formal symbols [v_1, v_1̅, …, v_n, v_n̅], where (v_1, v_1̅, …, v_n, v_n̅) is an ordered set of lines in ℚ^2n such that, for some choice of v⃗_i ∈ ℚ^2n with v_i = ⟨v⃗_i⟩, i ∈ {1, 1̅, …, n, n̅}, the tuple (v⃗_1, v⃗_1̅, …, v⃗_n, v⃗_n̅) is a symplectic basis of ℚ^2n.
Relations: For each symplectic basis (v⃗_1, v⃗_1̅, …, v⃗_n, v⃗_n̅) of ℚ^2n:
* [v_1, v_1̅, …, v_n, v_n̅] = (-1)^len_S(π) · [v_π(1), v_π(1̅), …, v_π(n), v_π(n̅)] for all π ∈ W_n;
* [v_1, v_1̅, …, v_n, v_n̅] = [v_1, ⟨v⃗_1 + v⃗_1̅⟩, …, v_n, v_n̅] + [⟨v⃗_1 + v⃗_1̅⟩, v_1̅, …, v_n, v_n̅];
* [v_1, v_1̅, …, v_n, v_n̅] = [v_1, ⟨v⃗_1̅ - v⃗_2̅⟩, ⟨v⃗_1 + v⃗_2⟩, v_2̅, …, v_n, v_n̅] + [⟨v⃗_1̅ - v⃗_2̅⟩, v_2, ⟨v⃗_1 + v⃗_2⟩, v_1̅, …, v_n, v_n̅].
The action of Sp_2n(ℤ) is given by g · [v_1, v_1̅, …, v_n, v_n̅] = [g(v_1), g(v_1̅), …, g(v_n), g(v_n̅)] for g ∈ Sp_2n(ℤ).
<ref> is the main result of this article and, using Borel–Serre duality, <ref> is a rather direct consequence of it. <ref> answers a question of Putman <cit.> by establishing a “Bykovskiĭ presentation” for the symplectic Steinberg module: Relations (a) and (b) of the presentation are visibly analogous to the relations in Bykovskiĭ's presentation for the Steinberg module of SL_n(ℤ) <cit.> (see also <cit.>). However, a new type of relation – Relation (c) – shows up in the symplectic setting. Ongoing work of Brück–Santos Rego–Sroka investigates these similarities and differences in the broader framework of Chevalley groups.
§.§ Connection to other work
§.§.§ Church–Farb–Putman Conjecture for SL_n(ℤ)
Church–Farb–Putman <cit.> conjectured that
H^\binom{n}{2}-i(SL_n(ℤ); ℚ) ≅ 0 for n ≥ i+2,
where \binom{n}{2} = vcd(SL_n(ℤ)).
This conjecture proposes a generalisation of a result by Lee–Szczarba <cit.> that states
H^\binom{n}{2}(SL_n(ℤ); ℚ) ≅ 0 for n ≥ 2.
Later the conjecture was also proved for i=1 by Church–Putman <cit.> and for i=2 by Brück–Miller–Patzt–Sroka–Wilson <cit.>.
Motivated by this conjecture, there have been numerous results that have significantly improved the understanding of the high-dimensional cohomology of SL_n(R) for different rings R <cit.>. The motivation for this article is to investigate the following Sp_2n(ℤ)-analogue of Church–Farb–Putman's conjecture.
Does it hold that H^n^2-i(Sp_2n(ℤ); ℚ) ≅ 0, if n ≥ i+1?
In a lot of ways, the story for SL_n(ℤ) is parallel to the one of Sp_2n(ℤ): The duality result of Borel–Serre also applies to this group and yields an isomorphism
H^\binom{n}{2}-i(SL_n(ℤ); ℚ) ≅ H_i(SL_n(ℤ); St_n(ℚ) ⊗ ℚ).
This gives access to high-dimensional cohomology via a partial resolution of the Steinberg module St_n(ℚ) associated to the special linear group, that is the top reduced homology of the corresponding Tits building of type 𝙰_n-1 over ℚ.
Lee–Szczarba's result <cit.> about the vanishing of H^\binom{n}{2}(SL_n(ℤ); ℚ) can also be deduced from the fact that
St_n(ℚ) is generated by integral apartments, which was proved by
Ash–Rudolph <cit.>.
Church–Putman <cit.> used a particularly nice presentation of St_n(ℚ)
to show that H^\binom{n}{2}-1(SL_n(ℤ); ℚ) ≅ 0. This presentation was first established by Bykovskiĭ <cit.>; Church–Putman <cit.> gave a new proof using simplicial complexes. Similarly, Brück–Miller–Patzt–Sroka–Wilson <cit.> obtained a three-term resolution of St_n(ℚ) to show the vanishing result in codimension i = 2.
In that sense, our result can be seen as an Sp_2n-counterpart of <cit.>.
In technical terms however, the different combinatorics of special linear and symplectic groups lead to a significantly higher complexity in the present article (see <ref>).
§.§.§ Computations for small n, and the moduli spaces 𝒜_n and ℳ_n
For small n, the rational cohomology of Sp_2n(ℤ) was previously computed. We summarise in <ref> what is known for n ≤ 4.
In particular, the answer to <ref> is known to be affirmative in this case,
H^n^2-i(Sp_2n(ℤ); ℚ) ≅ 0 for n ≥ i+1 if n ≤ 4.
Using methods different from the ones used in this work, the rational cohomology groups H^*(Sp_2n(ℤ); ℚ) can be studied by recognising them as the rational cohomology groups of a finite dimensional quasiprojective variety: The coarse moduli space of principally polarised abelian varieties of dimension n is the quotient 𝒜_n^coarse := Sp_2n(ℤ) \ ℍ^n
of the Siegel upper half plane
ℍ^n = { A ∈ Mat_n×n(ℂ) | A^T = A and Im(A) > 0 }
of symmetric complex n× n matrices whose imaginary part is positive definite.
As 2n acts on ℍ^n properly discontinuously with finite stabiliser and ℍ^n
is contractible, there is an isomorphism
H^j(𝒜_n^coarse ; ) ≅ H^j(2n;) ≅ H^j( 𝒜_n; ).
The moduli space 𝒜_n^coarse admits a weight filtration.
Brandt–Bruce–Chan–Melo–Moreland–Wolfe <cit.> computed the top-weight rational cohomology for n≤ 7. In particular, they showed that
Gr^W_{n^2+n} H^{n^2-i}(𝒜_n; ℚ) ≅ 0 for n ≥ i+1 if n ≤ 7,
which provides some evidence for <ref>, and that
H^{n^2-n}(𝒜_n; ℚ) ≅ H^{n^2-n}(Sp_2n(ℤ); ℚ) ≇ 0 if n ≤ 7.
This leads us to the following refinement of <ref>.
Is j = n^2-n the largest degree for which H^j(Sp_2n(ℤ); ℚ) = H^j(𝒜_n; ℚ) is nonzero?
<ref> is a variation of <cit.>, which asks about triviality and non-triviality of certain graded pieces of these cohomology groups and relates this to stable cohomology classes in the Satake or Baily–Borel compactification of 𝒜_n^coarse.
If <ref> had a positive answer, it would show a stark difference between the high-dimensional cohomology of 𝒜_n
and ℳ_n, the moduli space of genus-n curves: The rational cohomology of ℳ_n is the same as that of the mapping class group MCG(Σ_n) of a genus-n surface, and Harer <cit.> showed that its virtual cohomological dimension is given by vcd(MCG(Σ_n)) = 4n-5. Church–Farb–Putman <cit.> conjectured a vanishing pattern similar to <ref> for these cohomology groups. While Church–Farb–Putman <cit.> and Morita–Sakasai–Suzuki <cit.> proved that the rational cohomology of ℳ_n vanishes in its virtual cohomological dimension, Chan–Galatius–Payne <cit.> and subsequently Payne–Willwacher <cit.> found many non-trivial classes in degrees close to the vcd[Their results show that H^{4n-5-i}(ℳ_n; ℚ) is non-trivial for i ∈ {1, 3, 4, 6, 7, 9, 10, 11, 13, 14} and n sufficiently large. See <cit.>.], thereby disproving the conjecture.
From a group theoretic perspective, i.e. comparing Sp_2n(ℤ) to MCG(Σ_n), we note that the virtual cohomological dimension of Sp_2n(ℤ) grows quadratically, vcd(Sp_2n(ℤ)) = n^2 <cit.>, while that of MCG(Σ_n), vcd(MCG(Σ_n)) = 4n-5 <cit.>, grows linearly in n. The difference in the behaviour of the codimension-1 cohomology, i.e. comparing <ref> and the non-vanishing result of <cit.>, suggests that cohomologically Sp_2n(ℤ) is “closer” to the arithmetic group SL_n(ℤ) than to the mapping class group MCG(Σ_n). This might not be as surprising, given that SL_n(ℤ) and Sp_2n(ℤ) are both examples of Chevalley groups over Euclidean rings of integers. Brück–Santos Rego–Sroka
studied the top-dimensional rational cohomology of such groups. See
<cit.> for how both the conjecture of
Church–Farb–Putman <ref> and
<ref> might fit into a more
general framework. In codimension-1, this framework is currently investigated by Brück–Santos Rego–Sroka.
§.§ Simplicial complexes
The key step in our proof of <ref> is to consider certain simplicial complexes and show that they are highly connected. These complexes encode the structure of integral apartments in the building T^ω_n and the relations between them. The proof of the following connectivity result takes up a majority of the paper and constitutes its main technical achievement.
For all n ≥ 1, the simplicial complex is n-connected.
We extract the presentation of _n() from this connectivity result.
The construction of is rather involved, but the intuition is that contains “polyhedral cells” that encode the relations in <ref>.
These are depicted in <ref>. To prove <ref>, we build on connectivity results about similar complexes that already appeared in the literature <cit.> (see <ref> for a detailed overview).
§.§.§ Comparison with previously studied complexes and related works
The idea of using highly connected simplicial complexes to study Steinberg modules is due to Church–Farb–Putman <cit.>. We draw inspiration from this and, in particular, from Church–Putman's approach <cit.> to Bykovskiĭ's presentation for the Steinberg modules of special linear groups.
Nevertheless, obtaining the presentation in <ref> from the connectivity result in <ref> is more difficult here than in the setting of special linear groups and uses inductive methods (see <ref>).
Our proof of <ref> relies on connectivity results that have been established in the special linear group setting <cit.>. We use these results by considering embeddings of n and 2 in 2n and of 2n into 2n.[These embeddings e.g. show up in <ref>, <ref>, <ref>, <ref>, <ref>.]
But in comparison to previous works, several new difficulties arise in the context of the present article:
The first difficulty is the sheer complexity of our complex . Similar to , the complex _n studied in <cit.> is a complex whose simplices are given by certain admissible sets of lines in ℤ^n (<ref>). However, while _n has only two types of such admissible sets, the complex has twelve (<ref>).
One reason for this is that contains the complex _n as subcomplexes. _n was used in <cit.> to study the codimension-2 cohomology of SL_n(ℤ); nevertheless only gives access to the codimension-1 cohomology of the symplectic group Sp_2n(ℤ).
Another difference to the setting of SL_n is that while simplicial structures show up naturally in the type-𝙰 combinatorics of the special linear group, this is less true for the type-𝙲 combinatorics of the symplectic group. While we decided to still work with simplicial complexes in this text, other polyhedral cell types show up naturally in our complexes (see e.g. the “prism” on the right of <ref> and <ref>). Taking this into account leads to another increase in complexity.
A conceptual difficulty for the symplectic group is also the question of how to find suitable highly connected 2n-complexes:
The simplicial complexes _n and _n used in <cit.> and <cit.> to study the codimension-1 and -2 cohomology of SL_n(ℤ) are very similar to Voronoi tessellations of symmetric spaces (see Elbaz-Vincent–Gangl–Soulé <cit.>).
Such tessellations are in general not available for the symplectic group. In low rank however, MacPherson–McConnell <cit.> gave a CW-structure for an equivariant retract of the symmetric space ℍ_2 = Sp_4(ℝ)/U(2).
Our simplicial complex [2] has some similarity to their complex.[In particular, our minimal σ^2 simplex (see <ref> and <ref>) corresponds to their “red square” and our prism (see <ref> and right-hand side <ref>) made from three individual simplices corresponds to their “hexagon”.]
§.§.§ Overview: Constructing simplicial complexes for studying St^ω
This subsection gives a rough overview of how the simplicial complexes that appear in this article are constructed.
We begin with the following simplicial complex that is defined in detail in <ref>. Let the vertices of be the lines in ^2n that are spanned by primitive vectors v⃗∈^2n (i.e. the greatest common divisor of the entries of v⃗ is 1). A k-simplex is formed by (k+1) such lines if they span a rank-(k+1) direct summand of ^2n that is isotropic with respect to ω. This simplicial complex was first considered by Putman <cit.> and closely related complexes play a key role in work of van der Kallen–Looijenga <cit.>.
The barycentric subdivision of maps simplicially to T^ω_n by taking the ℚ-spans of the involved summands of ℤ^2n. This induces a surjective map
H_n-1() ↠H_n-1(T^ω_n) = ^ω_n
because every integral apartment has a preimage in (see <ref>). But has certain (n-1)-dimensional homology classes that are sent to zero in ^ω_n. These actually come from apartment classes of the Steinberg module of the integral special linear group. We fill these homology classes by augmenting with additional simplices to obtain a simplicial complex ^(0) (see <ref>) whose (n-1)-dimensional homology is isomorphic to ^ω_n. The construction of ^(0) is related to certain complexes , _n and _n that were introduced by Putman <cit.>, Church–Putman <cit.>, and Brück–Miller–Patzt–Sroka–Wilson <cit.> respectively, see <ref>.
We then glue “non-isotropic” simplices to ^(0) that are allowed to contain one symplectic pair each. This yields a simplicial complex ^(1) (see <ref>) where all of the apartments from are filled in (see <ref>). In fact, the relative homology H_n(^(1),^(0)) surjects onto ^ω_n, which (re)proves that the integral symplectic apartment classes form a generating set. This uses that ^(1) is (n-1)-connected, whereas ^(0) and are only (n-2)-connected. The complex ^(1) shares many features with a simplicial complex defined by Putman <cit.> and a related complex studied in recent work by Brück–Sroka <cit.>, see <ref>.
To now get a presentation of ^ω_n, we have to understand the different ways in which we filled the apartments (see <ref>) and add additional simplices to ^(1) that encode the relations between them (see <ref>). This finally leads us to the definition of a complex ^(2) (see <ref>), which we prove to be n-connected. <ref> is then extracted from the nested triple of highly connected complexes (^(2), ^(1), ^(0)) by an induction argument.
The complex appearing in <ref> (see <ref>) is a subcomplex of ^(2) that has only those augmentations that we need to prove its high connectivity. We use <ref> to deduce the above connectivity results.
§.§ Structure of the paper
We begin the article with preliminaries on symplectic linear algebra and simplicial complexes in <ref>.
<ref> to <ref> essentially all aim at proving <ref>.
This starts in <ref>, where we define and several other simplicial complexes.
In <ref>, we describe the links, i.e. the local structure of these complexes.
<ref> first gives background material on combinatorial manifolds and then describes the combinatorial structure of the maps that we use to study the connectivity properties of the complexes in this paper.
In <ref>, we explain how to reduce the complexity (measured by a natural number that we call the rank) of maps ϕ S→ from combinatorial spheres to the complex . This involves constructing retractions on certain well-behaved subcomplexes of links of vertices in these complexes. <ref> is a collection of results that show that many subcomplexes of are highly connected. These are mostly drawn from existing literature or straight-forward consequences of such.
Finally, <ref> describes an induction that is the core of the proof of <ref>. It relies on an assumption on the link structure in maps ϕ S→ from combinatorial spheres to .
<ref> shows that this assumption can always be made. It is rather technical and concerned with manipulations of the combinatorial structure of such maps.
In <ref>, we infer further connectivity results from <ref>. These are then used to prove <ref> by obtaining a partial resolution of the symplectic Steinberg module _n.
<ref> shows that, using Borel–Serre duality, <ref> is a straight-forward consequence of the partial resolution given by <ref>.
The appendix contains <ref>, which gives an overview of the different complexes in this work.
§.§.§ Your personal reading guide
Admittedly, this paper is long. The following suggestions provide shortcuts to the main results.
The codimension-0 case (16 pages)
The short article <cit.> gives a geometric proof of Gunnells' result that the symplectic Steinberg module is generated by integral apartment classes. It showcases many of the techniques used in the present article and could be helpful to get a first impression.
Fast track to <ref> (∼6 pages)
If you would like to see as fast as possible why the codimension-1 cohomology of 2n, or equivalently _n, vanishes, you should first glance at the symplectic linear algebra notation in <ref>. Then look at <ref>, which defines the symplectic building T^ω_n. This should already allow you to understand the partial resolution of the Steinberg module _n = H_n-1(T^ω_n;) obtained in <ref> (the relevant modules and morphisms are defined in the text starting after <ref> and ending with <ref>). Accepting that the sequence from <ref> is exact, you can proceed to <ref>, where we show that this partial resolution together with Borel–Serre duality imply the cohomology vanishing. This is a standard homological algebra argument.
Fast track to <ref> (∼23 pages)
This could be the right option if you would like to understand how we obtain a presentation of the Steinberg module _n but avoid the explicit combinatorial topology in the proof of <ref>.
Start by reading the basics about symplectic linear algebra in <ref> and about simplicial complexes in <ref>. Then jump to <ref>, which contains the proof of <ref>. In <ref>, the simplicial complexes ^(i) that were already mentioned in <ref> above are defined. For these definitions, you will need to have a look at the simplex types given in <ref>, <ref> and <ref>, but not more from <ref>. In <ref>, we show that the complexes ^(i) are highly connected. This is where we use the work from Sections <ref> to <ref>, in particular <ref>. But if you take these on faith, the rest of <ref> is mostly algebraic.
Fast track to <ref> (∼27 pages)
If you would like to get an idea of what the simplicial complexes in this article look like and why they are highly connected, you could do the following.
Start with the preliminaries on symplectic linear algebra in <ref> and have a look at the conventions about simplicial complexes in <ref>.
Continue with the definition of the simplicial complexes in <ref>; there are quite a few types of augmentations in these, <ref> and <ref> might help you to keep an overview.
Then skim through <ref> and at the very least notice that we define a notion of regularity for maps (<ref>, <ref>).
In <ref>, it is sufficient for now to read the introduction and in particular look at <ref>.
From there, proceed to <ref>, which contains an induction that is the heart of the connectivity argument proving <ref>. The induction beginning is in <ref>. It uses the fact that Sp_2(ℤ) = SL_2(ℤ) and relies on the connectivity of complexes _n, _n and _n that were studied in the context of the special linear group. The induction step is in <ref> and is a “cut out” argument for bad simplices. The regularity notions are crucial here, so you might need to go back to <ref> to get a better understanding of this regularity.
The induction step also relies on the fact that the relevant homotopy classes can be represented by maps having an “isolation” property (<ref>). The proof that this is true requires some technical work in <ref>, but if you are willing to take it on faith, you have already reached your goal.
§.§ Acknowledgements
We would like to thank Paul Gunnells, Jeremy Miller, Andrew Putman and Dan Yasaki for helpful conversations.
We are grateful for comments on a preliminary version by Jeremy Miller.
We thank Matthew Cordes for language advice and Samir Canning for comments about the connection to ℳ_g. Robin J. Sroka would like to thank his PhD advisor Nathalie Wahl for many fruitful and clarifying conversations.
All three authors were supported by the Danish National Research Foundation (DNRF92, DNRF151).
Peter Patzt was supported in part by a Simons collaboration grant and the European Research Council under the European Union’s Seventh Framework Programme ERC Grant agreement ERC StG 716424 - CASe, PI Karim Adiprasito.
Robin J. Sroka was supported by the European Research Council (ERC grant agreement No.772960), by NSERC Discovery Grant A4000 as a Postdoctoral Fellow at McMaster University, and by the Swedish Research Council under grant no. 2016-06596 while the author was in residence at Institut Mittag-Leffler in Djursholm, Sweden during the semester Higher algebraic structures in algebra, topology and geometry.
§ PRELIMINARIES
§.§ Symplectic linear algebra
We begin with setting up the notation for linear algebra over the integers and recalling some basic facts about symplectic forms in this context. More details can be found in textbooks such as <cit.>.
Let ^2n be equipped with the standard symplectic form ω = ω_n. We denote its standard symplectic basis by {e⃗_1, … , e⃗_n, f⃗_1, …, f⃗_n} = {e⃗_1, f⃗_1,… , e⃗_n,f⃗_n}. That is, ^2n = ⟨e⃗_1, f⃗_1,… , e⃗_n,f⃗_n⟩ and for i,j ∈{1, …, n}, we have
ω(e⃗_i, e⃗_j) = ω(f⃗_i, f⃗_j) = 0 , ω(e⃗_i, f⃗_j) =ω(f⃗_j, e⃗_i) = 0 for i ≠ j, ω(e⃗_i, f⃗_i) = - ω(f⃗_i, e⃗_i) = 1.
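Equivalently, with respect to the ordered basis {e⃗_1, f⃗_1, …, e⃗_n, f⃗_n}, these relations say that ω(x⃗, y⃗) = x⃗^T J_n y⃗, where we write J_n for the block-diagonal matrix with n copies of the 2×2 block with rows (0, 1) and (-1, 0) on the diagonal. For n = 1 this is the familiar determinant form ω(x⃗, y⃗) = x_1 y_2 - x_2 y_1 on ℤ^2; in particular, a matrix in GL_2(ℤ) preserves ω_1 if and only if it has determinant 1, so Sp_2(ℤ) = SL_2(ℤ).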
More generally, we call a set {v⃗_1, w⃗_1,… , v⃗_n,w⃗_n}⊂^2n a symplectic basis if there is a symplectic (i.e. form-preserving) automorphism ^2n→^2n that sends v⃗_i to e⃗_i and w⃗_i to f⃗_i for all 1≤ i ≤ n.
If V ⊆^2n is a submodule, we define the symplectic complement of V as
V^⊥ = { v⃗' ∈^2n | ω(v⃗', v⃗) = 0 for all v⃗∈ V }.
Note that ℤ^2n is canonically embedded in ℚ^2n and that the form ω on ℤ^2n is the restriction of a corresponding symplectic form on ℚ^2n. Almost all notions defined in this section have counterparts in the rational setting. However, we mostly restrict ourselves to the setting of ℤ^2n in this article. In particular, if we talk about the “span” ⟨v⃗_1, …, v⃗_m⟩ of a set of vectors, we mean its ℤ-span if we do not explicitly say something else.
Let V ⊆^2n be a submodule. Then we call V
* a (direct) summand if ^2n = V ⊕ W for some submodule W⊆^2n;
* an isotropic summand if V is a direct summand and V⊆ V^⊥, that is ω(v⃗_1, v⃗_2) = 0 for all v⃗_1, v⃗_2 ∈ V;
* a symplectic summand if V is a direct summand and there is a form-preserving isomorphism (V,ω|_V)≅ (^2k,ω_k), where ω_k is the standard symplectic form on ^2k. The number k is then called the genus of the symplectic summand.
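To illustrate these notions, consider ℤ^4 with symplectic basis {e⃗_1, e⃗_2, f⃗_1, f⃗_2}: the submodule ⟨e⃗_1, e⃗_2⟩ is an isotropic summand, ⟨e⃗_1, f⃗_1⟩ is a symplectic summand of genus 1 with symplectic complement ⟨e⃗_1, f⃗_1⟩^⊥ = ⟨e⃗_2, f⃗_2⟩, and ⟨2e⃗_1⟩ is an isotropic submodule that is not a direct summand, since the quotient ℤ^4/⟨2e⃗_1⟩ has 2-torsion.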
The following two lemmas contain standard facts about direct summands. Their (elementary) proofs can e.g. be found in <cit.>.
If V,W⊆^2n are direct summands and V⊆ W, then V is a direct summand of W.
Let V be an isotropic subspace of ℚ^2n. Then V ∩ ℤ^2n ⊆ ℤ^2n is an isotropic summand of ℤ^2n.
In fact, in the definition of symplectic summands, the condition that V be a direct summand is redundant. This follows from the following lemma, which is a consequence of <cit.>.
Let V⊆^2n be a submodule such that (V,ω|_V)≅ (^2k,ω_k). Then V is a (symplectic) summand. Furthermore, V^⊥ is a symplectic summand of genus n-k and there is a symplectic isomorphism
(V⊕ V^⊥, ω|_V⊕ω|_V^⊥) → (^2n, ω).
In this article, rank-1 summands of ^2n, which we sometimes refer to as lines, are especially important. They form the vertices of the simplicial complexes that we consider later on.
A vector v⃗∈^2n is called primitive if its ℤ-span is a rank-1 summand of ^2n.[If we express v⃗ as a row vector v⃗ = (x_1, …, x_2n), this is equivalent to saying that gcd(x_1, …, x_2n ) = 1.] In this case, we write v = ⟨v⃗⟩ for this summand. Similarly, given a rank-1 summand v of ^2n, we write v⃗ for some choice of primitive vector in v. There are only two choices for this. The other choice is -v⃗.
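For example, the vector v⃗ = 3e⃗_1 + 2f⃗_1 ∈ ℤ^4 is primitive and v = ⟨v⃗⟩ is a rank-1 summand: since the vectors 3e⃗_1 + 2f⃗_1 and e⃗_1 + f⃗_1 span ⟨e⃗_1, f⃗_1⟩ (their coordinate matrix has determinant 1), we get ℤ^4 = ⟨v⃗⟩ ⊕ ⟨e⃗_1 + f⃗_1, e⃗_2, f⃗_2⟩. In contrast, w⃗ = 2e⃗_1 + 4f⃗_1 is not primitive and ⟨w⃗⟩ is not a direct summand of ℤ^4.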
Using this notation, we write
ω(v_1,v_2) = ± r
if r = ω(v⃗_1, v⃗_2). If r is 0 here, we also drop the ±-sign.
The following is a consequence of <ref>, see <cit.>.
Let v⃗, w⃗∈^2n. If ω(v⃗, w⃗)∈ -1, 1, then both v⃗ and w⃗ are primitive and ⟨v⃗, w⃗⟩ is a symplectic summand of ^2n.
We call vectors v⃗, w⃗ satisfying the conditions of <ref> a symplectic pair. We also use this formulation for the pair of corresponding lines v, w.
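To illustrate the preceding lemma, take v⃗ = 3e⃗_1 + 2f⃗_1 and w⃗ = e⃗_1 + f⃗_1 in ℤ^4 as above. Then ω(v⃗, w⃗) = 3·ω(e⃗_1, f⃗_1) + 2·ω(f⃗_1, e⃗_1) = 3 - 2 = 1, so {v⃗, w⃗} is a symplectic pair and ⟨v⃗, w⃗⟩ = ⟨e⃗_1, f⃗_1⟩ is a symplectic summand of ℤ^4 of genus 1.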
Roughly speaking, the next lemma says that a set of vectors that spans a direct summand and looks like a partial symplectic basis can actually be extended to a symplectic basis. It is a standard fact that can e.g. be proved using <cit.>.
Let V⊆^2n be a direct summand and
B = v⃗_1, …, v⃗_m, v⃗_m+1, w⃗_m+1, …, v⃗_m+k, w⃗_m+k
a basis of V such that for all i,j∈ 1, …, n
ω(v⃗_i, v⃗_j) = ω(w⃗_i, w⃗_j) = 0 , ω(v⃗_i, w⃗_j) =ω(w⃗_j, v⃗_i) = 0 for i ≠ j, ω(v⃗_i, w⃗_i) = - ω(w⃗_i, v⃗_i) =1.
Then B can be extended to a symplectic basis of ^2n.
We in particular use <ref> for the case where B is either a symplectic pair or the basis of an isotropic summand V. We also need the following strengthening of <ref>.
Let e⃗∈^2(m+n) such that e⃗∉⟨e⃗_1, …, e⃗_m ⟩ and ω(e⃗,e⃗_i) = 0 for all 1≤ i ≤ m. Then there are e⃗' ∈^2(m+n) and a ∈ such that
* ω(e⃗',e⃗_i) = 0 for all 1≤ i ≤ m,
* e⃗_1, …, e⃗_m, e⃗' can be extended to a symplectic basis of ^2(m+n) and
* for all v⃗∈^2(m+n) such that ω(v⃗, e⃗_i) = 0 for all 1≤ i ≤ m, we have ω(e⃗,v⃗) = a ω(e⃗', v⃗).
As e⃗ ∉ ⟨e⃗_1, …, e⃗_m⟩ and ω(e⃗, e⃗_i) = 0 for all 1 ≤ i ≤ m, the ℚ-span ⟨e⃗_1, …, e⃗_m, e⃗⟩_ℚ is an (m+1)-dimensional isotropic subspace of ℚ^2(m+n). It contains ⟨e⃗_1, …, e⃗_m⟩_ℚ as an m-dimensional subspace. This implies that
V := ⟨e⃗_1, …, e⃗_m, e⃗⟩_ℚ ∩ ℤ^2(m+n)
is a rank-(m+1) isotropic direct summand of ℤ^2(m+n) that contains
⟨e⃗_1, …, e⃗_m⟩_ℚ ∩ ℤ^2(m+n) = ⟨e⃗_1, …, e⃗_m⟩ ⊆ V
as a direct summand of rank m.
Hence, we can choose e⃗' ∈ V such that e⃗_1, …,e⃗_m, e⃗' is basis (over ) for V. We claim that e⃗' satisfies the desired properties.
To see this, first observe that e⃗_1, …,e⃗_m, e⃗' is a basis for the isotropic direct summand V⊆^2(m+n). Hence by <ref>, it can be extended to a symplectic basis of ^2(m+n).
Now let v⃗∈^2(m+n) such that ω(v⃗, e⃗_i) = 0 for all 1≤ i ≤ m.
As e⃗ is contained in V, we can write it as
e⃗ = ∑_i=1^m a_i e⃗_i +a e⃗'.
Using that ω(v⃗, e⃗_i) = 0 for all 1≤ i ≤ m, we have
ω(e⃗, v⃗) = ω(∑_i=1^m a_i e⃗_i +a e⃗', v⃗) = ∑_i=1^m a_i ω(e⃗_i, v⃗) + a ω(e⃗', v⃗) = a ω(e⃗', v⃗).
§.§ Simplicial complexes
Most of this article is concerned with studying connectivity properties of simplicial complexes.
Usually, we consider a simplicial complex X as a collection of subsets, the set of simplices, of a set (X), the vertex set of X, such that the set of simplices is closed under passing to subsets.
However, we often do not distinguish between X and its geometric realisation |X| if what is meant seems clear from the context. In particular, we associate topological properties to simplicial complexes and e.g. say that X is n-connected if |X| is, i.e. if π_k(|X|) is trivial for all k≤ n. We use the convention that (-1)-connected means that X is non-empty.
If X and Y are simplicial complexes, we denote their simplicial join by X∗ Y.
If Δ is a simplex of X, we write _X(Δ) for the link of Δ in X, i.e. the complex consisting of all simplices Θ in X such that Θ∩Δ = ∅ and Θ∪Δ is a simplex in X. We write _X(Δ) = Δ∗_X(Δ) for the star of Δ.
We say that Θ is a face of the simplex Δ if Θ⊆Δ; note that this containment need not be proper, so in particular every simplex is a face of itself.
If X is a simplicial complex and Y⊆ X is a subcomplex, then we denote by X∖ Y the set of simplices of X that are not contained in Y. Note that this set is a priori not a simplicial complex.
A simplicial complex X is (homotopy) Cohen–Macaulay of dimension n if it has dimension n, is (n-1)-connected and for every simplex Θ, the link _X(Θ) is (n-(Θ)-2)-connected.
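For instance, the boundary ∂Δ^n of an n-simplex is Cohen–Macaulay of dimension n-1: it is homeomorphic to S^{n-1} and hence (n-2)-connected, and the link of any k-dimensional simplex Θ is the boundary of the (n-k-1)-simplex spanned by the remaining vertices, i.e. an (n-k-2)-sphere, which is ((n-1) - k - 2)-connected.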
§.§.§ Standard link argument
Recall that a map f X → Y between topological spaces is n-connected if the induced map on homotopy groups π_k (f)π_k(X) →π_k(Y)
is an isomorphism for all k< n and a surjection for k=n.
The following results are contained in <cit.> and are frequently used in this article. Let X be a simplicial complex and let Y ⊆ X be a subcomplex.
A set of simplices B ⊂ X∖ Y is a set of bad simplices if the following two conditions hold for any simplex Δ of X:
* If no face of Δ is contained in B, then Δ is contained in Y.
* If two faces Θ_1 and Θ_2 of Δ are in B, then Θ_1 ∪Θ_2 is also in B.
Recall that also in the above definition, a “face” need not be proper.
Let B be a set of bad simplices and assume that Δ∈ B is a bad simplex. Then we let ^good_X(Δ) denote the subcomplex of _X(Δ) containing all simplices Θ∈_X(Δ) that satisfy the following condition:
Any bad face of Θ∗Δ is contained in Δ.
Let X be a simplicial complex and let Y ⊆ X be a subcomplex. Assume that X∖ Y has a set of bad simplices B. Then:
* If Y is n-connected and ^good_X(Δ) is (n-(Δ)-1)-connected for all Δ∈ B, then X is n-connected.
* If ^good_X(Δ) is (n-(Δ)-1)-connected for all Δ∈ B, then the inclusion Y↪ X is n-connected.
<ref> is exactly <cit.>.
<ref> is just a slightly stronger version of <cit.> and also quickly follows from the results of Hatcher and Vogtmann: By <cit.>, the assumptions here imply that the relative homotopy groups π_k(X,Y) vanish for all k≤ n. The claim then follows from the long exact sequence of homotopy groups.
§.§.§ Zeeman's relative simplicial approximation theorem
We use the following version of simplicial approximation, which is due to Zeeman.
Let K, M be finite simplicial complexes and L a subcomplex of K. Let f |K| → |M| be a continuous map such that the restriction f|_L is a simplicial map from L to M. Then there exists a subdivision K' of K containing L as a subcomplex and a simplicial map g K' → M such that g|_L = f|_L and g is homotopic to f keeping L fixed.
§ DEFINITION OF COMPLEXES AND AUGMENTATIONS
The goal of this section is to define the complex appearing in the main technical result of this work, <ref>. We start by collecting the definitions of the simplicial complexes _n, _n and _n, that play an important role in recent work on special linear groups <cit.>, as well as the simplicial complexes , and , that have been used to study Torelli and symplectic groups <cit.>. We then introduce the new complex . Lastly, we define relative versions of all these spaces, e.g. _n^m, , and , that depend on two non-negative integers m and n.
<ref> gives an overview of the definitions and <ref> shows the connectivity properties of these complexes that are studied in later sections.
§.§ The complexes I, IA, B, and BA
Let n be the set
n = { v ⊆^2n | v is a rank-1 summand of ^2n }.
This is already the vertex set of the complex that we want to show to be highly connected in <ref>. We now start working towards its definition.
Given a line v ∈n, recall that we denote by v⃗ the choice of one of the two primitive vectors {v⃗, -v⃗} contained in v.
Let n be as in <ref>. A subset Δ = {v_0, …, v_k}⊂n
of k+1 lines[Note that Δ is a set without a preferred ordering, so all the statements here as well as in <ref> and <ref> are to be understood for an appropriate choice of indices for the v_i.] v_i = ⟨v⃗_i ⟩ is called
* a standard simplex, if ⟨v⃗_i | 0 ≤ i ≤ k ⟩ is an isotropic rank-(k+1) summand of ^2n;
* a 2-additive simplex, if k ≥ 2, v⃗_0 = ±v⃗_1 ±v⃗_2,[Here as well as in <ref> and <ref>, the ± in these equations are to be understood as “for some choice of signs”.]
and Δ∖{v_0} is a standard simplex;
* a σ simplex, if k ≥ 1, ω(v_k, v_k-1) = ± 1, ω(v_k, v_i) = 0 for i ≠ k-1
and Δ∖{v_k} is a standard simplex;
* a mixed simplex, if k ≥ 3, Δ∖{v_0} is a σ simplex and Δ∖{v_k} is a 2-additive simplex.
Let {e⃗_1, …, e⃗_n, f⃗_1, …, f⃗_n } be the standard symplectic basis of ^2n and let 1≤≤ n.
* e_1,…, e_ is a standard simplex;
* ⟨e⃗_1 + e⃗_2 ⟩, e_1, e_2, …, e_ is a 2-additive simplex if ≥ 2;
* e_1, …, e_, f_ is a σ simplex;
* ⟨e⃗_1 + e⃗_2 ⟩, e_1, e_2, …, e_, f_ is a mixed simplex if ≥ 3.
We are now ready to introduce the simplicial complexes , , and . The first three of these were used by Putman to study the Torelli group <cit.>. A complex that is closely related to also appears in work of van der Kallen–Looijenga <cit.>. The complex was used in <cit.> and <cit.> to study the cohomology of symplectic groups.
The simplicial complexes , , and have n as their vertex set and
* the simplices of are all standard;
* the simplices of are all either standard or 2-additive;
* the simplices of are all either standard, 2-additive or σ;
* the simplices of are all either standard, 2-additive, σ or mixed.
If ⊆^2n is a symplectic summand and X ∈{[], [], [], []}, we let X() be the full subcomplex of X_n on the set of rank-1 summands of that we denote by n∩.
Next, we introduce the complexes _n and _n. These complexes were defined and studied by Church–Putman in <cit.>. Complexes that are closely related to _n appear in <cit.>.
Let V be an isotropic summand of ^2n.
* Let (V) be the simplicial complex with vertex set
n∩ V = { v ⊆ V | v is a rank-1 summand of V }
and in which all simplices are standard in the sense of <ref>.
* Let (V) be the simplicial complex on the same vertex set as (V) and in which all simplices are standard or 2-additive in the sense of <ref>.
If V = ⟨e⃗_1, …, e⃗_n ⟩⊆^2n, we write _n (V) and _n (V).
§.§ The complex BAA
The complex _n, defined below, was introduced by Brück–Miller–Patzt–Sroka–Wilson <cit.> and used to study the codimension-2 cohomology of n.
Let n be as in <ref>. A subset Δ = {v_0, …, v_k}⊂n of k+1 lines is called
* a 3-additive simplex, if k ≥ 3,
v⃗_0 = ±v⃗_1 ±v⃗_2 ±v⃗_3
and Δ∖{v_0} is a standard simplex;
* a double-triple simplex, if k ≥ 4, v⃗_0 = ±v⃗_2 ±v⃗_3, v⃗_1 = ±v⃗_2 ±v⃗_4
and Δ∖{v_0, v_1} is a standard simplex;
* a double-double simplex, if k ≥ 5, v⃗_0 = ±v⃗_2 ±v⃗_3, v⃗_1 = ±v⃗_4 ±v⃗_5
and Δ∖{v_0, v_1} is a standard simplex.
Let {e⃗_1, …, e⃗_n, f⃗_1, …, f⃗_n } be the standard symplectic basis of ^2n and 1 ≤≤ n.
* ⟨e⃗_1 + e⃗_2 +e⃗_3 ⟩, e_1, e_2, e_3, …, e_ is a 3-additive simplex if ≥ 3;
* ⟨e⃗_1 + e⃗_2 ⟩, ⟨e⃗_1 + e⃗_3 ⟩, e_1, e_2, e_3, …, e_ is a double-triple simplex if ≥ 3;
* ⟨e⃗_1 + e⃗_2 ⟩, ⟨e⃗_3 + e⃗_4 ⟩, e_1, … , e_4, …, e_ is a double-double simplex if ≥ 4.
Let V be an isotropic summand of ^2n. We define (V) to be the simplicial complex with vertex set
n∩ V = {v ⊆ V | v is a rank-1 summand of V }
and in which all simplices are either
* standard or 2-additive simplices in the sense of <ref> or
* 3-additive, double-triple or double-double simplices in the sense of <ref>.
If V = ⟨e⃗_1, …, e⃗_n ⟩⊆^2n, we write _n (V).
§.§ The complex IAA
We now turn to the definition of the new complex that we study in this work and show to be n-connected (<ref>). The next definition describes the new simplices appearing in .
Let n be as in <ref>.
A subset Δ = {v_0, …, v_k}⊂n of k+1 lines is called
* a σ^2 simplex, if k≥ 3, ω(v_k-1, v_k-3) = ω(v_k, v_k-2) = ± 1, ω(v_i, v_j) = 0 otherwise
and Δ∖{v_k-1, v_k} is a standard simplex;
* a skew-additive simplex, if k≥ 2, ω(v_k, v_0) = ω(v_k, v_1) = ± 1, ω(v_i, v_j) = 0 otherwise
and Δ∖{v_k} is a standard simplex;
* a 2-skew-additive simplex, if k≥ 3, v⃗_0 = ±v⃗_1 ±v⃗_2,
ω(v_k, v_0) = ω(v_k, v_1) = ± 1,
ω(v_i, v_j) = 0 otherwise
and Δ∖{v_0, v_k} is a standard simplex;
* a skew-σ^2 simplex if k≥ 3,
ω(v_k-1, v_k-3) = ω(v_k, v_k-3) = ω(v_k, v_k-2) = ± 1,
ω(v_i, v_j) = 0 otherwise
and Δ∖{v_k-1, v_k} is a standard simplex;
* a σ-additive simplex, if k≥ 2, v⃗_k = ±v⃗_k-1±v⃗_k-2,
ω(v_k-2, v_k-1) = ω(v_k-2, v_k) = ω(v_k-1, v_k) = ± 1,
ω(v_i, v_j) = 0 otherwise
and Δ∖{v_k-1, v_k} is a standard simplex.
Let {e⃗_1, …, e⃗_n, f⃗_1, …, f⃗_n } be the standard symplectic basis of ^2n and let 1≤≤ n.
* {e_, …, e_2, e_1, f_2, f_1} is a σ^2 simplex if ≥ 2;
* {e_1, e_2, …, e_, ⟨f⃗_1- f⃗_2⟩} is a skew-additive simplex if ≥ 2;
* {⟨e⃗_1+ e⃗_2 ⟩, e_1, e_2, …, e_, ⟨f⃗_1- f⃗_2 ⟩} is a 2-skew-additive simplex if ≥ 2;
* {e_, …, e_2, e_1, f_2, ⟨f⃗_1 - f⃗_2 ⟩} is a skew-σ^2 simplex if ≥ 2;
* {e_1, …, e_, f_, ⟨e⃗_+ f⃗_⟩} is a σ-additive simplex if ≥ 1.
The simplicial complexes and have n as their vertex sets. The simplices of are the ones introduced in <ref> and <ref>, the simplices of are the ones introduced in <ref>, <ref> and <ref>.
If ⊆^2n is a symplectic summand, we let []() and []() denote the full subcomplexes of and , respectively, on the set n∩ of rank-1 summands of .
The definitions of the complexes introduced in this and the previous two subsections are summarised in <ref>.
§.§ Relative complexes
In this final subsection, we introduce relative versions of the complexes introduced above (and listed in <ref>). These relative versions depend on two non-negative integers m and n, e.g. _n^m, , and and will be used to inductively prove that is n-connected.
Throughout this subsection we let n, m ≥ 0 be non-negative integers (standing assumption) and consider the symplectic module ^2(m+n) of genus m+n equipped with its symplectic standard basis {e⃗_1, …, e⃗_m+n, f⃗_1, …, f⃗_m+n}.
Let X_m+n denote one of the complexes defined above,
X_m+n∈{_m+n, _m+n, _m+n, [m+n], [m+n], [m+n], [m+n], [m+n], [m+n]}.
* We denote by X_n^m⊆_X_m+n( e_1, …, e_m )
the full subcomplex on the set of vertices v satisfying the following.
* v⃗∉⟨e⃗_1, …, e⃗_m ⟩.
* For 1≤ i ≤ m, we have ω(e_i , v) = 0, i.e. there is no σ edge between v and one of the vertices of {e_1, …, e_m}.
Note that X^0_n = X_n.
* Let Δ = {v_0, …, v_k} be a simplex of X_n^m. We denote by _X_n^m(Δ) ⊆_X_n^m(Δ)
the full subcomplex on the set of vertices v satisfying the following.
* v⃗∉⟨e⃗_1, …, e⃗_m, v⃗_0, …, v⃗_k ⟩
* For 0≤ i ≤ k, we have ω(v_i , v) = 0.
* If is a symplectic summand of ^2(m+n) that contains ⟨e⃗_1, …, e⃗_m ⟩ and X ∉{, , }, we denote by
X^m()⊆_X()( e_1, …, e_m )
the full subcomplex on the set of vertices satisfying <ref> and <ref>.
* If V is an isotropic summand of ^2(m+n) that contains ⟨e⃗_1, …, e⃗_m ⟩ and X ∈{, , }, we denote by
X^m(V)⊆_X(V)( e_1, …, e_m )
the full subcomplex on the set of vertices satisfying <ref> and <ref>.
If X ∈{, , }, then the second condition in the first two items above is trivially satisfied. Indeed, by <ref> and <ref>, any two vertices v, v' ∈ X(V) satisfy ω (v⃗, v⃗') = 0 because these are lines that are contained in the isotropic summand V of ^2(m+n).
Let m+n ≥ 4, n≥ 1, {e⃗_1, …, e⃗_m+n, f⃗_1, …, f⃗_m+n} be the standard symplectic basis of ^2(m+n) and X_m+n as in <ref>.
* ⟨e⃗_2 + e⃗_3 ⟩ is a vertex in X^m_n if 0 ≤ m ≤ 2 but not if m ≥ 3.
* Let X ∉{, , }. Then ⟨f⃗_2 + f⃗_3 ⟩ is a vertex in X^m_n if 0 ≤ m ≤ 1 but not if m ≥ 2.
* Let X ∉{, []}. Then ⟨e⃗_3 + e⃗_4⟩ is a vertex in _X^m_n(e_4) and _X^m_n(e_4) if 0 ≤ m ≤ 2. However, for m = 3 it is a vertex in the complex _X^m_n(e_4) but not in _X^m_n(e_4).
* Let X ∉{[], [], , , }. Then f_m+1 is a vertex in _X^m_n(e_m+1), but not a vertex in _X^m_n(e_m+1).
§.§.§ Simplex types in the relative complexes
Let Δ be a simplex in . In this subsection, we introduce naming conventions that we use throughout this work to refer to simplices in e.g. , _(Δ), _(Δ) and all other relative complexes.
Let Δ' be a simplex of _(Δ) and let
τ∈standard, 2-additive, 3-additive, double-triple, double-double,
mixed, σ, σ^2, skew-additive, 2-skew-additive, skew-σ^2, σ-additive
be one of the simplex types defined the previous subsections. We say that Δ' is a simplex of type τ or a τ simplex in _(Δ) if the underlying simplex {e_1, …, e_m}∪Δ∪Δ'
in [m+n] is a simplex of type τ.
<ref> also makes sense for simplices Δ' in
* using Δ = ∅ and the convention that _(∅) =;
* _(Δ) and any other subcomplex X ⊂;
* ^m_n, ^m_n and ^m_n by the previous item. However, we note that in these three complexes, we can only talk about simplices of type
τ∈{standard, 2-additive, 3-additive, double-triple, double-double}.
“Forgetting” the symplectic form ω and considering ^2(m+n) as an abelian group with trivial form, <ref>, <ref> and <ref> make sense for the isotropic summand V = ^2(m+n) and yield complexes ^m_2n+m, ^m_2n+m and ^m_2n+m containing e.g. , and as subcomplexes. Using the naming convention above, “forgetting” the symplectic form via these inclusions of subcomplexes, e.g.
↪^m_2n+m,
changes the simplex types as shown in <ref>.
This example illustrates <ref> and <ref>. Let n,m ≥ 3 and {e⃗_1, …, e⃗_m+n, f⃗_1, …, f⃗_m+n} be the standard symplectic basis of ^2(m+n).
* {e_m+3, ⟨e⃗_1 + e⃗_2 + e⃗_m+3⟩} is 3-additive in , and {e_m+3, ⟨e⃗_m+1 + e⃗_m+2 + e⃗_m+3⟩} is 3-additive in _({e_m+1, e_m+2});
* ⟨e⃗_1 + e⃗_m+2⟩, ⟨e⃗_1 + e⃗_m+3⟩, e_m+2, e_m+3 is a double-triple simplex in , and
⟨e⃗_m+1 + e⃗_m+2⟩, ⟨e⃗_m+1 + e⃗_m+3⟩, e_m+2, e_m+3 is a double-triple simplex in _(e_m+1);
* {e_m+1, f_m+1, e_m+2, f_m+2} is a σ^2 simplex in , and {e_m+2, f_m+2} is a σ^2 simplex in _({e_m+1, f_m+1});
* {⟨e⃗_m+1 + e⃗_m+2⟩, e_m+1, e_m+2, ⟨f⃗_m+1 - f⃗_m+2⟩} is a 2-skew-additive simplex in , and {e_m+1, e_m+2, ⟨f⃗_m+1 - f⃗_m+2⟩} is a 2-skew-additive simplex in _(⟨e⃗_m+1 + e⃗_m+2⟩).
Let Δ' be a simplex in _(Δ).
* The simplex Δ' is called a minimal simplex of type τ if Δ' is a simplex of type τ in the sense of <ref> and if it does not contain a proper face that also is of this type.
* The augmentation core of Δ' is the (possibly empty) unique minimal face of Δ' that is of the same type as Δ'.
As in <ref>, this defines the notion of minimal simplex and augmentation core for simplices in , _(Δ) and any other subcomplex X ⊂.
Any simplex in <ref> is minimal and hence its own augmentation core. None of the simplices in <ref> are minimal if > 2, their augmentation cores are {e_2, e_1, f_2, f_1}, {e_1, e_2, ⟨f⃗_1- f⃗_2⟩}, {⟨e⃗_1+ e⃗_2 ⟩, e_1, e_2, ⟨f⃗_1- f⃗_2 ⟩}, {e_2, e_1, f_2, ⟨f⃗_1 - f⃗_2 ⟩} and {e_, f_, ⟨e⃗_+ f⃗_⟩}, respectively.
We note that any simplex in the non-relative complex [m+n] has an augmentation core in the sense of <ref>, and that it is the empty set if and only if the simplex is standard.
Let Δ' be a simplex in _(Δ). Let Θ∈[m+n] denote the augmentation core (<ref>) of the underlying simplex {e_1, …, e_m}∪Δ∪Δ'
of Δ' in the non-relative complex [m+n].
* Δ' is called an external simplex if Θ∩ ({e_1, …, e_m}∪Δ) ≠∅.
* Δ' is called a Δ-related simplex if Θ∩Δ≠∅.
* Δ' is called an internal simplex if Θ⊆Δ', i.e. Θ∩ ({e_1, …, e_m}∪Δ) = ∅.
Note that in particular, every Δ-related simplex is external.
As in <ref>, there is an obvious way in which the notion of internal, external and (for subcomplexes of links) Δ-related simplices can be used to refer to simplices of , _(Δ) and other subcomplexes X ⊂.
This example illustrates <ref> by classifying the simplices discussed in <ref>.
* The simplex {e_m+3, ⟨e⃗_1 + e⃗_2 + e⃗_m+3⟩} in is external (but not Δ-related). The simplex {e_m+3, ⟨e⃗_m+1 + e⃗_m+2 + e⃗_m+3⟩} in the complex _({e_m+1, e_m+2}) is {e_m+1, e_m+2}-related (and external).
* The simplex ⟨e⃗_1 + e⃗_m+2⟩, ⟨e⃗_1 + e⃗_m+3⟩, e_m+2, e_m+3 in is an external (but not Δ-related) simplex. ⟨e⃗_m+1 + e⃗_m+2⟩, ⟨e⃗_m+1 + e⃗_m+3⟩, e_m+2, e_m+3 is an e_m+1-related (and external) simplex in the complex _(e_m+1).
* The simplex {e_m+1, f_m+1, e_m+2, f_m+2} in is internal. The simplex {e_m+2, f_m+2} in _({e_m+1, f_m+1}) is {e_m+1, f_m+1}-related (and external).
* The simplex {⟨e⃗_m+1 + e⃗_m+2⟩, e_m+1, e_m+2, ⟨f⃗_m+1 - f⃗_m+2⟩} in is internal. The simplex {e_m+1, e_m+2, ⟨f⃗_m+1 - f⃗_m+2⟩} in _(⟨e⃗_m+1 + e⃗_m+2⟩) is ⟨e⃗_m+1 + e⃗_m+2⟩-related (and external).
§ STRUCTURE OF LINKS
In this section we describe and study the links of various simplices in the complexes , and , which we defined in the previous section.
Throughout, n and m are natural numbers such that n≥ 1 and m≥ 0.
§.§ Links in I and I"005E"03B4
We start by describing the links of simplices in .
Let Δ be a standard simplex in . Then
_(Δ) ≅[n-((Δ)+1)][m+((Δ) + 1)].
If Δ = v_0, …, v_k is a standard simplex, then e⃗_1,…, e⃗_m, v⃗_0, …, v⃗_k spans an isotropic summand of rank m+k+1 of ^2(m+n).
Hence by <ref>, it can be extended to a symplectic basis e⃗_1,…, e⃗_m, v⃗_0, …, v⃗_k ∪ B
of ^2(m+n).
Let ϕ^2(m+n)→^2(m+n) be a symplectic isomorphism such that ϕ(e⃗_i) = e⃗_i for all 1≤ i ≤ m and ϕ(v⃗_j) = e⃗_m+j for all 0≤ i ≤ k.
It is easy to check that ϕ induces the isomorphism _(Δ) ≅[n-(k+1)][m+(k + 1)].
The following lemma is an immediate consequence of the definition of 2-additive simplices (see <ref>).
Let Δ = v_0, …, v_k be a 2-additive simplex in such that v_0 = ⟨v⃗_1 + v⃗_2 ⟩ or v_0 = ⟨e⃗_i + v⃗_1 ⟩ for some 1 ≤ i ≤ m. Then Δ' = v_1, …, v_k is a standard simplex in and
_(Δ) = _(Δ').
§.§ Links in IAA* and IAA
We describe the link of some minimal simplices in and .
Let v be a vertex of .
Then there is a symplectic isomorphism ϕ: ^2(m+n) → ^2(m+n) fixing e_1, …, e_m and mapping v to e_m+1 that induces a commutative square in which the horizontal maps are isomorphisms
_(v) ≅ [n-1][m+1] (top row) and _(v) ≅ [n-1][m+1] (bottom row),
and in which the vertical maps are the natural inclusions.
Furthermore, these isomorphisms preserve the type of every simplex, i.e. they send standard simplices to standard simplices, 2-additive simplices to 2-additive simplices etc.
Using <ref>, we can extend e⃗_1,…, e⃗_m, v⃗ to a symplectic basis e⃗_1,…, e⃗_m, v⃗∪ B of ^2(m+n).
Let ϕ^2(m+n)→^2(m+n) be a symplectic isomorphism such that ϕ(e⃗_i) = e⃗_i for all 1≤ i ≤ m and ϕ(v⃗) = e⃗_m+1 . Using <ref>, we find that
_(v) = __[m+n]( e_1, …, e_m )(v) = _[m+n]( e_1, …, e_m,v )
and
[n-1][m+1] = _[m+n]( e_1, …, e_m, e_m+1);
Similar identifications holds for . It is easy to see that ϕ induces the desired isomorphisms of simplicial complexes.
We next want to prove a similar, but slightly stronger statement for links of minimal σ simplices. This requires the following definition:
Let m,n ≥ 0 and consider the symplectic module (^2(m+n), ω) with the symplectic basis {e⃗_1, f⃗_1, …, e⃗_m+n, f⃗_m+n}.
We denote by
(-): ^2(m+n) → ℤ, v⃗ ↦ ω(e⃗_m+n, v⃗),
the projection onto the f⃗_m+n-coordinate.
We denote by v̅∈{v⃗, - v⃗} the choice[This choice is unique if |(v⃗)|>0.] of a primitive vector in v whose f⃗_m+n-coordinate is non-negative, i.e. that satisfies
(v̅) ≥ 0.
If X is a subcomplex of , this induces a function
(-): (X) → ℕ, v ↦ |(v⃗)| = (v̅) = |ω(e_m+n, v)|,
that sends every vertex of X to the absolute value of the f⃗_m+n-coordinate of some (hence any) primitive vector v⃗ ∈ v.
* We say that a vertex v ∈(X) has rank (v).
* For R ∈ ℕ, we denote by X^< R the full subcomplex of X on all vertices v ∈ (X) of rank less than R, i.e. (v) < R.
* For R ∈ ℕ ∪ {∞}, we denote by X^≤ R the full subcomplex of X on all vertices v ∈ (X) of rank less than or equal to R, i.e. (v) ≤ R. In particular, X^≤∞ = X.
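For instance, if m + n = 2 and v⃗ = e⃗_1 - f⃗_1 + 3f⃗_2 ∈ ℤ^4, then ω(e⃗_2, v⃗) = 3, so v̅ = v⃗ and the vertex v = ⟨v⃗⟩ has rank 3; whenever v is a vertex of a subcomplex X, it lies in X^≤ R exactly for R ≥ 3 and in X^< R exactly for R > 3.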
Let Δ be a σ edge in and R ∈ ℕ ∪ {∞}. Then, for some R' ∈ ℕ ∪ {∞}, there is a commutative square in which the horizontal maps are isomorphisms
(_(Δ))^≤ R ≅ ([n-1])^≤ R' and (_(Δ))^≤ R ≅ ([n-1])^≤ R',
and in which the vertical maps are the natural inclusions.
The isomorphisms in this diagram send σ simplices to standard simplices, mixed simplices to 2-additive simplices and σ^2 simplices to σ simplices.
Let Δ = {v, w} and extend {v⃗, w⃗, e⃗_1,…, e⃗_m } to a symplectic basis {v⃗, w⃗, e⃗_1,…, e⃗_m }∪ B of ^2(m+n) (as before, this is possible by <ref>).
Let ϕ^2(m+n)→^2(m+n) be a symplectic isomorphism fixing e⃗_1, …, e⃗_m, mapping Δ = {v, w} to e_m+n, f_m+n and restricting to an isomorphism
ϕ̅⟨e⃗_1,…, e⃗_m ∪ B ⟩→^2(m+n-1) = ⟨e⃗_1,…, e⃗_m, …, e⃗_m+n-1, f⃗_1, …, f⃗_m+n-1⟩.
We claim that ϕ̅ induces compatible isomorphisms _(Δ) ≅[n-1] and _(Δ) ≅[n-1]: Note that _(Δ) and _(Δ) have the same vertex set. It consists of all vertices of that are contained in the symplectic complement ⟨Δ⟩^⊥ = ⟨{e⃗_1,…, e⃗_m}∪ B ⟩. Using <ref> and <ref>, we see that all simplices in _(Δ) are either σ or mixed simplices and the only additional simplices in _(Δ) are of type σ^2. From this, the isomorphisms _(Δ) ≅[n-1] and _(Δ) ≅[n-1] follow.
We now explain how to determine R', and discuss the two restrictions (_(Δ))^≤ R≅ ([n-1])^≤ R' and (_(Δ))^≤ R≅ ([n-1])^≤ R': Let R ∈∪{∞}. Let z be a vertex of one of the complexes on the left-hand side of <ref>. Then it holds that
(z) = |ω(e⃗_m+n,z⃗)| ≤ R.
The isomorphisms defined in the previous paragraph map the complexes on the left-hand side of <ref> to the full subcomplexes of [n-1] and [n-1], respectively, spanned by all vertices z satisfying
|ω(ϕ(e⃗_m+n), z⃗)| ≤ R.
We need to find a suitable R' and show that these subcomplexes are isomorphic to ([n-1])^≤ R' and ([n-1])^≤ R', respectively.
For this, let ϕ(e⃗_m+n) be the orthogonal projection of ϕ(e⃗_m+n) to ^2(m+n-1) and let ω_^2(m+n-1) denote the restriction of the symplectic form to ^2(m+n-1). For any z ⊆^2(m+n-1), it holds that
|ω(ϕ(e⃗_m+n), z⃗)|≤ R ⟺ |ω_^2(m+n-1)(ϕ(e⃗_m+n), z⃗)|≤ R.
However, ϕ(e⃗_m+n) need not be primitive in ^2(m+n-1); e.g. if ϕ(e⃗_m+n) = e⃗_m+n, then ϕ(e⃗_m+n) = 0⃗.
We consider two cases. Firstly, assume that ϕ(e⃗_m+n)∈⟨e⃗_1, …, e⃗_m ⟩.
Then we have
ω(ϕ(e_m+n), z)=0 for all vertices z of [n-1] and [n-1],
since ω(e⃗_i, z⃗) = 0 for all 1≤ i ≤ m. Hence, the claim is true for R' := ∞.
If this is not the case, we can apply <ref> to e⃗ = ϕ(e⃗_m+n) because
ω_^2(m+n-1)(ϕ(e⃗_m+n), e⃗_i) = ω(ϕ(e⃗_m+n),e⃗_i) = ω(ϕ(e⃗_m+n), ϕ(e⃗_i))= 0
for 1≤ i ≤ m. This yields e⃗'∈^2(m+n-1) that is isotropic to e⃗_1, …, e⃗_m and such that e⃗_1, …, e⃗_m, e⃗' can be extended to a symplectic basis B' of ^2(m+n-1). Furthermore, there is an a∈ such that for all vertices z ∈[n-1], [n-1], we have
ω_^2(m+n-1)(ϕ(e⃗_m+n), z⃗) = a ω_^2(m+n-1)(e⃗', z⃗).
So we get
|ω_^2(m+n-1)(ϕ(e⃗_m+n), z⃗)| ≤ R ⟺ |ω_^2(m+n-1)(e⃗', z⃗)| ≤ ⌊ R/a ⌋ =: R'.
Let ψ^2(m+n-1)→^2(m+n-1) be an isomorphism that sends B' to the standard basis and such that ψ(e⃗_i) = e⃗_i and ψ(e⃗) = e⃗_m+n-1. Then ψ∘ϕ̅ induces the desired isomorphisms.
Let Δ be a minimal σ^2, skew-σ^2 or σ-additive simplex in . Then
_(Δ) ≅[n-((Δ)-1)][m].
First observe that these types of simplices can only occur as internal simplices (see <ref> and <ref>).
Hence, minimal σ^2 and skew-σ^2 simplices are 3-dimensional of the form v_0, v_1, v_2, v_3 and minimal σ-additive are two dimensional of the form v_0, v_1, v_2. In either case, such a minimal simplex Δ determines a symplectic summand ⟨Δ⟩ of ^2(m+n) of genus (Δ)-1.
The lines e_1, …, e_m are contained in the symplectic complement ⟨Δ⟩^⊥⊆^2(m+n) of this summand and we can, similarly to the proof of <ref>, find a symplectic isomorphism ⟨Δ⟩^⊥→^2(m+n-((Δ)-1)) that restricts to the identity on e_1,…, e_m. This isomorphism of symplectic spaces induces the desired isomorphism of simplicial complexes.
Let Δ be a minimal double-triple or double-double simplex in . Then
_(Δ) = _(Δ) ≅[n-((Δ)-1)][m+((Δ)- 1)].
We start by noting that double-triple or double-double simplices Δ can also occur as external simplices (see <ref>). We can order the vertices of Δ such that Δ = v_0, …, v_k where k = (Δ) ≥ 2 and v_2, …, v_k is a standard simplex in . Similar to the proof of <ref>, we can extend e⃗_1, …, e⃗_m, v⃗_2, …, v⃗_k to a symplectic basis of ^2(m+n) and define a symplectic isomorphism ^2(m+n)→^2(m+n) that maps e⃗_1, …, e⃗_m, v⃗_2, …, v⃗_k to e⃗_1, …, e⃗_m+(k-1). This map then induces the desired isomorphism of simplicial complexes _(Δ) ≅[n-(k-1)][m+(k- 1)].
Lastly, we also describe the links of minimal external 2-additive simplices in . We compare them with links in .
Let v ∈ be a vertex of rank R = (v)>0 and 1≤ i ≤ m.
Then _^<R(v) is a subcomplex of _^<R({v,⟨v⃗±e⃗_i⟩})
and every simplex of _^<R({v,⟨v⃗±e⃗_i⟩}) that is not contained in _^<R(v) is of type double-triple.
The above lemma can be proved by going through all types of simplices in and . This is not complicated and very similar to <cit.>, so we omit the proof here.
§.§ Linkhats in IA and IAA*
The following observation about can be shown like <ref>.
Let v be a vertex of . Then _(v)≅[n-1][m+1].
Finally, we need the following lemma about .
Let v ∈ be a vertex of rank R = (v)>0.
* If w∈^<R_(v), then v,w is a σ simplex or w∈^<R_(v).
* _^<R( v,⟨v⃗±e⃗_i⟩) = _^<R( v,⟨v⃗±e⃗_i⟩) for all 1≤ i ≤ m.
We first consider <ref>.
Let w∈_^<R( v,⟨v⃗±e⃗_i⟩). It suffices to verify that
w∈_^<R( v,⟨v⃗±e⃗_i⟩)
since _^<R( v,⟨v⃗±e⃗_i⟩) is a full subcomplex of _^<R( v,⟨v⃗±e⃗_i⟩).
Following <ref>, there are two things to check: Firstly, that we have w⃗∉⟨e⃗_1, …, e⃗_m, v⃗⟩ and secondly, that ω(w,v) = ω(w,⟨v⃗±e⃗_i⟩) = 0.
The first condition follows exactly as in <cit.>.
The second condition follows because v and ⟨v⃗±e⃗_i⟩ are contained in the augmentation core of a 2-additive face of v,⟨v⃗±e⃗_i⟩∪ e_1, …, e_m. This implies that all lines w that form a simplex with them in [m+n] must be isotropic to them.
The proof of <ref> is easier: Assume that v,w is not a σ simplex. Then ω(v,w) = 0. So the only thing to check is that w⃗∉⟨e⃗_1, …, e⃗_m, v⃗⟩ and this follows again as in <cit.>.
§ REGULAR MAPS
In order to prove <ref>, i.e. to show that π_k() = 0 for all k≤ n, we need to study maps from spheres into and its relative versions as well as homotopies between such maps.
In order to control the behaviour of these, we will only work with certain simplicial maps S^k→ from triangulated spheres into these complexes and we will only allow certain “regular” homotopies between such maps. Related ideas have been used by Putman to prove that is (n-1)-connected (see <cit.>, and <cit.>, which fixes some small gaps in Putman's argument).
The aim of this section is to introduce the necessary definitions and properties of the types of maps that we will be working with.
§.§ Combinatorial manifolds
We start by introducing definitions and elementary properties of combinatorial manifolds. We mostly stick to the notation used in <cit.>. For general references about this topic, see <cit.> and <cit.>.
In what follows, we assume that for k<0, the empty set is both a k-ball and a k-sphere.
Let k be a natural number. We define the notion of a combinatorial k-manifold inductively as follows:
* Every 0-dimensional simplicial complex is a combinatorial 0-manifold.
* A k-dimensional simplicial complex M is called a combinatorial k-manifold if for every simplex Δ of M, the link _M(Δ) is a combinatorial (k-(Δ)-1)-manifold whose geometric realisation is either homeomorphic to a sphere or to a ball.
If M is a combinatorial k-manifold, we denote by ∂ M the subcomplex consisting of all simplices Δ such that (Δ)<k and |_M(Δ)| is a ball.
We say that M is a combinatorial k-sphere or a combinatorial k-ball if its geometric realisation is homeomorphic to a k-sphere or a k-ball.
The following lemma collects some elementary properties of combinatorial manifolds. For the proofs, see <cit.>, in particular <cit.> and <cit.>.
Let M be a combinatorial k-manifold. Then
* the geometric realisation |M| is a topological manifold;
* the boundary ∂ M is a combinatorial (k-1)-manifold;
* the geometric realisation of the boundary is the boundary of the geometric realisation, |∂ M| = ∂ |M|.
In particular, these properties yield the following descriptions of the boundaries of combinatorial balls and spheres.
If B is a combinatorial k-ball, then ∂ B is a combinatorial (k-1)-sphere.
If S is a combinatorial k-sphere, then ∂ S = ∅, so for all simplices Δ of S, the link _S(Δ) is a combinatorial (k-(Δ)-1)-sphere.
The next lemma is an easy consequence of the definition of ∂ M.
Let M_1 and M_2 be combinatorial k_1- and k_2-balls or -spheres.
* If at least one of M_1, M_2 is a ball, then the join M_1 ∗ M_2 is a combinatorial (k_1+k_2+1)-ball.
* If both M_1 and M_2 are spheres, then the join M_1 ∗ M_2 is a combinatorial (k_1+k_2+1)-sphere.
Furthermore, for B, B' combinatorial balls and S, S' combinatorial spheres, we have
∂ (B ∗ B') = (∂ B ∗ B') ∪ (B ∗∂ B'),
∂ (S ∗ B) = S ∗∂ B,
∂ (S ∗ S') = ∅.
<ref> and <ref> follow from <cit.> (that our definition of combinatorial balls and spheres agrees with the one in <cit.> follows from <cit.>).
If Δ = Δ_1 ∗Δ_2 is a simplex of M_1∗ M_2, where Δ_1∈ M_1 and Δ_2∈ M_2, we have
_M_1∗ M_2(Δ) = _M_1(Δ_1)∗_M_2(Δ_2).
Hence, the statement about the boundaries follows from <ref> and <ref>. (Note that ∂ D^0 = ∅ = ∂ S^0 and ∅∗ X = X for all simplicial complexes X.)
Note that the above statement does not generalise to arbitrary combinatorial manifolds. (E.g. the join of three points with a singleton is not a manifold.)
We will frequently use the following consequence of <ref> of <ref>:
Let S be a combinatorial k-sphere and Δ a simplex of S.
Then _S(Δ) = Δ∗_S(Δ) is a combinatorial k-ball and
∂_S(Δ) = ∂Δ∗_S(Δ).
This immediately follows from <ref> because _S(Δ) is a (k-1)-sphere, therefore ∂_S(Δ) = ∅ as observed above.
The next lemma allows us to restrict ourselves to combinatorial spheres and balls when investigating the homotopy groups of simplicial complexes.
Let X be a simplicial complex and k≥ 0.
* Every element of π_k(X) can be represented by a simplicial map ϕ S→ X, where S is a combinatorial k-sphere.
* If S is a combinatorial k-sphere and ϕ S→ X is a simplicial map such that |ϕ| is nullhomotopic, then there is a combinatorial (k+1)-ball B with ∂ B = S and a simplicial map ψ B→ X such that ψ|_S = ϕ.
This can be shown using Zeeman's simplicial approximation (<ref>), see <cit.>. The key observation for it is that subdivisions of combinatorial manifolds are again combinatorial manifolds.
Recall that if M is a simplicial complex and C⊆ M a subcomplex, we write M∖ C for the subset (not subcomplex) of M consisting of all simplices that are not contained in C.
Let M be a combinatorial k-manifold and C⊆ M a subcomplex that is a combinatorial k-ball. Let C' be a combinatorial k-ball such that
C'∩ M = ∂ C = ∂ C'.
Then M' := (M∖ C) ∪ C' is a combinatorial k-manifold with |M| ≅ |M'|. (See <ref>.)
First observe that topologically, |M'| is obtained from |M| by removing the maximal-dimensional ball |C| and attaching a ball |C'| of the same dimension along the same boundary ∂ |C| = ∂ |C'|. Clearly, this does not change the homeomorphism type of |M|, i.e. we have |M|≅ |M'|.
Hence, it suffices to show that M' is indeed a combinatorial manifold.
We prove this claim by induction on k.
The base cases k ≤ 0 are trivial.
Now assume k > 0 and that the claim holds for all l < k.
Let Δ be a simplex of M'. We need to show that _M'(Δ) is a combinatorial (k-(Δ)-1)-sphere or -ball.
If Δ is contained in M'∖ C', then
_M'(Δ) = _M(Δ),
so the claim follows because M is a combinatorial manifold.
Similarly, if Δ is contained in C'∖(C'∩ M), then _M'(Δ) = _C'(Δ), so the statement follows because C' is a combinatorial ball.
So we can assume that Δ⊆ C' ∩ M, which by assumption means that
Δ⊆∂ C = ∂ C'.
As C and C' are combinatorial k-balls, both _C(Δ) and _C'(Δ) are combinatorial (k-(Δ)-1)-balls.
Furthermore, _M(Δ) is a combinatorial (k-(Δ)-1)-ball or sphere and it is not hard to see that _M'(Δ) is obtained from _M(Δ) by replacing _C(Δ) with _C'(Δ).
So by the induction hypothesis, _M'(Δ) is again a combinatorial (k-(Δ)-1)-ball or sphere, respectively.
It follows that M' is a combinatorial k-manifold.
In particular, <ref> implies that subdivisions of combinatorial manifolds are again combinatorial manifolds.
We also need the following variant of <ref>, which is depicted in <ref>. We state it separately to simplify references later on, but omit the proof as it is entirely parallel to the one of <ref>.
Let M be a combinatorial k-ball and C⊆ M a subcomplex that is a combinatorial k-ball. Let C' be a combinatorial k-ball such that
C'∩ M ⊆∂ C ∩∂ C' and ∂ C ⊆∂ M ∪∂ C'.
Let M' := (M∖ C) ∪ C'. Then, if |M'| is a k-ball, M' is in fact a combinatorial k-ball.
If M' is obtained from M as in <ref> or <ref>, we say that M' is obtained from M by replacing C with C'.
We record the observation from <ref> in the proof of <ref> for later reference:
If M' is obtained from M by replacing C with C' and Θ is a simplex in M'∖ C', we have _M'(Θ) = _M(Θ).
§.§ Cross maps and regularity
<ref> above allows us to restrict to maps M→ from combinatorial manifolds to for proving <ref>. We will further restrict the class of maps by asking that they satisfy certain regularity conditions, which we introduce in this subsection. The notion of regularity presented here extends the concept of σ-regularity as defined by Putman, see <ref>. In <ref> at the end of this subsection, we briefly describe some intuition for these regularity conditions.
* We denote by C_k the join of k copies of S^0. (This is the boundary of the k-dimensional cross polytope and a combinatorial (k-1)-sphere by <ref>.) We interpret C_k for k ≤ 0 as the empty set.
* We denote by P_3 the simplicial complex with the vertices x_1, x_2, x_12, y_1, y_2, y_12 that is given as the union of the three 3-simplices {x_1, x_2, x_12, y_12}, {x_1, x_2, y_1, y_12} and {x_1, y_1, y_2, y_12}. This complex is the 3-dimensional “prism” depicted in <ref>.
<ref> of <ref> implies that C_k is a combinatorial (k-1)-sphere.
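Concretely, C_1 = S^0 consists of two points, C_2 = S^0 ∗ S^0 is a 4-cycle (the boundary of a square), and C_3 is the boundary of the octahedron. In general, C_k has 2k vertices, one pair from each S^0-factor, and its simplices are exactly the sets of vertices containing at most one vertex from each pair.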
The following is easy to verify.
The complex P_3 is a combinatorial manifold.
In this work, it is helpful to think of prisms P contained in a simplicial complex M as one cell. We use the following notation to describe neighbourhoods of prisms P in M.
If P is a subcomplex of a simplicial complex M, then …
* … the link of P in M is
_M(P) = ⋂_Δ simplex of P_M(Δ);
* … the star of P in M is
_M(P) = P ∗_M(P).
Note that if P is a simplex, this definition agrees with the usual definition of link and star in simplicial complexes.
Let Δ^1, Δ^2 denote an abstract 1- and 2-simplex, respectively.
* A σ^2 cross map is a simplicial map ϕΔ^1 ∗Δ^1 ∗ C_k-2→ with the following property: Let x_1,y_1 …, x_k,y_k be the vertices of Δ^1 ∗Δ^1 ∗ C_k-2. Then there is a symplectic summand of ^2(m+n) with a symplectic basis v⃗_1, w⃗_1, …, v⃗_k, w⃗_k such that ϕ(x_i)= v_i and ϕ(y_i) = w_i for all i.
* A prism cross map is a simplicial map ϕ P_3 ∗ C_k-2→ with the following property: Let x_1, x_2, x_12, y_1, y_2, y_12 be the vertices of P_3 as in <ref> and x_3, y_3, …, x_k,y_k be the vertices of C_k-2. There is a symplectic summand of ^2(m+n) with a symplectic basis v⃗_1, w⃗_1, …, v⃗_k, w⃗_k such that
ϕ(x_1) = v_1, ϕ(x_2) = v_2, ϕ(x_12) = ⟨v⃗_1 + v⃗_2 ⟩,
ϕ(y_1) = w_1, ϕ(y_2) = w_2, ϕ(y_12) = ⟨w⃗_1 - w⃗_2 ⟩,
and ϕ(x_i)= v_i, ϕ(y_i)= w_i for i∈ 3,…, k .
* A σ-additive cross map is a simplicial map ϕΔ^2 ∗ C_k-1→ with the following property: Let xy_1,x_1,y_1, …, x_k,y_k be the vertices of Δ^2 ∗ C_k-1. Then there exists a symplectic summand of ^2(m+n) with a symplectic basis v⃗_1, w⃗_1, …, v⃗_k, w⃗_k such that ϕ(x_i)= v_i, ϕ(y_i) = w_i for all i and ϕ(xy_1) = ⟨v⃗_1 + w⃗_1 ⟩.
* An external 2-skew-additive cross map is a simplicial map ϕΔ^2 ∗ C_k-1→ with the following property: Let xy_1, x_1, y_1, …, x_k, y_k be the vertices of Δ^2 ∗ C_k-1. Then there exists a symplectic summand of ^2(m+n) with a symplectic basis {v⃗_1, w⃗_1, …, v⃗_k, w⃗_k} such that ϕ(x_i) = v_i, ϕ(y_i) = w_i for all i and
ϕ(xy_1) = ⟨v⃗_1 ±⟩ for some ∈{e_1, …, e_m}.
We observe the following facts about cross maps:
* All cross maps defined above are isomorphisms onto their images.
* If {v⃗_1, w⃗_1, …, v⃗_k, w⃗_k} is the symplectic basis in the image of a cross map, then it is compatible with {e⃗_1, …, e⃗_m}. That is, {v⃗_1, w⃗_1, …, v⃗_k, w⃗_k, e⃗_1, …, e⃗_m}
can be extended to a symplectic basis of ^2(m+n).
* The link of the prism P_3 in the domain P_3 ∗ C_k-2 of a prism cross map (as defined in <ref>) is equal to C_k-2 and its star is the whole domain,
_P_3 ∗ C_k-2(P_3) = C_k-2 and _P_3 ∗ C_k-2(P_3) = P_3 ∗ C_k-2.
* A prism cross map ϕ sends P_3 to the union of the following three simplices:
* the 2-skew-additive simplex v_1, v_2, ⟨v⃗_1 + v⃗_2 ⟩, ⟨w⃗_1 - w⃗_2 ⟩,
* the skew-σ^2 simplex v_1, v_2, w_1, ⟨w⃗_1 - w⃗_2 ⟩ and
* the 2-skew-additive simplex v_1, w_1, w_2, ⟨w⃗_1 - w⃗_2 ⟩.
This image contains two skew-additive simplices, namely { v_1, v_2, ⟨w⃗_1 -w⃗_2 ⟩} and { v_1, w_1, ⟨w⃗_1 - w⃗_2 ⟩}.
It is a combinatorial 3-ball whose boundary ∂ϕ(P_3) is the union of the following eight simplices, which are all contained in (see also the count check following the list):
v_1, v_2, ⟨v⃗_1 + v⃗_2 ⟩ — 2-additive simplex
v_1, ⟨v⃗_1 + v⃗_2 ⟩, ⟨w⃗_1 - w⃗_2 ⟩ —σ simplex
v_2, ⟨v⃗_1 + v⃗_2 ⟩, ⟨w⃗_1 - w⃗_2 ⟩ —σ simplex
v_1, v_2, w_1 —σ simplex
v_2, w_1, ⟨w⃗_1 - w⃗_2 ⟩ —σ simplex
v_1, w_1, w_2 —σ simplex
v_1, w_2, ⟨w⃗_1 - w⃗_2 ⟩ —σ simplex
w_1, w_2, ⟨w⃗_1 - w⃗_2 ⟩ — 2-additive simplex
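As a quick count check of this list (an elementary verification, not needed for the arguments below): ϕ(P_3) consists of three tetrahedra glued along the two skew-additive triangles { v_1, v_2, ⟨w⃗_1 - w⃗_2 ⟩} and { v_1, w_1, ⟨w⃗_1 - w⃗_2 ⟩} mentioned above, so of the 3 · 4 = 12 triangular faces exactly 2 · 2 = 4 are interior and 12 - 4 = 8 lie on the boundary, matching the eight simplices listed. On the six vertices v_1, v_2, ⟨v⃗_1 + v⃗_2 ⟩, w_1, w_2, ⟨w⃗_1 - w⃗_2 ⟩ these eight triangles have 12 edges in total, so
χ(∂ϕ(P_3)) = 6 - 12 + 8 = 2,
as expected for a combinatorial 2-sphere.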
Spelling out the definitions, we can describe the simplex types in the images of cross maps:
Let ϕ C → be a cross map.
Then C≅ϕ(C) is a combinatorial ball and:
* If ϕ is a σ^2 cross map, then
* ϕ(∂ C) contains only standard and σ simplices;
* ϕ(C ∖∂ C) contains only σ^2 simplices.
* If ϕ is a prism cross map, then
* ϕ(∂ C) contains only standard, σ and (internal) 2-additive simplices;
* ϕ(C ∖∂ C) contains only (internal) skew-additive, (internal) 2-skew-additive and skew-σ^2 simplices.
* If ϕ is a σ-additive cross map, then
* ϕ(∂ C) contains only standard and σ simplices;
* ϕ(C ∖∂ C) contains only σ-additive simplices.
* If ϕ is an external 2-skew-additive cross map, then
* ϕ(∂ C) contains only standard, σ and (external) 2-additive simplices;
* ϕ(C ∖∂ C) contains only (external) 2-skew-additive simplices.
We can write C = Σ∗ C_i as in the definition of the corresponding cross map. By <ref>, this is a combinatorial ball and we have
∂ C = ∂Σ∗ C_i∪Σ∗∂ C_i = ∂Σ∗ C_i,
where we use that C_i is a combinatorial sphere and hence has empty boundary, ∂ C_i = ∅.
Spelling out the definitions of the cross maps, we obtain the claimed descriptions of ϕ(∂ C) (for the case of prism cross maps, see <ref> of <ref>).
<ref> also implies that if a simplex of C does not lie in ∂ C, it must be of the form Δ∗Θ, where Δ is a face of Σ that is not contained in ∂Σ and Θ is contained in C_i. The image ϕ(Δ∗Θ) is then a simplex of the same type as ϕ(Δ). Again going through the definitions, the claimed descriptions of ϕ(C ∖∂ C) follow.
Noting that ∂ϕ(C) = ϕ(∂ C), we get the following corollary.
Let ϕ C → be a σ^2-, prism-, σ-additive or external 2-skew-additive cross map. Then ∂ϕ(C) ⊆.
Let M be a combinatorial manifold. A simplicial map ϕ M → is called …
* … σ^2-regular if the following holds: If Δ is a simplex of M such that ϕ(Δ) is a minimal (i.e. 3-dimensional) σ^2 simplex, then ϕ|__M(Δ) is a σ^2 cross map.
* … weakly prism-regular if the following holds: Let Δ be a simplex of M such that ϕ(Δ) is a minimal skew-additive, 2-skew-additive or skew-σ^2 simplex. Then one of the following two cases holds:
* There exists a unique subcomplex P ≅ P_3 of M such that Δ⊆ P, _M(P) = _M(Δ') for any of the three maximal simplices[The intuition behind this condition is that in a (weakly) prism-regular map, a prism P, although it is the union of three maximal simplices, should rather be considered as a single – non-simplicial – cell.] Δ' of P, and ϕ|__M(P) is a prism cross map.
* The simplex Δ has dimension 2,
ϕ(Δ) = {v_0, v_1, ⟨v⃗_0 ±⟩} for some ∈{e_1, …, e_m},
is an external 2-skew-additive simplex with ω(v⃗_0, v⃗_1) = ± 1, and ϕ|__M(Δ) is an external 2-skew-additive cross map.
* … prism-regular if the following holds: Let Δ be a simplex of M such that ϕ(Δ) is a minimal skew-additive, 2-skew-additive or skew-σ^2 simplex. Then the condition in <ref> is satisfied.
* … σ-additive-regular if the following holds: If Δ is a simplex of M such that ϕ(Δ) is a minimal (i.e. 2-dimensional) σ-additive simplex, then ϕ|__M(Δ) is a σ-additive cross map.
* … weakly regular if ϕ is σ^2-regular, weakly prism-regular and σ-additive-regular.
* … regular if ϕ is σ^2-regular, prism-regular and σ-additive-regular.
Note that, compared to the definition of a regular map, the regularity notion for prisms is slightly less restrictive in the definition of a weakly regular map. In particular, regularity implies weak regularity:
Every regular map ϕ M → is weakly regular.
Let ϕ M → be a weakly regular map.
Let C and C' be maximal subcomplexes of M such that the restrictions ϕ|_C and ϕ|_C' are σ^2-, prism-, σ-additive- or external 2-skew-additive cross maps.
Then either C and C' coincide or they only intersect in their boundaries,
C=C' or C ∩ C' ⊆∂ C ∩∂ C'.
It is enough to show that a simplex Δ in C ∖∂ C can only be contained in C' if C=C': By symmetry, the same argument then also shows that a simplex in C' ∖∂ C' can only be contained in C if C=C'. This is equivalent to the claim.
By <ref>, the simplex Δ∈ C ∖∂ C can only be contained in C' if ϕ|_C and ϕ|_C' are cross maps of the same type.
It is easy to verify that this implies C = C'.
Let 0≤ k ≤ n and let X be a subcomplex of .
* For i = 1,2, let ϕ_i S_i → be simplicial maps from combinatorial k-spheres S_i. We say that ϕ_1 and ϕ_2 are (weakly) regularly homotopic (in X) if there are a combinatorial manifold M homeomorphic to S^k × [0,1] with ∂ M = S_1 ⊔ S_2 and a (weakly) regular map Ψ M → X⊆ such that Ψ|_S_i = ϕ_i for i=1,2.
* Let ϕ S → be a simplicial map from a combinatorial k-sphere S. We say that ϕ is (weakly) regularly nullhomotopic (in X) if there is a combinatorial ball B with ∂ B = S and a (weakly) regular map Ψ B → X⊆ such that Ψ|_S = ϕ.
Note that we only define (weak) regular homotopies between maps whose domains are combinatorial spheres of the same dimension. So if S is a combinatorial k-sphere, ϕ S → a simplicial map and we say “ϕ is (weakly) regularly homotopic to →”, then we always assume that is also a combinatorial k-sphere and is a simplicial map.
The notions of regular homotopies and nullhomotopies are clearly special cases of the corresponding topological notions. I.e. if ϕ and are weakly regularly homotopic, then |ϕ| and || are homotopic; and if ϕ is weakly regularly nullhomotopic, then |ϕ| is nullhomotopic.
Furthermore, these notions are compatible with one another (cf. <cit.>):
If ϕ and ϕ' are (weakly) regularly homotopic and ϕ' is (weakly) regularly nullhomotopic, then so is ϕ.
For constructing regular homotopies, we mostly use the following lemma. It can be proved like <cit.>.
Let S be a combinatorial k-sphere and ϕ S→ a simplicial map.
Let B be a combinatorial (k+1)-ball and Ψ B → a (weakly) regular map.
Assume that ∂ B = D_1 ∪ D_2
is the union of two combinatorial k-balls D_1, D_2 such that
D_1 ∩ D_2 = ∂ D_1 = ∂ D_2.
Furthermore, assume that S ∩ B = D_1 and Ψ|_D_1 = ϕ|_D_1 (see <ref>).
Let be the combinatorial k-sphere that is obtained from S by replacing D_1 with D_2 and : → the simplicial map defined by |_D_2 = Ψ|_D_2 and |_∖ D_2 = ϕ. Then ϕ is (weakly) regularly homotopic to .
We say that is obtained from ϕ by replacing ϕ|_D_1 by Ψ|_D_2.
Using <ref>, we obtain the following.
If is obtained from ϕ by replacing ϕ|_D_1 by Ψ|_D_2 and Θ is a simplex in ∖ D_2 = S ∖ D_1, we have
|__(Θ) = ϕ|__S(Θ).
We finish with a comment about the motivation for introducing the regularity conditions above.
Much of this work is concerned with showing that the complex is highly connected, i.e. proving <ref>. When we eventually do this in <ref> and <ref>, we consider σ-regular maps ϕ S →, which are introduced in <ref>, and regular homotopies ψ B →, in the sense of <ref>, between these.
Throughout this work and in these arguments in particular, it is useful to think of the cross maps contained in ϕ and ψ as being defined on a single polyhedral cell that consists of several simplices. For example, a prism cross map ψ|_P_3 ∗ C_k-2 P_3 ∗ C_k-2→ contained in a regular homotopy ψ can be thought of as a map defined on a single polyhedral cell that for k = 2 has the shape of the prism depicted in <ref>.
It might be possible to formulate the arguments presented in this work in a category of suitable polyhedral cell complexes. The disadvantage of this might be that the arguments are less parallel to the ones in previous work <cit.>.
§ REDUCING THE RANK
In this section, we study certain subcomplexes X of , , and .
We develop tools that are applied in later sections (see <ref> and <ref>) to prove that various simplicial complexes, including , are highly connected.
Recall from <ref> that the rank of a vertex v of X is defined as the absolute value of the f⃗_m+n-coordinate of some (hence any) primitive vector v⃗∈ v, (v) = |ω(e⃗_m+n, v⃗)|, and that we denote by X^< R the full subcomplex of X on all vertices v satisfying (v) < R. This yields a filtration of X,
X(W) = X^< 1⊂…⊂ X^< R⊂…⊂ X,
interpolating between X(W), the full subcomplex of X on all vertices v contained in the submodule
W = ≪e⃗_1, f⃗_1, …, e⃗_m+n-1, f⃗_m+n-1, e⃗_m+n,
and the simplicial complex X. Our goal is to develop techniques that allow us to “reduce the rank”, i.e. to construct a map ϕ' M → X^< R from a given map ϕ M → X. An additional difficulty is to do this in such a way that desirable properties such as regularity (see <ref>) are preserved. More precisely, we will show the following:
Let n ≥ 2, m ≥ 0, and R > 0. Let be a vertex of with () = R. Assume we are given a commutative diagram of simplicial maps
S^k-1[r, "ϕ|"] [d, hook] _^<R() [d, hook]
D^k [r, "ϕ"] _()
such that ϕ is weakly regular[A precise definition for what this means is given in <ref>.]. Then there exists a combinatorial ball D^k_ with boundary sphere S^k-1 and a commutative diagram of simplicial maps
S^k-1[r, "ϕ|"] [d, hook] _^<R()
D^k_[ur, "ψ", swap]
such that ψ is weakly regular.
We work our way to this result through several subsections.
Recall that ⊂⊂⊂. We start by studying subcomplexes of in the first subsection. Then we move to , and , respectively. The results in each subsection are extensions of the ones from the previous subsection. A key idea in the arguments here is that forgetting the symplectic form yields inclusion maps
* ↪_2n+m^m,
* ↪_2n+m^m,
* ↪_2n+m^m and
* ↪^m_2n+m
(compare <ref>). The complexes on the right of these arrows were studied in the context of high-dimensional rational cohomology of the special linear group n. This is why, up to technicalities, “restricting” along the first three inclusion maps allows us to deduce our results for , and from work of Maazen <cit.>, Putman <cit.>, Church–Putman <cit.> and Brück–Miller–Patzt–Sroka–Wilson <cit.>.
When “reducing the rank” in , we additionally need to ensure that certain regularity properties are preserved. Overcoming this extra difficulty for requires a careful study of cross maps and is our main contribution here.
§.§ Reducing the rank in I
In this subsection, we explain how ideas due to Maazen <cit.> and Church–Putman <cit.> generalise to the symplectic setting, and how they can be used to “reduce the rank” in . Similar ideas were used by Putman in the proof of <cit.>. We use the notation introduced in <ref> and frequently apply the notions defined in <ref>. The following construction explains how to “reduce the rank” of vertices; it plays a key role throughout this section.
Let m,n ≥ 0, R > 0 and X^m_n be or any other complex in <ref>. Let Δ be a standard simplex of X^m_n such that some vertex ∈Δ satisfies () = R. Let v ∈(_X^m_n(Δ)) be a vertex. Recall that v̅∈{v⃗, - v⃗} is a primitive vector in v satisfying (v̅) ≥ 0 (see <ref>). Then we define (v) ∈(_X^m_n^< R(Δ)) to be the vertex of _X^m_n^< R(Δ) given by
(v) ≪v̅ - a
where a ∈ is chosen such that (v̅ - a ) ∈ [0, R) and determined by the Euclidean algorithm.
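For a concrete instance of this division step, write rk(–) for the rank and w̅ for the primitive vector that is subtracted in the formula above; under the natural reading of the construction, w̅ represents the distinguished vertex of Δ of rank R and has nonnegative f⃗_m+n-coordinate (this notation and the numerical values below are for illustration only). The f⃗_m+n-coordinate of v̅ - a w̅ is then rk(v̅) - aR, so the Euclidean algorithm chooses a = ⌊rk(v̅)/R⌋. For example, if R = 5 and rk(v̅) = 13, then a = 2 and
rk(v̅ - 2 w̅) = 13 - 2 · 5 = 3 ∈ [0, 5),
i.e. the retracted vertex has rank equal to the remainder of rk(v̅) modulo R.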
The goal of this subsection is to show that for X^m_n = the map between the vertex set of _(Δ) and _^< R(Δ) is simplicial.
Let n ≥ 0, m ≥ 0 and R > 0. Let Δ be a simplex of such that some vertex of Δ satisfies () = R. Then the map ρ defined in <ref> is a simplicial retraction
_(Δ) ↠^< R_(Δ).
<ref> is a symplectic analogue of the following result by Church–Putman.
Let n ≥ 0, m ≥ 0, and R > 0. Let Δ be a simplex of ^m_2n+m such that some vertex of Δ satisfies () = R. Then the map ρ defined in <ref> is a simplicial retraction
_^m_2n+m(Δ) ↠^< R_^m_2n+m(Δ).
To deduce <ref> from <ref>, we study the effect of <ref> on symplectic information. This is the content of the next lemma.
Let n,m ≥ 0 and R > 0. Let Δ be a standard simplex of such that some vertex of Δ satisfies () = R. A simplex Δ' ∈_(Δ) is called non-additive if it is of type standard, σ, σ^2, skew-additive, or skew-σ^2.
Observe that in the setting of <ref>, Δ' is non-additive if and only if its image under the inclusion _(Δ) ↪_^m_2n+m(Δ) is a standard simplex. The next lemma is similar to <cit.>.
Let n ≥ 0, m ≥ 0 and R > 0. Let Δ be a standard simplex of such that some vertex of Δ satisfies () = R. Assume that Δ' ∈_(Δ) is a non-additive simplex of type
τ∈{standard, σ, σ^2, skew-additive, skew-σ^2}.
Then the set of vertices (Δ') forms a non-additive simplex of type τ and of the same dimension as Δ' in . Here, is the map of sets defined in <ref>.
By <ref> and <ref>, it follows that (Δ') is a standard simplex in ^m_2n+m. The proof of <cit.> shows that (Δ') has the same dimension as Δ'. Hence, we only need to see that the retraction map preserves the symplectic relations. By the definition of _(Δ), it holds that
⟨v⃗| v ∈Δ' ⟩⊂^⊥,
where ^⊥ is the symplectic complement of in ^2(m+n).
This implies that
ω(v̅, v̅') = ω(v̅ - a , v̅' - a' )
for all v, v' ∈Δ' and any choice of a, a' ∈. Again by the definition of _(Δ) and since Δ is a standard simplex of , it holds that
ω(u̅, v̅ - a ) = 0
for any v ∈Δ, u ∈Δ' and any a ∈. These three observations together yield that (Δ') is a non-additive simplex in of the same type as Δ'.
Consider the inclusion ↪^m_2n+m. It restricts to the inclusion
_(Δ) ↪_^m_2n+m(Δ).
<ref> yields a simplicial retraction
_^m_2n+m(Δ) ↠^<R_^m_2n+m(Δ)
defined as in <ref>.
By <ref>, it restricts to a retraction
_(Δ) ↠^<R_(Δ).
§.§ Reducing the rank in I^{σ, δ}
In this subsection, we explain how ideas due to Church–Putman <cit.> generalise to the symplectic setting, and how they can be used to “reduce the rank” in . Similar ideas are used by Putman in the proof of <cit.>. We continue to use the notation introduced in <ref> and frequently apply the notions defined in <ref> to refer to simplices in _(). The goal of this subsection is to prove the following symplectic analogue of Church–Putman <cit.>.
Let n ≥ 1, m ≥ 0 and let R > 0. Let be a vertex of such that () = R. Then there exists a subdivision (_()) of _() containing _^< R() as a subcomplex and a simplicial retraction
(_()) ↠^< R_()
that extends the retraction introduced in <ref>.
The following is based on <cit.>. Let Δ = {v_0, …, v_k} be an internal 2-additive simplex in _() with augmentation core given by Θ = {v_0, v_1, v_2}. We may assume that v̅_0 = v̅_1 + v̅_2. In the setting of <ref>, Δ is called carrying if
⌊(v̅_0)/R⌋≠⌊(v̅_1)/R⌋ + ⌊(v̅_2)/R⌋.
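Writing rk(–) for the rank, the relation v̅_0 = v̅_1 + v̅_2 together with the convention that the representatives have nonnegative f⃗_m+n-coordinate gives rk(v̅_0) = rk(v̅_1) + rk(v̅_2), so Δ is carrying exactly when adding the ranks of v̅_1 and v̅_2 produces a carry in base R. For an illustration with hypothetical values, take R = 5: if rk(v̅_1) = 3 and rk(v̅_2) = 4, then rk(v̅_0) = 7 and
⌊7/5⌋ = 1 ≠ 0 + 0 = ⌊3/5⌋ + ⌊4/5⌋,
so Δ is carrying; if instead rk(v̅_1) = 6 and rk(v̅_2) = 2, then rk(v̅_0) = 8 and ⌊8/5⌋ = 1 = ⌊6/5⌋ + ⌊2/5⌋, so Δ is not carrying.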
The subdivision (_()) of _() is obtained by placing a new vertex t(Θ) at the barycentre of every carrying minimal 2-additive simplex Θ. This subdivides Θ into three simplices Θ_1, Θ_2 and Θ_3 involving t(Θ). The subdivision is then extended to every carrying 2-additive simplex Δ = Θ∗Δ' with augmentation core Θ by subdividing Δ into three simplices Θ_1 ∗Δ', Θ_2 ∗Δ' and Θ_3 ∗Δ'. In particular, only internal 2-additive simplices with carrying augmentation core get subdivided when passing from _() to (_()).
On vertices v ∈_(), the retraction in <ref> agrees with the retraction for (see <ref>). Recall that v̅∈{v⃗, - v⃗} denotes a primitive vector in v satisfying (v̅) ≥ 0 (see <ref>). Then (v) = ≪v̅ - a where a ∈ is chosen such that
(v̅ - a ) ∈ [0, R).
On the new vertices t(Θ) ∈(_()) sitting at the barycentres of minimal carrying internal 2-additive simplices Θ = {v_0, v_1, v_2}∈_() (see <ref>), the vertices {v_0, v_1, v_2} are used to define (t(Θ)) as follows: As in <ref>, we may assume that v̅_0 = v̅_1 + v̅_2. Furthermore, if Θ is carrying one has (v̅_1), (v̅_2) > 0 (see <cit.>). Therefore, v_0 is the unique vertex in Θ maximising (-) among {v_0, v_1, v_2}. Pick an arbitrary index i ∈{1,2}. Then the value of at t(Θ) is defined as
(t(Θ)) ⟨(v_i) - ⟩.
One can check that 0 < ((v_i)) < R (see <cit.>). This implies that
((t(Θ))) = R - ((v_i)) < R,
i.e. that (t(Θ)) ∈_^< R().
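For instance, continuing with hypothetical values and writing ρ for the retraction and rk(–) for the rank: let R = 5 and let Θ = {v_0, v_1, v_2} be carrying with rk(v̅_1) = 3, rk(v̅_2) = 4 and hence rk(v̅_0) = 7, so v_0 is indeed the unique vertex maximising rk(–). The vertices v_1 and v_2 have rank smaller than R and are therefore fixed by ρ, and choosing i = 1 the identity above gives
rk(ρ(t(Θ))) = R - rk(ρ(v_1)) = 5 - 3 = 2 < 5,
so the barycentric vertex t(Θ) is indeed sent into the rank-< R part of the link.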
<ref> is a consequence of the following.
Let n ≥ 1, m ≥ 0 and let R > 0. Let be a vertex of ^m_2n+m such that () = R. Then there exists a subdivision (_^m_2n+m()) of _^m_2n+m() containing _^m_2n+m^< R() as a subcomplex and a simplicial retraction
(_^m_2n+m()) ↠^< R_^m_2n+m()
extending the retraction introduced in <ref>.
In <ref>, the subdivision (_^m_2n+m()) of _^m_2n+m() and the retraction are defined as discussed in <ref> and <ref>.
Combining this result with <ref> yields <ref>.
The inclusion ↪^m_2n+m restricts to an inclusion _() ↪_^m_2n+m(). By <ref>, there is a simplicial retraction
(_^m_2n+m()) ↠^< R_^m_2n+m().
Let (_()) be the restriction of the subdivision (_^m_2n+m()) of _^m_2n+m() to the subcomplex _(). Restriction yields a simplicial map
(_()) →^< R_^m_2n+m().
We claim that the image of each simplex Δ in (_()) is contained in ^< R_(). <ref> implies that if Δ∈_^m_2n+m() is a standard or σ simplex then so is (Δ). If Δ is a (possibly carrying) 2-additive simplex, then the image ((Δ)) consists of standard and 2-additive simplices in ^< R_^m_2n+m(). By definition (see <ref>), every vertex in ((Δ)) is contained in the isotropic summand ≪Δ∪{, e⃗_1, …, e⃗_m}_. In particular, the simplices in ((Δ)) form standard or 2-additive simplices in ^<R_(), and cannot be of σ-type. It follows that restricting the codomain yields a well-defined map
(_()) →^<R_().
This concludes the proof of the proposition, because this map is by definition the identity on ^< R_() (see <ref>).
§.§ Reducing the rank in IA
In this subsection, we explain how one can extend <ref> to . We continue to use the notation introduced in <ref> and frequently apply the notions defined in <ref> to refer to simplices in _(). The goal of this subsection is to prove the following.
Let n ≥ 1, m ≥ 0 and let R > 0. Let be a vertex of such that () = R. Then there exists a subdivision (_()) of _() containing _^< R() as a subcomplex and a simplicial retraction
(_()) ↠^< R_()
extending the retraction introduced in <ref>.
Recall that is obtained from by attaching mixed simplices. For the proof of <ref>, we hence only need to verify that the retraction for (see <ref>) extends over mixed simplices.
The following is based on <cit.>. We explain how the subdivision (_()), described in <ref>, can be extended over mixed simplices to obtain the subdivision (_()) of _() in <ref>. For this, let Δ be mixed simplex in _(). Then Δ = Θ∗Δ', where Θ is a minimal 2-additive simplex and Δ' is a σ simplex. If Θ is internal minimal 2-additive and carrying in the sense of <ref>, it is subdivided into three simplices Θ_1, Θ_2 and Θ_3 in (_()) by placing a new vertex t(Θ) at its barycentre. Hence, to extend (_()) to (_()), we subdivide such carrying mixed simplices Δ into three simplices Θ_1 ∗Δ', Θ_2 ∗Δ' and Θ_3 ∗Δ'.
On the vertices v ∈_() and the new vertices t(Θ) ∈(_()) (see <ref>) the retraction in <ref> is defined exactly as described in <ref>.
We are now ready to prove <ref>.
Consider the inclusion ↪^m_2n+m. It restricts to an inclusion _() ↪_^m_2n+m(). By <ref>, there is a simplicial retraction
(_^m_2n+m()) ↠^< R_^m_2n+m().
In the proof of <ref>, we explained why restricts to a retraction
(_()) →^< R_().
Hence, it suffices to focus on mixed simplices Δ∈_(). Note that Δ can be written as Δ = Θ∗Δ', where Θ is a minimal 2-additive simplex and Δ' is a σ simplex. In particular, forgetting the symplectic information yields a 2-additive simplex Δ∈_^m_2n+m() that might be subdivided when passing to (_^m_2n+m()) (see <ref>). We know that ((Δ)) ∈^< R__2n+m^m(), and need to check that ((Δ)) ∈^< R_().
To see this, we first note that by <ref> the image (Δ') of Δ' is a σ simplex of the same dimension as Δ'.
If Θ is not carrying, then (Δ) = (Θ) ∗(Δ') is either a standard simplex or a 2-additive with augmentation core (Θ) in ^< R__2n+m^m() (see <cit.>). Therefore, we conclude that (Δ) = (Θ) ∗(Δ') is either a σ simplex or a mixed simplex in _^<R().
If Θ is carrying, then Θ is subdivided into three simplices Θ_1, Θ_2, Θ_3 when passing to (_^m_2n+m(v)) (exactly as described in <ref>). These three simplices have the property that (Θ_i) is either internal 2-additive or -related 2-additive (compare with <cit.>) for 1 ≤ i ≤ 3. Therefore, (Θ_i∗Δ') = (Θ_i) ∗(Δ') is a mixed simplex for 1 ≤ i ≤ 3. We conclude that the retraction for ^m_n restricts to a simplicial retraction
(_()) ↠^< R_().
§.§ Reducing the rank in IAA*
In this subsection, we explain how work of Brück–Miller–Patzt–Sroka–Wilson <cit.> generalises to the symplectic setting, and how it can be used to “reduce the rank” in . We continue to use the notation introduced in <ref> and frequently apply the notions defined in <ref> to refer to simplices in _(). The goal of this subsection is to prove the following extension of <ref>.
Let n ≥ 1, m ≥ 0, and let R > 0. Let be a vertex of such that () = R. Then there exists a subdivision (_()) of _() containing _^< R() as a subcomplex and a simplicial retraction
(_()) ↠^< R_()
extending the retraction introduced in <ref>.
<ref> is a consequence of the following.
Let n ≥ 1, m ≥ 0 and R > 0. Let be a vertex of _2n+m^m such that () = R. Then there exists a subdivision (__2n+m^m()) of __2n+m^m() containing __2n+m^m^< R() and a simplicial retraction
(__2n+m^m()) ↠__2n+m^m^< R()
extending the retraction introduced in <ref>.
The following is based on <cit.>. In <ref>, the subdivision (_()) of _() has the property that only certain “carrying” 2-additive, mixed, 3-additive, double-double and double-triple simplices are subdivided. Carrying 2-additive and mixed simplices are defined exactly as in <ref> and <ref>. The relevant properties of the other carrying simplices are discussed in <ref> below. Using the inclusion _() ↪__2n+m^m(), the subdivision (_()) is obtained as a restriction of the subdivision (__2n+m^m()) in <ref> to _(). The subdivision (_()) of _() occurring in <ref> is a subcomplex of (_()) (compare with <cit.>).
The following is based on <cit.>. The restriction of the map in <ref> to (_()) yields the retraction in <ref> (compare <cit.>). Furthermore, if Δ is a 2-additive, 3-additive, double-double or double-triple simplex, then Δ = Θ∗Δ', where Θ is the augmentation core of Δ and Δ' is a standard simplex. If Θ is “carrying”, then the subdivision (Δ) of Δ is of the form (Δ) = (Θ)∗Δ' for some subdivision (Θ) of Θ. In this case, it holds that ((Δ)) = ((Θ)) ∗(Δ') and that every vertex (v) contained in ((Θ)) satisfies (v) ⊂≪Θ∪{, e⃗_1, …, e⃗_m}.
Combining <ref> with <ref>, we can prove <ref>.
The inclusion ↪_2n+m^m restricts to an inclusion
_() ↪__2n+m^m().
By <ref>, there is a simplicial retraction
(__2n+m^m()) ↠^<R__2n+m^m().
Let (_()) be the restriction of the subdivision (__2n+m^m()) of __2n+m^m() to the subcomplex _(). Restricting the domain of to (_()) yields a simplicial map
(_()) →^<R__2n+m^m().
We claim that the image (Δ) of each simplex Δ in (_()) is contained in ^<R_(). By <ref> and because the retraction in <ref> is an extension of the one in <ref> (see <ref> and <ref>), this holds for all simplices except possibly 3-additive, double-double and double-triple simplices. If Δ is a (possibly “carrying”) simplex of this type in _(), then the image ((Δ)) consists of standard, 2-additive, 3-additive, double-double and double-triple simplices in ^<R__2n+m^m(). By definition (see <ref>), every vertex in ((Δ)) is contained in the isotropic summand ≪Δ∪{, e⃗_1, …, e⃗_m} of ^2(m+n). In particular, the simplices contained in ((Δ)) form standard, 2-additive, 3-additive, double-double or double-triple simplices in ^<R_(). It follows that restricting the codomain yields a well-defined map
(_()) →^<R_().
This completes the proof.
§.§ Reducing the rank in IAA
In this final subsection, we explain how one can “reduce the rank” in by proving <ref>. In contrast to the previous subsections, it is important that we are able to reduce the rank of maps ϕ M →_() in such a way that weak regularity properties are preserved. This is used later, in the proof that the complex is highly connected, to construct regular maps in ; see the proof of <ref>. Throughout this subsection, we make the standing assumption that
n≥ 2, m≥ 0 and R > 0. (Standing assumption)
In <ref>, we explained what weak regularity means for maps with codomain . The following definition clarifies what we mean by a weakly regular map with codomain _().
Let M be a combinatorial manifold, ∈ be a vertex and consider a simplicial map ϕ M →_(). Then ϕ is called weakly regular if for some (hence any) symplectic isomorphism ^2(m+n)→^2(m+n) fixing e_1, …, e_m and sending to e_m+1, the induced map
ϕ M →_() ≅[n-1][m+1]
is weakly regular in the sense of <ref>.
Equivalently, one can adapt the definition of cross maps and regularity in <ref> to maps ϕ M →_() as follows:
* In the definition of cross maps (see <ref>), replace by _(), and allow ∈{e_1, …, e_m, } in the part concerning external 2-skew-additive cross maps.
* In the definition of regular maps (see <ref>), replace by _(), and allow ∈{e_1, …, e_m, } in <ref> of the part concerning weakly prism-regular.
In contrast to <ref>, <ref> and <ref>, the authors have not been able to extend the retraction in <ref> over a subdivision of _(). In particular, we did not find a suitable definition of the retraction on the following simplices: Let Δ = {v_0, v_1, v_2, v_3} be a minimal internal 2-skew-additive simplex in _(). The 2-additive face {v_0, v_1, v_2} of Δ contains a unique vertex v̂∈{v_0, v_1, v_2} such that {v̂, v_3} is not a σ edge. We say that a 2-skew-additive simplex with augmentation core Δ is inessential if (v̂) is the unique maximum of (-) on {v_0, v_1, v_2}.
The proof of <ref> consists of three steps: In Step 1, we replace ϕ by a “nice” weakly regular map ϕ' with ϕ|_S^k-1=ϕ'|_S^k-1 . This is to avoid inessential simplices (see <ref>), i.e. the image of ϕ' does not contain any such simplices. In Step 2, we explain how one can extend the retraction for (see <ref>) over the image of cross maps that occur in such “nice” weakly regular maps ϕ'. In Step 3, this and Zeeman's relative simplicial approximation theorem (<ref>) are used to construct ψ from ϕ'.
§.§.§ Step 1: Essential prisms in weakly regular maps
In this subsection, we explain how one can replace a weakly regular map ϕ into _() by a “nice” weakly regular map ϕ', whose image contains no inessential simplices (see <ref>). The weak regularity of ϕ (see <ref>) implies that every inessential 2-skew-additive simplex in image of ϕ has to be contained in the image of a prism cross map in ϕ. This leads us to the next definition.
Let ∈.
* An internal 2-skew-additive simplex in _() with augmentation core given by Δ = {v_0, v_1, v_2, v_3} is called essential if there exists a vertex m^v of the 2-additive face {v_0, v_1, v_2} of Δ such that {m^v, v_3} is a σ edge in Δ and (m^v) is the maximum value (-) takes on {v_0, v_1, v_2}. Otherwise, the simplex is inessential (compare with <ref>).
* Let ϕ D^k →_() be weakly regular. We call a pair (P, ϕ|_P), where P ⊂ D^k is a subcomplex, a prism in ϕ if ϕ|__D^k(P) is a prism cross map (compare with <ref>).
* Let ϕ D^k →_(v) be a weakly regular map and let (P, ϕ|_P) be a prism in ϕ. We say that (P, ϕ|_P) is essential if both internal 2-skew-additive simplices in ϕ(P) are essential (see <ref> for an equivalent, pictorial description of this condition). Otherwise, (P, ϕ|_P) is called inessential.
A weakly regular map ϕ D^k →_() is “nice” if every prism in ϕ is essential. The following lemma allows us to assume this property.
Let ∈ be a vertex with () = R > 0. Assume we are given a commutative diagram of simplicial maps
S^k-1[r, "ϕ|"] [d, hook] _^<R() [d, hook]
D^k [r, "ϕ"] _()
such that ϕ is (weakly) regular (see <ref>). Then there is a (weakly) regular map ϕ' D^k →_() such that ϕ'|_S^k-1 = ϕ|_S^k-1 and such that every prism in ϕ' is essential.
Let (P_3, ϕ|_P_3) be a prism in ϕ that is inessential. We show how to remove (P_3, ϕ|_P_3) from ϕ without introducing a new inessential prism.
Since ϕ is (weakly) regular, ϕ|__D^k(P_3) is a prism cross map and, using the notation in <ref>, we may write _D^k(P_3) = P_3 ∗_D^k(P_3) = P_3 ∗ C_k-3. <ref> and the fact that ∂ C_k-3 = ∅ imply that _D^k(P_3) is a k-ball with boundary given by
∂_D^k(P_3) = ∂ P_3 ∗ C_k-3.
As in <ref>, we denote the vertices of P_3 by x_1, x_2, x_12, y_1, y_2, y_12 and their images by v_1, v_2, v_12, w_1, w_2, w_12, where P_3 is union of the three 3-simplices
x_1, x_2, x_12, y_12, x_1, x_2, y_1, y_12, x_1, y_1, y_2, y_12 (P_3)
(see <ref>).
There are three σ edges in ϕ(P_3) and they form a path connecting the vertices w_1, v_1,w_12, v_2 = v_1,v_2, w_1, w_12.
Let m^v∈{v_1, v_2, v_12} = Θ^v and m^w ∈{w_1, w_2, w_12} = Θ^w be two vertices, whose rank (m^v) and (m^w) is maximal among Θ^v and Θ^w respectively. If both m^v and m^w can be chosen to be in v_1,v_2, w_1, w_12, then P_3 is essential. Hence, we can assume that at least one of them is not contained in this set. Without loss of generality[In order to assume this, we might need to relabel the vertices of P_3 using the isomorphism x_1 ↔ y_12, x_2 ↔ y_1, x_12↔ y_1.], assume that this is the case for m^w. Then, it holds that
m^w = w_2.
We need to consider two cases: Either
m^v ∈ v_1, v_2 or m^v = v_12.
We first discuss the case m^v ∈ v_1, v_2 and comment on the other case at the end of the proof.
Let D denote the simplicial complex with the same vertices as P_3 that is the union of the four 3-simplices
{x_1, x_2, x_12, y_12}, {x_1, x_2, y_2, y_12}, {x_2, y_1, y_2, y_12} and {x_1, x_2, y_1, y_2} (D)
(see <ref>).
Note that the subcomplex P_3' ⊂ D that is the union of the first three simplices
{x_1, x_2, x_12, y_12}, {x_1, x_2, y_2, y_12}, {x_2, y_1, y_2, y_12} (P_3')
is again a prism.
The complex D is a 3-ball and we have ∂ D = ∂ P_3.
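The equality of boundaries can be checked directly (a routine verification, spelled out for convenience): the interior triangles of P_3 are {x_1, x_2, y_12} and {x_1, y_1, y_12}, while those of D are {x_1, x_2, y_12}, {x_2, y_2, y_12}, {x_1, x_2, y_2} and {x_2, y_1, y_2}; removing these from the faces of the respective tetrahedra leaves, in both cases, the same eight boundary triangles
{x_1, x_2, x_12}, {x_1, x_12, y_12}, {x_2, x_12, y_12}, {x_1, x_2, y_1}, {x_2, y_1, y_12}, {x_1, y_1, y_2}, {x_1, y_2, y_12}, {y_1, y_2, y_12}.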
Consequently,
C D ∗ C_k-3
is a k-ball that has the same vertices as _D^k(P_3) and the same boundary
∂ C = ∂ (D ∗ C_k-3) = ∂ P_3 ∗ C_k-3 = ∂_D^k(P_3).
As C has the same vertices as _D^k(P_3), we can define a map
ψ C →_()
by setting it to be equal to ϕ|__D^k(P_3) on these vertices.
To see that this actually defines a simplicial map, one checks that ψ sends the four maximal simplices of D (see <ref>) to a 2-skew-additive simplex, a skew-σ^2 simplex, a 2-skew-additive simplex and a σ^2-simplex, respectively. The first three of these simplices form the image of the new prism ψ(P_3'). It is not hard to see that _C(P_3') = P_3' ∗ C_k-3 and that the restriction of ψ to this is a prism cross map. Similarly, the restriction of ψ to {x_1, x_2, y_1, y_2}∗ C_k-3 is a σ^2 cross map. It follows that ψ is regular.
There is only one prism in ψ, namely (P_3',ψ_P_3').
Recall that in the first case m^v ∈ v_1, v_2. It follows that, under this assumption, the prism (P_3',ψ_P_3') is essential because the vertices m^v and m^w = w_2 are contained in the σ edge path in ψ(P_3') (see <ref>).
As ψ agrees with ϕ on ∂_D^k(P_3) = ∂ C, we can alter ϕ by replacing the k-ball _D^k(P_3) in D^k with the k-ball C and replacing ϕ|__D^k(P_3) with ψ. The result is a new map D^k→_(). It has one less inessential prism than ϕ because the replacement removed (P_3, ϕ_P_3) and only introduced the essential prism (P_3',ψ_P_3'). Furthermore, the new map is still (weakly) regular because ψ is regular and domains of cross maps can only intersect in their boundaries (<ref>). It agrees with ϕ on ∂
D^k = S^k-1 because S^k-1 can intersect C only in its boundary ∂ C = ∂_D^k(P_3), where ψ agrees with ϕ.
It remains to discuss the second case m^v = v_12. Observe that, under this assumption, the two vertices m^v and m^w = w_2 are not contained in the σ edge path in image of the prism (P_3',ψ_P_3') constructed in the first case. To resolve this, we apply the same procedure as above again, now to (P_3',ψ_P_3') instead of (P_3,ϕ_P_3): We define D' as the union of the four 3-simplices
{x_1, x_2, x_12, y_2}, {x_2, x_12, y_2, y_12}, {x_2, y_1, y_2, y_12} and {x_1, x_12, y_2, y_12}. (D')
The first three of these simplices form a new prism P_3” and ∂ D' = P_3'. Just as before, we obtain a regular map ψ' D' ∗ C_k-3→_(). Note that the prism (P_3”,ψ'|P_3”) is essential because the vertices m^v = v_12 and m^w = w_2 are contained in the σ edge path in ψ'(P_3”). The rest of the argument works as above. For an overview of the replacement process, see <ref>.
We explained how to replace ϕ with a map that has one less inessential prism. Iterating this procedure and successively removing all inessential prisms, we obtain the desired map ϕ'.
§.§.§ Step 2: Reducing the rank of cross maps
In order to prove <ref>, we need to explain how one can reduce the rank of weakly regular maps ϕ D^k →_() in which every prism is essential (compare <ref>). To do this, we show that the retraction for (see <ref>) extends over all, except inessential, simplices and record the effect of this retraction on cross maps, i.e. the building blocks of weak regularity (see <ref>). We continue to use the notation introduced in <ref> and frequently apply the notions defined in <ref> to refer to simplices in _().
We start by explaining why the retraction for (see <ref>) extends over all simplices of non-additive type (see <ref>).
Let ∈ be a vertex with () = R > 0. The composition
(_()) →^< R_() ↪^< R_()
of the retraction defined in <ref> and the inclusion map extends simplicially over all σ^2, skew-additive and skew-σ^2 simplices Δ∈_().
Let τ∈{σ^2, skew-additive, skew-σ^2} in the following. Let Δ be a simplex of type τ in _(). Then Δ = Θ∗Δ' for a minimal Θ of type τ and a standard simplex Δ'. Recall from <ref> that the only simplices that were subdivided to obtain (_()) from _() are 2-additive, mixed, 3-additive, double-double and double-triple simplices.
* Let τ∈{σ^2, skew-additive}. It follows that ∂Θ∗Δ' is a subcomplex of the subdivision (_()). The retraction on ∂Θ∗Δ' is hence defined as described in <ref>. Note that Δ is a non-additive simplex in the sense of <ref>. Hence, <ref> implies that extends over Δ and that (Δ) = (Θ) ∗(Δ') ∈^< R_() is again a simplex of type τ of the same dimension as Δ.
* Let τ = skew-σ^2. Then ∂Θ∗Δ' consists of standard, σ and skew-additive simplices. The retraction is defined on these simplices by the previous item. Now, Δ is again a non-additive simplex in the sense of <ref>. Hence, <ref> implies that extends over Δ and that (Δ) = (Θ) ∗(Δ') ∈^< R_() is a simplex of type skew-σ^2 of the same dimension as Δ.
The next lemma shows that the retraction extends over external 2-skew-additive simplices.
Let ∈ be a vertex with () = R > 0. The composition
(_()) →^< R_() ↪^< R_()
of the retraction defined in <ref> and the inclusion map extends simplicially over all external 2-skew-additive Δ = Θ∗Δ' ∈_(), where Θ is a minimal 2-skew-additive simplex of the form Θ = {v_0 , ⟨v⃗_0 ±⟩, v_1} for ∈{e_1, …, e_m, }, ω(v⃗_0, v⃗_1) = ± 1, and Δ' is a standard simplex.
Note that ∂Θ∗Δ' consists of external 2-additive and σ simplices. Since neither of these simplices can be carrying in _() (see <ref>), it follows that ∂Θ∗Δ' is contained in (_()). Forgetting symplectic information, Δ = Θ∗Δ' ∈__2n+m() is an external 2-additive, hence non-carrying, simplex. It follows from <ref> (compare <cit.>) that ρ(Δ) = ρ(Θ) ∗ρ(Δ') ∈__2n+m^<R() is a standard or (external) 2-additive simplex with augmentation core ρ(Θ). We need to check that ρ(Δ) = ρ(Θ) ∗ρ(Δ') also forms a simplex in ^< R_().
Since Δ' is a standard simplex in _(), <ref> implies that ρ(Δ') is a standard simplex of the same dimension in ^< R_(). We can hence focus on ρ(Θ).
Firstly, assume that =. Then either (⟨v⃗_0 ±⟩) = (v_0), or (v_0) = v_0 and (⟨v⃗_0 ±⟩) = ⟨(v_0)±⟩ (see <cit.>). In the first case, (Θ) = {(v_0), (v_1)} is a σ-edge contained in _^<R(). In the second case, (Θ) = {(v_0) = v_0, (⟨v⃗_0 ±⟩) = ⟨v⃗_0 ±⟩, (v_1)} is a 2-skew-additive simplex of dimension 2 in ^< R_(). We conclude that ρ(Δ) = ρ(Θ) ∗ρ(Δ') is either a σ simplex or a -related 2-skew-additive simplex in ^< R_().
Secondly, we observe that if = e_i for some 1 ≤ i ≤ m, then (⟨v⃗_0 ±e⃗_i ⟩) = ⟨(v_0)± e_i ⟩. Therefore, (Θ) = {(v_0), (⟨v⃗_0 ±e⃗_i ⟩) = ⟨(v_0)±e⃗_i ⟩, (v_1)} and ρ(Δ)=ρ(Θ) ∗ρ(Δ') are external 2-skew-additive simplices in ^< R_().
Our next goal is to extend the retraction over σ-additive simplices. To do this, we need to subdivide all “carrying” σ-additive simplices as explained in the following remark.
Let ∈ with () = R > 0 and let Δ∈_() be a σ-additive simplex. Forgetting symplectic information yields an internal 2-additive simplex Δ∈_^m_2n+m() which might be carrying in the sense of <ref>. Define (Δ) and the retraction on (Δ) exactly as for such internal 2-additive simplices. I.e. if Δ is carrying, (Δ) is the union of three simplices (see <ref>) on which the retraction is defined as in <ref>. Otherwise, (Δ) = Δ and the value of (Δ) is determined by the values on the vertex set as in <ref>.
Let ∈ with () = R > 0. The composition
(_()) →^< R_() ↪^< R_()
of the retraction defined in <ref> and the inclusion map extends simplicially over the subdivision (Δ) of all σ-additive simplices Δ∈_(). Here, (Δ) and the value of ρ on (Δ) are defined as in <ref>.
Let Δ be a σ-additive simplex in _(). Then Δ = Θ∗Δ' for a minimal σ-additive simplex Θ and a standard simplex Δ'. Note that ∂Θ∗Δ' consists of σ and standard simplices. Hence it follows that ∂Θ∗Δ' is contained in the subdivision (_()), even in the subcomplex (_()). The retraction for is an extension of the retraction on .
We recall that the retraction on (_()) was defined using the inclusion
↪^m_2n+m.
Forgetting symplectic information, Δ corresponds to an internal 2-additive simplex in ^m_2n+m. We check that the retraction for ^m_2n+m (see <ref>) can be used to extend the retraction for over σ-additive simplices: Let Θ = {v_0,v_1,v_2}.
First assume that Θ is not a carrying internal 2-additive simplex in _^m_2n+m() (in the sense of <ref>). Then (Δ) = (Θ) ∗(Δ') is an internal 2-additive simplex in _^m_2n+m^< R() with additive core (Θ) (see <cit.>). Arguing as in <ref> it follows that (Θ) forms a σ-additive simplex in _^< R(), because Θ is a minimal σ-additive simplex in _() by assumption, (v_i) = ≪v̅_i - a_i for some a_i and ω(, v̅_i) = 0 for all i.
Now assume that Θ is a carrying internal 2-additive simplex in _^m_2n+m() (in the sense of <ref>). Following Church–Putman's construction, we subdivide Δ = Θ∗Δ' into three simplices Θ_i ∗Δ' with
Θ_i = {t(Θ)}∪{v_j | 0 ≤ j ≤ 2 and j ≠ i }
(compare with <ref>) and define the retraction on t(Θ) exactly as in <ref>. Without loss of generality, let us assume that v_0 = ⟨v̅_1 + v̅_2⟩ is the unique vertex in Θ that maximises (-) and that (t(Θ)) = ⟨(v_0) - ⟩. It follows from <cit.> that in ^m_2n+m, these three simplices are mapped to one internal 2-additive simplex (Θ_1) ∗(Δ') and two -related 2-additive simplices (Θ_i) ∗(Δ') for i = 0 and i = 2. The image of the Θ_i has the following form:
* (Θ_1) = {(v_0), (t(Θ)), (v_2)} = {(v_0), ≪(v_0) - (v_2), (v_2)}
(compare with π(β) on <cit.>).
* (Θ_0) = {(t(Θ)), (v_1), (v_2)} = {≪(v_1) - , (v_1), (v_2)}
(compare with π(α) on <cit.>).
* (Θ_2) = {(v_0), (v_1), (t(Θ))} = {(v_0), (v_1), ≪(v_1) - }
(compare with π(γ) on <cit.>).
Recall that Θ = {v_0,v_1,v_2} forms a minimal σ-additive simplex in the complex _(). Hence, ω(v⃗_i, v⃗_j) = ± 1 for any 0 ≤ i ≠ j ≤ 2. Taking this symplectic information into account, we conclude that (Θ_1 ∗Δ') = (Θ_1) ∗(Δ') is σ-additive, and that (Θ_i ∗Δ') = (Θ_i) ∗(Δ') for i = 0 and i = 2 are external 2-skew-additive simplices in _^< R(). Hence, extends over (Δ).
The three previous lemmas and their proofs have the following consequence.
Let ∈ be a vertex with () = R > 0 and let denote the extension of the retraction of over σ^2, skew-additive, skew-σ^2, σ-additive and external 2-skew-additive simplices constructed in <ref>, <ref>, <ref> and <ref>. Assume that ϕ is a …
* … σ^2 cross map ϕΔ^1 ∗Δ^1 ∗ C_k-2→_();
* … external 2-skew-additive cross map ϕΔ^2 ∗ C_k-1→_(); or
* … σ-additive cross map ϕΔ^2 ∗ C_k-1→_().
Then
* ψ = ∘ϕΔ^1 ∗Δ^1 ∗ C_k-2→_^< R() is a σ^2 cross map;
* ψ = ∘ϕΔ^2 ∗ C_k-1→_^< R() has image in _^<R() or is an external 2-skew-additive cross map; or
* ψ = ∘ϕ(Δ^2) ∗ C_k-1→_^< R() is a weakly regular map.
Here, (Δ^2) is the subdivision of the 2-simplex Δ^2 into three simplices obtained by placing the new vertex t(Δ^2) at the barycentre of Δ^2 if ϕ(Δ^2) is carrying and (Δ^2) = Δ^2 otherwise (compare with <ref>).
Note that a σ-additive cross map ϕ might only yield a weakly regular map ψ = ∘ϕ. The reason for this is that if the minimal σ-additive simplex in the image of ϕ is carrying, then the image of ψ = ∘ϕ contains two external 2-skew-additive simplices. This has been made explicit at the end of the proof of <ref>.
The next and final lemma shows that the retraction extends over the image of any prism cross map whose prism is essential (see <ref>). In light of <ref>, this suffices for our purposes. We use the following notation, which is illustrated in <ref>.
Let P_3 be the prism on the vertex set x_1, x_2, x_12, y_1, y_2, y_12 introduced in <ref>. Recall that P_3 is the union of the three 3-simplices x_1, x_2, x_12, y_12, x_1, x_2, y_1, y_12 and x_1, y_1, y_2, y_12 (see <ref>).
* We let P_3^♢ be the subcomplex of P_3 obtained by removing the two simplices x_1, x_2, x_12, y_12 and x_1, y_1, y_2, y_12 that are mapped to 2-skew-additive simplices by a prism cross map. All other simplices, including all proper faces of x_1, x_2, x_12, y_12 and x_1, y_1, y_2, y_12, are contained in P_3^♢.
Let ∈ and assume that ϕ P_3 ∗ C_k-2→_() is a prism cross map (see <ref>). Then ϕ({x_1, x_2, x_12}) and ϕ({y_1, y_2, y_12}) are internal 2-additive simplices in the complex _() that might be carrying or not.
* We denote by (P_3^♢) the subdivision of P_3^♢ obtained by placing new vertices t( x_1, x_2, x_12) and t({y_1, y_2, y_12}) at the barycentre of x_1, x_2, x_12 and {y_1, y_2, y_12}, respectively, depending on whether ϕ({x_1, x_2, x_12}),
ϕ({y_1, y_2, y_12}), neither or both are carrying.
Let ∈ be a vertex with () = R > 0 and let denote the extension of the retraction for over σ^2, skew-additive, skew-σ^2, σ-additive and external 2-skew-additive simplices constructed in <ref>, <ref> and <ref>.
Assume that
ϕ P_3 ∗ C_k-2→_()
is a prism cross map such that (P_3, ϕ|_P_3) is essential (in the sense of <ref>). Then the simplicial map
ψ|_P_3^♢∗ C_k-2 = ∘ϕ|_(P_3^♢) ∗ C_k-2(P_3^♢) ∗ C_k-2→_^< R()
can be extended to a weakly regular map
ψ(P_3) ∗ C_k-2→_^< R(),
where (P_3) is a subdivision of P_3 containing (P_3^♢).
The proof of <ref> actually shows that the retraction extends over all essential internal 2-skew-additive simplices in _(). I.e. the extension over such simplices, constructed in the proof of <ref>, does not rely on the ambient prism cross map.
Throughout this proof we use the notation introduced in <ref> and, for prism cross maps, in <ref>. We need to explain how to extend ψ|_P_3^♢∗ C_k-2 over the two simplices x_1, x_2, x_12, y_12 and x_1, y_1, y_2, y_12 of P_3, which are not contained in (P_3^♢) and whose image under ϕ are essential internal 2-skew-additive simplices. We focus on Δ = x_1, x_2, x_12, y_12 and write Θ = x_1, x_2, x_12 for the face with the property that ϕ(Θ) is an internal 2-additive simplex in _(). The extension over x_1, y_1, y_2, y_12 is completely analogous.
There are two cases depending on whether the image of Θ,
ϕ(Θ) = {v_1, v_2, v_12 = ≪v⃗_1 ±v⃗_2 },
is a carrying 2-additive simplex in _() (see <ref>) or not.
First assume that ϕ(Θ) is not carrying. Then it follows from <ref> that the simplex ϕ(Θ) ∈_() is not subdivided when passing from _() to (_()). In this case <cit.> implies that (∘ϕ)(Θ) is an internal 2-additive simplex. Taking the symplectic information into account (see <ref>), we conclude that
(∘ϕ)(Δ) = {(v_1), (v_2), (v_12) = ≪(v_1)±(v_2), (w_12)}
is an internal 2-skew-additive simplex. Therefore, for any simplex Δ' ∈ C_k-2, the image (∘ϕ)(Δ∗Δ') is a 2-skew-additive simplex and ψ|_P_3^♢∗ C_k-2 extends over Δ.
Now assume that ϕ(Θ) is carrying. This situation is depicted in <ref>. In this case, it follows that the simplex ϕ(Θ) ∈_() is subdivided into three simplices when passing from _() to (_()) (see <ref>). Recall that the definition of the retraction on the subdivision of a carrying simplex depends on which vertex of ϕ(Θ) = {v_1, v_2, v_12} maximises (-) (see <ref>). I.e. if ϕ(Θ) is carrying, then there exists a unique vertex m ∈{v_1, v_2, v_12} such that
(m) is maximal in {(v_1), (v_2), (v_12)}
(compare <cit.>).
Since (P_3, ϕ|_P_3) is essential, ϕ(Δ) is an essential internal 2-skew-additive simplex in the sense of <ref>. We therefore have m = v_1 or m = v_2 because
v_12 is not contained in a σ edge in ϕ(Δ) (E)
(compare <ref>).
We assume that m = v_1 (without loss of generality, compare <ref>). Then, we have that v̅_1 = v̅_2 + v̅_12 and (v_1) = (v_2) + (v_12) - (compare <cit.>). Furthermore, it holds that ((v_2)), ((v_12)) > 0 (compare <ref> and <cit.>).
Let v^†∈{v_2, v_12} denote the arbitrary choice made in the definition of the retraction (see <ref>). Using <ref>, the image of the new vertex t(Θ) in (Θ) under the retraction is given by
(t(Θ)) = ≪(v^†) - .
This implies that the image of the subdivided simplex (Θ) under the map ψ|_(P_3^♢) = ∘ϕ is the union of the following three 2-additive simplices:
* ψ|_(P_3^♢)(Θ_1) = {≪(v^†) - , (v_2), (v_12)}, which is -related 2-additive;
* ψ|_(P_3^♢)(Θ_2) = {≪(v^†) - , (v_1), (v_12) }, which is -related 2-additive if v^† = v_12 and internally 2-additive otherwise;
* ψ|_(P_3^♢)(Θ_3) = {≪(v^†) - , (v_1), (v_2) }, which is -related 2-additive if v^† = v_2 and internally 2-additive otherwise.
We now reduce the case v^† = v_2 to the case v^† = v_12, and then explain how to extend ψ|_P_3^♢∗ C_k-2.
Reduction to the case v^† = v_12: Assume that v^† = v_2. By <cit.>, we know that
Ω_2 = {(v_2), (v_12), (v_1)}∗{≪(v_2) - }
and
Ω_12 = {(v_2), (v_12), (v_1)}∗{≪(v_12) - }
are two double-triple simplices in _() sharing the 3-additive facet
{(v_2), (v_12), (v_1)}
(see <ref>).
This implies that Ω_2 ∗Ω and Ω_12∗Ω are double-triple simplices for every simplex Ω∈ C_k-2. Therefore, we can extend ψ|_P_3^♢∗ C_k-2 over (Ω_2 ∪Ω_12) ∗Ω for any simplex Ω∈ C_k-2. And hence, we may assume that v^† = v_12.
The case v^† = v_12:
To see that extends over Δ = x_1, x_2, x_12, y_12, we start by subdividing the simplex Δ∗Ω for any Ω∈ C_k-2 in a way that is compatible with the subdivision (Θ) of the face Θ = x_1, x_2, x_12⊂Δ into the simplices Θ_1, Θ_2, Θ_3 discussed above. Let Ω be any simplex in C_k-2. Then we subdivide Δ∗Ω into the following three simplices (compare <ref>):
* Δ_1 = Θ_1 ∗{y_12}∗Ω,
* Δ_2 = Θ_2 ∗{y_12}∗Ω and
* Δ_3 = Θ_3 ∗{y_12}∗Ω.
Note that ψ|_P_3^♢∗ C_k-2 = ∘ϕ is defined on the vertex set of these simplices. We claim that their image in ^< R_() always forms a simplex. This is exactly where we use the assumption v^† = v_12, i.e. for v^† = v_2 this claim is wrong. The key point is that ω(v⃗_12, w⃗_12) = 0 while ω(v⃗_2, w⃗_12) = ± 1 and ω(v⃗_1, w⃗_12) = ± 1 (compare <ref> and <ref>). Computing the image of each simplex, we find:
ψ(Δ_1) = ψ(Θ_1) ∗ψ({y_12}) ∗ψ(Ω)
= {≪(v_12) - , (v_2), (v_12)}∗{(w_12)}∗ψ(Ω),
which is an -related mixed simplex.
ψ(Δ_2) = ψ(Θ_2) ∗ψ({y_12}) ∗ψ(Ω)
= {≪(v_12) - , (v_1), (v_12)}∗{(w_12)}∗ψ(Ω),
which is an -related mixed simplex.
ψ(Δ_3) = ψ(Θ_3) ∗ψ({y_12}) ∗ψ(Ω)
= {≪(v_12) - , (v_1), (v_2)}∗{(w_12)}∗ψ(Ω)
= {≪(v_1) - (v_2), (v_1), (v_2)}∗{(w_12)}∗ψ(Ω),
which is a 2-skew-additive simplex. To check these statements, we used:
* ω((v^†) - , (w_12)) = 0 if v^† = v_12. Note that this fails if v^† = v_2.
* The identity ≪(v_12) - = ≪(v_1) - (v_2), which follows from the fact that (v_1) = (v_2) + (v_12) -.
This completes the proof of the fact that the simplicial map
ψ|_(P_3^♢) ∗ C_k-2 = ∘ϕ(P_3^♢) ∗ C_k-2→_^< R()
can be extended to a simplicial map
ψ(P_3) ∗ C_k-2→_^< R().
It remains to check that the extension ψ is weakly regular. For this, we note that the image of ψ contains exactly two 2-skew-additive simplices and one skew-σ^2 simplex (which is already contained in the image of ψ|_(P_3^♢) ∗ C_k-2), see <ref>. By construction these three simplices are the image of a prism P in (P_3), which has the property that _(P_3) ∗ C_k-2(P) = P ∗ C_k-2 and ψ|_P ∗ C_k-2 is a prism cross map. Since the image of ψ does not contain external 2-skew-additive simplices, ψ is prism-regular. Since it also does not contain any σ^2 or σ-additive simplices, we conclude that ψ is weakly regular (compare <ref>).
§.§.§ Step 3: Proof of Proposition 6.1
The proof of <ref> relies on Zeeman's relative simplicial approximation theorem (<ref>).
We use the notation introduced in the statement of <ref> and, invoking Step 1 (i.e. <ref>), may assume that every prism in the weakly regular map
ϕ D^k →_()
is essential.
The following construction relies on the fact that the domains of cross maps in ϕ can only intersect at their boundary spheres (see <ref>). <ref> implies that the only cross maps Σ∗ C_i→_() in ϕ for which the image of the boundary sphere ∂ (Σ∗ C_i) can contain carrying simplices in _() are prism cross maps. Indeed, let ϕ|__D^k(P)_D^k(P) →_() be a
prism cross map in ϕ. Writing _D^k(P) = P ∗ C_k-3, we have ∂_D^k(P) = ∂ (P ∗ C_k-3) = (∂ P) ∗ C_k-3, and two simplices in ∂ P might get mapped to carrying internal 2-additive simplices by ϕ.
We start the construction of ψ by removing the interior of the domain of any cross map in ϕ from D^k. We denote the resulting subcomplex of D^k by K. Note that the boundary sphere of the domain of any cross map in ϕ is contained in K and that the boundary sphere ∂ D^k = S^k-1 of D^k is also contained in K. We denote by L the subcomplex of K obtained as the union of all of these spheres. For each prism cross map in ϕ, consider the boundary sphere (∂ P) ∗ C_k-3 of its domain. Let Θ_x and Θ_y denote the two simplices in ∂ P that get mapped to internal 2-additive simplices by ϕ. Subdivide Θ_x and Θ_y by placing a new vertex at their barycentre depending on whether ϕ(Θ_x), ϕ(Θ_y), neither or both are carrying to obtain (∂ P). For each prism cross map in ϕ, replace (∂ P) ∗ C_k-3 by (∂ P) ∗ C_k-3 in L to obtain L' and let K' be the coarsest subdivision of K containing L'.
Next, we use that weak regularity and <ref> imply that the restriction ϕ|_K has image in _(), i.e.
ϕ|_K K →_().
Since
|(_())| ≅ |_()|,
we obtain a continuous map ϕ|_K |K| → |(_())| that is simplicial except on simplices of K whose image under ϕ was carrying in _() (see <ref>). Identifying K with K', we obtain a continuous map
ϕ'|_K' |K'| → |(_())|
that is simplicial on the subcomplex L' of K'. Composing this map with the simplicial retraction for (_()) constructed in <ref> yields a continuous map
∘ϕ'|_K' |K'| → |_^< R()|
that is simplicial on the subcomplex L' of K'. An application of Zeeman's simplicial approximation theorem (<ref>) now provides us with a subdivision K” of K' containing L' and a simplicial map
ψ|_K” K”→_^< R()
that agrees with ∘ϕ'|_L' on L' ⊆ K”.
Finally, we inspect the domains of cross maps in ϕ and use ψ|_K” described above, <ref> and <ref> to construct the weakly regular map
ψ D^k_→_^< R().
* Consider a σ^2 cross map ϕ|__D^k(Δ)_D^k(Δ) →_() in ϕ. Then <ref> implies that ∘ϕ|__D^k(Δ) is a σ^2 cross map. Since ∂_D^k(Δ) is contained in L' and ψ|_K” agrees with ∘ϕ|__D^k(Δ) on ∂_D^k(Δ), it follows that we can glue _D^k(Δ) to K” and extend ψ|_K” by ∘ϕ|__D^k(Δ).
* Let ϕ|__D^k(Δ)_D^k(Δ) →_() be an external 2-skew-additive cross map in ϕ. Then <ref> implies that ∘ϕ|__D^k(Δ) is either contained in _^< R() or an external 2-skew-additive cross map. Since ∂_D^k(Δ) is contained in L' by definition and ψ|_K” agrees with ∘ϕ|__D^k(Δ) on ∂_D^k(Δ), it follows that we can glue _D^k(Δ) to K” and extend ψ|_K” by ∘ϕ|__D^k(Δ).
* Consider a σ-additive cross map ϕ|__D^k(Δ)_D^k(Δ) →_() in ϕ, and write the domain as _D^k(Δ) = Δ∗ C_k-2. Note that ∘ϕ|__D^k(Δ) might not be simplicial because ϕ(Δ) could be carrying in the sense of <ref>. However, <ref> implies that
∘ϕ|_(Δ) ∗ C_k-2(Δ) ∗ C_k-2→_^<R()
is weakly regular. Noting that
∂_D^k(Δ) = (∂Δ) ∗ C_k-2 = (∂(Δ)) ∗ C_k-2 = ∂ ((Δ) ∗ C_k-2),
that ∂_D^k(Δ) is contained in L' by definition and that ψ|_K” agrees with ∘ϕ|_(Δ) ∗ C_k-2 on ∂_D^k(Δ), we can glue (Δ) ∗ C_k-2 to K” and extend ψ|_K” by ∘ϕ|_(Δ) ∗ C_k-2.
* Consider a
prism cross map ϕ|__D^k(P)_D^k(P) →_() in ϕ, and write the domain as _D^k(P) = P ∗ C_k-3. By assumption (P, ϕ|_P) is essential, hence we can invoke <ref> to obtain a weakly regular map
ψ|_(P) ∗ C_k-3(P) ∗ C_k-3→_^<R().
The boundary ∂(P) of (P) (see <ref>) is exactly the complex (∂ P), i.e. the subdivision of ∂ P that we used to define L'. Hence,
∂ ((P) ∗ C_k-3) = (∂ P) ∗ C_k-3
is contained in L' by construction and ψ|_K” agrees with ψ|_(P) ∗ C_k-3 on ∂ ((P) ∗ C_k-3). It follows that we can glue (P) ∗ C_k-3 to K” and extend ψ|_K” by ψ|_(P) ∗ C_k-3.
Using items 1.-4. above, we can extend ψ|_K” to a simplicial map
ψ' D^k_→_^< R().
The domain D^k_ of this map is a combinatorial k-ball because it is constructed as a subdivision of D^k (see <ref>). Furthermore, the map is weakly regular and agrees with ϕ on S^k-1⊆ L' by construction. This completes the proof.
§ HIGHLY CONNECTED SUBCOMPLEXES
In this section we collect and prove auxiliary connectivity results about Tits buildings and subcomplexes of . We use these later to show that is highly connected, i.e. to prove <ref>.
§.§ The complexes B, BA and BAA
In <ref>, <ref>, <ref> and <ref> we introduced the complexes ^m_n, ^m_n and ^m_n. These were first defined in <cit.> to study the Steinberg module of n: A complex closely related to _n^m was used by Church–Farb–Putman <cit.> to construct a generating set, which was first described by Ash–Rudolph in <cit.>. The complex _n^m was used by Church–Putman <cit.> to construct a presentation, which was first described by Bykovskiĭ in <cit.> and for n = 2 by Manin in <cit.>. The complex _n^m was used by Brück–Miller–Patzt–Sroka–Wilson <cit.> to understand the two-syzygies, the relations among the relations. The following two theorems summarise the main connectivity theorems contained in <cit.>.
Let n ≥ 1 and m ≥ 0.
* _n^m is (n-2)-connected and Cohen–Macaulay of dimension n-1.
* _n^m is (n-1)-connected. If m+n ≥ 2, then _n^m is Cohen–Macaulay of dimension n. If m+n ≤ 2, then _n^m is contractible.
Let n ≥ 1 and m ≥ 0.
The complex _n^m is n-connected. If m+n ≥ 3, then _n^m is Cohen–Macaulay of dimension n+1. If m+n ≤ 2, then _n^m is contractible.
§.§ The symplectic Tits building
In the introduction, we defined the rational Tits building of type 𝙲_n as the order complex of the poset of nonzero isotropic subspaces of ^2n.
If V ⊆^2n is such a subspace, then by <ref>, V∩^2n is an isotropic summand of ^2n. On the other hand, given an isotropic summand V' ⊆^2n, the tensor product V' ⊗ gives an isotropic subspace of ^2n. This yields isomorphisms between the poset of isotropic subspaces of ^2n and the poset of isotropic summands of ^2n. We use the latter as the definition of the Tits building for the remainder of this article.
Let n≥ 1. We denote the poset of nonzero isotropic summands of ^2n by T^ω_n and call it the symplectic Tits building (of rank n).
We usually do not distinguish between this poset and its order complex.
The following is a version of the Solomon–Tits Theorem.
The symplectic Tits building T^ω_n is Cohen–Macaulay of dimension n-1.
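As a sanity check in the smallest case (not needed in what follows): for n = 1, every rank-1 summand of the underlying rank-2 symplectic lattice is isotropic because the form is alternating, so T^ω_1 is an infinite discrete set of vertices. It is nonempty, 0-dimensional and homotopy equivalent to a wedge of 0-spheres, in line with the assertion that it is Cohen–Macaulay of dimension n - 1 = 0.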
§.§ W-restricted subcomplexes of I, I^δ and IAA*
Let m,n ∈. In this subsection, we study subcomplexes of , and whose vertex sets are restricted using the submodule
W = W_m+n = ≪e⃗_1, f⃗_1, …, e⃗_m+n-1, f⃗_m+n-1, e⃗_m+n⊆^2(m+n).
The results here build on ideas contained in <cit.>, <cit.> and <cit.>.
Let X ∈{[], [], []}. We define X_n^m(W) to be the full subcomplex of X^m_n on the set of vertices that are contained in W_m+n.
Similarly, we define T^ω_m+n(W) to be the subposet of the symplectic Tits building T^ω_m+n of summands contained in W_m+n. We let
T^ω,m_n(W) = T^ω_m+n(W)_> ≪ e_1, …, e_m
denote the upper link of the summand ≪ e_1, …, e_m, i.e. the subposet of all V∈ T^ω_m+n(W) such that ≪ e_1, …, e_m < V.
The following proposition concerns the connectivity properties of these W-restricted subcomplexes.
Let n ≥ 1 and m ≥ 0.
* (W) is Cohen–Macaulay of dimension n-1.
* (W) is (n-1)-connected.
* (W) is n-connected.
The connectivity results of <ref> and <ref> of <ref> are essentially <cit.>. Putman informed us that their proof contains some small gaps. These gaps were fixed in <cit.>. The Cohen–Macaulay property of (W) has been verified in <cit.>. We are hence left with checking <ref> of <ref>. For this, we apply the same strategy as in <cit.>. We start by defining an intermediate simplicial complex.
The simplicial complex ∫_W^m_n has as vertices the rank-1 summands v of W such that ⟨e⃗_1, …, e⃗_m, v⃗⟩ is an isotropic rank-(m+1) summand of ^2(m+n). A collection Δ of such lines forms a simplex if there exists V ∈ T^ω,m(W) such that Δ is a simplex in ^m(V).
Let n ≥ 1 and m ≥ 0. Then ∫_W^m_n is n-connected.
There is a poset map
s(∫_W^m_n) ⟶ T^ω, m_n(W)
Δ ⟼⟨e⃗_1, …, e⃗_m ⟩ + ⟨Δ⟩,
where (∫_W^m_n) denotes the simplex poset of ∫_W^m_n.
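For instance, if Δ = {v} is a single vertex of ∫_W^m_n, then s(Δ) = ⟨e⃗_1, …, e⃗_m ⟩ + ⟨v⃗⟩ = ⟨e⃗_1, …, e⃗_m, v⃗⟩; by the definition of the vertices of ∫_W^m_n, this is an isotropic rank-(m+1) summand contained in W_m+n that properly contains ⟨ e_1, …, e_m ⟩, so it indeed lies in T^ω, m_n(W).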
The target T^ω, m_n(W) of this poset map is a contractible Cohen–Macaulay poset by <cit.>.
Hence, a result by van der Kallen–Looijenga[For the n in <cit.>, choose what is (n+1) in the notation of the present article; define their map t by t(V)=(V) - m + 2] <cit.> implies that it suffices to check that for every V ∈ T^ω, m_n(W), the poset fibre s_≤ V is ((V) - m)-connected.
This follows from <ref> because s_≤ V≅(^m(V)).
The following lemma shows that (W) is n-connected, which finishes the proof of <ref>. For its proof, we use that (W) is obtained from ∫_W^m_n by attaching σ simplices along highly connected links.
Let n ≥ 1 and m ≥ 0. Let S a combinatorial k-sphere for k ≤ n and consider a simplicial map ϕ S →(W). Then ϕ is weakly regularly nullhomotopic in (W).
We start with the following observation: If ψ D →(W) is any nullhomotopy of ϕ, it follows from <ref> of <ref> that we may assume that D is a combinatorial (k+1)-ball. Because (W) does not contain any simplices of the type listed in <ref>, it follows that any such nullhomotopy ψ is regular and, in particular, weakly regular. Hence, we only need to prove that (W) is n-connected.
To see this, we apply the standard link argument as explained in <ref>: By <ref>, we know that the subcomplex X_0 = ∫_W ^m_n of X_1 = (W) is n-connected. Let B be the set of all minimal σ simplices Δ = {v,w} in X_1. This set is a set of bad simplices in the sense of <ref>. Following <ref>, we find that ^good_X_1(Δ) = _(W)(Δ) = []^δ,m(W ∩⟨Δ⟩^⊥) for Δ = {v, w}∈ B. Using that there is an isomorphism
(W ∩⟨Δ⟩^⊥, ω|_W ∩⟨Δ⟩^⊥) ≅(⟨e⃗_1, f⃗_1, …, e⃗_m+n-2, f⃗_m+n-2, e⃗_m+n⟩ , ω|_⟨e⃗_1, …, f⃗_m+n-2, e⃗_m+n⟩)
that preserves e⃗_1, …, e⃗_m, it follows that
[]^δ,m(W ∩⟨Δ⟩^⊥) ≅[]^δ,m(⟨ e_1, f_1, …, e_m+n-2, f_m+n-2, e_m+n⟩).
The complex on the right hand side is (n-2)-connected by <ref> of <ref>. Hence, <ref> of <ref> implies that X_1 is (n-2)-connected.
§.§ Rank-R subcomplex of I and I^δ
Let R ∈∪{∞}. In this subsection, we collect results about the full subcomplex ()^≤ R and ()^≤ R of and , respectively, on the set of vertices of rank at most R (see <ref>).
The first lemma is an easy consequence of <cit.> using <ref> or <cit.>. An alternative proof can be found in <cit.>.
Let n,m ≥ 0. Then is Cohen–Macaulay of dimension (n-1).
The next lemma is a refinement of the previous one, and an immediate consequence of the proof of <cit.>. For the convenience of the reader, we include a short argument using <ref> and <ref>.
Let n,m ≥ 0. For all R∈∪∞, the complex ()^≤ R is (n-2)-connected.
We perform a standard link argument as explained in <ref>, using induction on R and the retraction for constructed in <ref>. For R = 0, ()^≤ 0 = (W) and the claim follows from <ref> of <ref>. For R = ∞, ()^≤∞ = and the claim follows from <ref>. Assume for the induction that X_R = ()^≤ R is (n-2)-connected.
Consider the complex X_R+1 = ()^≤ R+1 containing X_R as a subcomplex. Let B ⊂ X_R+1 - X_R be the set of simplices Δ = {v_0, …, v_k} with the property that (v_i) = R+1 for all i. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_R+1^good(Δ) = _^<R+1(Δ) for Δ∈ B.
By <ref>, we know that _(Δ) is (n - (Δ) - 3)-connected. Hence, it follows from <ref> that _X_R+1^good(Δ) = _^<R+1(Δ) is (n - (Δ) - 3)-connected as well. Therefore, <ref> of <ref> implies that X_R+1 = ()^≤ R+1 is (n-2)-connected.
The next lemma refines <ref> further and verifies that ()^≤ R is Cohen–Macaulay of dimension (n-1) for all R∈∪∞.
Let n,m ≥ 0. Let Δ be a simplex of and R∈ the maximum of (-) among all vertices of Δ. Then for all R'∈ with R' > 0 and R'≥ R, the complex _^< R'(Δ) is (n - (Δ) - 3)-connected.
In particular, for all R'∈∪∞, the complex ()^≤ R' is Cohen–Macaulay of dimension (n-1).
We show inductively that for all R' ≥ R, the complex
Y_R' ≔ _^< R'(Δ)
is (n - (Δ) - 3)-connected.
We start by considering two cases: If R = 0 and R'=1, then Δ is a simplex of (W) and <ref> of <ref> implies that the subcomplex Y_R' = _(W)(Δ) is (n - (Δ) - 3)-connected. If on the other hand R' = R >0, the retraction for (see <ref>) and the fact that _(Δ) is (n - (Δ) - 3)-connected by <ref> imply that Y_R' is (n - (Δ) - 3)-connected.
Now assume that R' > R, R' > 1 and that Y_R'-1 is (n-(Δ) -3)-connected. Let B ⊂ Y_R' - Y_R'-1 be the set of all simplices Θ in Y_R' such that all vertices w ∈Θ satisfy (w) = R'-1. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that
^good_Y_R'(Θ) = _Y_R'^< R'-1(Θ) ≅_^< R'-1(Δ∗Θ)
for Θ∈ B. The join Δ∗Θ is a standard simplex of dimension ((Δ)+(Θ)+1), therefore <ref> implies that _(Δ∗Θ) is (n - ((Δ) + (Θ) + 2) - 2)-connected. As the maximal rank of a vertex in Δ∗Θ is R'-1, the retraction for (see <ref>) implies that ^good_Y_R'(Θ) is ((n - (Δ) - 3) - (Θ) - 1)-connected. Hence, <ref> of <ref> implies that Y_R' is (n- (Δ) - 3)-connected.
It remains to check why this implies that for all R'∈∪∞, the complex ()^≤ R' is Cohen–Macaulay of dimension (n-1). For R' = ∞ and R' = 0, we already showed Cohen–Macaulayness in <ref> and <ref> of <ref>. So let 0< R' ∈ and Δ be a simplex in ()^≤ R'. Then the maximal rank among all vertices of Δ is given by some R≤ R'.
Hence by the first part of the claim, we have that _^< R' +1(Δ) = _()^≤ R'(Δ) is (n - (Δ) - 3)-connected.
Finally, we need a similar result for .
Let n ≥ 1 and m ≥ 0. For all R∈∪∞, the complex ()^≤ R is (n-2)-connected.
We perform a standard link argument as described in <ref>. Let X_1 = ()^≤ R and consider the (n-2)-connected subcomplex X_0 = ()^≤ R (see <ref>). Let B be the set of all minimal 2-additive simplices in X_1. This set is a set of bad simplices in the sense of <ref>. Following <ref>, we find that ^good_X_1(Δ) = _^≤ R(Δ) for Δ∈ B. Let Δ = {v_0, …, v_k} such that v_0 = ⟨v⃗_1 + v⃗_2 ⟩ or v_0 = ⟨e⃗_i + v⃗_1 ⟩ for some 1 ≤ i ≤ m. Then by <ref>, the simplex Δ' = {v_1 …, v_k} is standard and _^≤ R(Δ) = _^≤ R(Δ').
This complex is (n - (Δ') - 3) = (n-(Δ) -2)-connected
by <ref>. Hence, <ref> of <ref> implies that X_1 is (n-2)-connected.
§.§ IA and links of certain vertices
In this subsection, we collect results about and links of certain vertices. The following is <cit.> and a consequence of <cit.>.
Let n ≥ 1 and m ≥ 0. Then is (n-1)-connected.
Using the retraction for (see <ref>), the previous lemma has the following consequence.
Let n ≥ 1 and m ≥ 0.
Let v ∈ be a vertex of rank R = (v)>0.
* _(v) is (n-2)-connected.
* _^<(v) is (n-2)-connected.
By <ref>, there is an isomorphism _(v)≅[n-1][m+1]. This complex is (n-2)-connected by <ref>, which shows the first item.
The second item then follows by applying <ref>.
§.§ Subcomplexes of IAA
In this final subsection, we study the subcomplex of and show that the inclusion ↪ is a highly connected map.
Let n ≥ 1 and m ≥ 0. Then is (n-1)-connected.
We apply the standard link argument explained in <ref> twice.
Firstly, let X_1 be the simplicial complex that is obtained from by attaching all 3-additive simplices. It has X_0 = as a subcomplex, which (n-1)-connected by <ref>. Let B be the set of minimal 3-additive simplices contained in X_1. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_1^good(Δ) = _X_1(Δ) for Δ∈ B.
It is not hard to check that for every minimal 3-additive simplex Δ we have _X_1(Δ) ≅[n-(Δ)][m+(Δ)]. This complex is (n-(Δ)-2)-connected by <ref>. Because X_0 = is (n-1)-connected, <ref> of <ref> implies that X_1 is (n-1)-connected as well.
Secondly, let X_2 = [n][m] and consider its subcomplex X_1, which is (n-1)-connected by the previous argument. Let B be the set of minimal double-triple and double-double simplices contained in X_2. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_2^good(Δ) = _X_2(Δ) for Δ∈ B.
For every minimal double-triple or double-double simplex Δ, we have _X_2(Δ) ≅[n-((Δ) - 1)][m+( (Δ) - 1)] (see <ref>). This complex is (n-(Δ)-1)-connected by <ref>. Because X_1 is (n-1)-connected, <ref> of <ref> implies that X_2 = is (n-1)-connected as well.
We now start working towards the proof that the inclusion ↪ is a highly connected map. For this the following observation is key.
Let Δ be a skew-additive simplex in of dimension k. Then there is a unique 2-skew-additive simplex in of dimension (k+1) that has Δ as a face.
Let Δ be skew-additive. We can write Δ= v_0, v_1, …, v_k, where (after choosing appropriate representatives) ω(v⃗_0, v⃗_k)= 1, ω(v⃗_1, v⃗_k) = -1.
It is easy to see that ⟨v⃗_0+v⃗_1⟩, v_0, v_1, …, v_k is a (k+1)-dimensional 2-skew-additive simplex containing Δ.
Now assume that Δ' = Δ∪ v_k+1 is any such 2-skew-additive simplex. We claim that v_k+1 = ⟨v⃗_0+v⃗_1⟩.
As Δ' is 2-skew-additive, there are exactly three vertices
of it that are not isotropic to every other vertex (see <ref>).
These must be v_0, v_1, v_k using the notation in the first paragraph.
By <ref>, we also know that one vertex of Δ' is equal to ⟨±v⃗_0 ±v⃗_1 ⟩ and isotropic to v_k. As the set v⃗_0, …, v⃗_k is linearly independent, this vertex must be v_k+1 = ⟨±v⃗_0 ±v⃗_1 ⟩. What is left to show is that indeed v_k+1 is equal to ⟨v⃗_0 + v⃗_1 ⟩ = ⟨ - v⃗_0 - v⃗_1 ⟩ and not to ⟨v⃗_0 - v⃗_1 ⟩ = ⟨ -v⃗_0 + v⃗_1 ⟩. This follows easily from the assumption that ω(v_k,v_k+1) = 0.
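Concretely, with the representatives fixed above (ω(v⃗_0, v⃗_k)= 1 and ω(v⃗_1, v⃗_k) = -1), we have
ω(v⃗_k, v⃗_0 + v⃗_1) = -1 + 1 = 0, whereas ω(v⃗_k, v⃗_0 - v⃗_1) = -1 - 1 = -2 ≠ 0,
so only the choice v_k+1 = ⟨v⃗_0 + v⃗_1 ⟩ is compatible with ω(v_k,v_k+1) = 0.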
<ref> has the following two consequences.
Let n≥ 2 and ⊂ X_1⊂ be the complex obtained from by adding all skew-additive and 2-skew-additive simplices of . Then there is a deformation retraction X_1 →.
The skew-additive and 2-skew-additive simplices in X_1 are the only simplices that contain a skew-additive face. Let Δ be a skew-additive simplex of maximal dimension n in X_1. <ref> implies that there is a unique (n+1)-dimensional simplex Δ' that contains Δ as a face. (I.e. Δ is a “free face” in X_1.) “Pushing all such free faces Δ through their Δ'” gives a deformation retraction from X_1 to a complex whose 2-skew-additive faces have dimension at most n-1 and whose skew-additive faces have dimension at most n. Iterating over the dimension of skew-additive simplices yields the desired deformation retraction X_1 →.
The inclusion ↪ is n-connected.
We show that there is a sequence of complexes
⊂ X_1 ⊂ X_2 ⊂ X_3 ⊂,
such that each inclusion in this sequence is n-connected.
Let X_1 be obtained from by adding all skew-additive and 2-skew-additive simplices of . Then by <ref>, there is a deformation retraction X_1→, so the inclusion ↪ X_1 is n-connected.
We now apply the standard link argument explained in <ref> three times.
Firstly, let X_2 be the complex that is obtained from X_1 by attaching all σ^2 simplices. Let B be the set of minimal σ^2 simplices contained in X_2. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that for all Δ∈ B, we have _X_2^good(Δ) = _X_2(Δ) = _(Δ).
By <ref>, this complex is isomorphic to [n-(Δ)+1] and hence (n-(Δ)-1)-connected by <ref>. Hence, X_1↪ X_2 is n-connected by <ref> of <ref>.
Secondly, let X_3 be obtained from X_2 by attaching all skew-σ^2 simplices. Then, a set B of bad simplices in X_3 ∖ X_2 is given by all minimal skew-σ^2 simplices. Using <ref>, we obtain that the map X_2↪ X_3 is n-connected by <ref> of <ref>.
Lastly, is obtained from X_3 by attaching all σ-additive simplices. A set B of bad simplices in ∖ X_3 is given by all minimal σ-additive simplices. The desired result follows from <ref> and <ref> of <ref>.
Note that <ref> and <ref> already imply that is (n-1)-connected. In the next section, we improve upon this and show that is n-connected.
§ THEOREM C: A HIGHLY CONNECTED COMPLEX
We have now collected everything that is needed to show that is n-connected.
By <ref>, we know that the inclusion ↪ induces a surjection on π_k for k≤ n. Hence, <ref> follows if we can show that this map is zero on π_k.
By <ref> of <ref>, every element of π_k() is represented by a map from a combinatorial k-sphere to . Hence, it suffices to show the following result that uses the notions of regularity and weak regularity as defined in <ref>:
Let n ≥ 1, m ≥ 0 and k≤ n. Let S be a combinatorial k-sphere and ϕ S→ a simplicial map.
* If n = 1, the map ϕ is weakly regularly nullhomotopic in [n] = [1].
* If n≥ 2, the map ϕ is regularly nullhomotopic in .
We prove <ref> in this section by an inductive procedure. In <ref>, we cover the case n=1. In <ref>, we then show that if every map S^k-1→[n-1][m+1] is weakly regularly nullhomotopic in [n-1][m+1], then every map S^k → is regularly nullhomotopic in [n][m].
§.§ Induction beginning: n = 1
We start by proving <ref> for the case n=1.
Let m ≥ 0, k∈{0,1}, let S be a combinatorial k-sphere and ϕ S →[1] a simplicial map. Then ϕ is weakly regularly nullhomotopic in [1].
We start by describing the complex [1] = _[m+1]( e_1, …, e_m ), spelling out this case of <ref>.
Let V denote its set of vertices. Every vertex v∈ V is spanned by a vector
v⃗ = ∑_i=1^m a_i e⃗_i + a_m+1e⃗_m+1 + b_m+1f⃗_m+1,
where a_1, …, a_m∈ and v⃗' ≔ a_m+1e⃗_m+1 + b_m+1f⃗_m+1 spans a rank-1 summand of ⟨e⃗_m+1, f⃗_m+1⟩≅^2.
Note that for a pair v_0, v_1 of such vertices, we have ω(v_0,v_1) = ω(v'_0,v'_1).
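Indeed, expanding bilinearly and using that e⃗_1, …, e⃗_m pair trivially with each other and with ⟨e⃗_m+1, f⃗_m+1⟩, we get
ω(v⃗_0, v⃗_1) = ω( ∑_i=1^m a^0_i e⃗_i + v⃗_0', ∑_i=1^m a^1_i e⃗_i + v⃗_1' ) = ω(v⃗_0', v⃗_1').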
As ⟨e⃗_m+1, f⃗_m+1⟩ defines a genus-1 summand of (^2(m+1), ω), this implies in particular that there are no standard simplices of dimension greater than zero. Following <ref>, the complex [1] has dimension two and contains the following simplices of dimension greater than zero:
* For every v_0∈ V and 1≤ i ≤ m, there are the 2-additive simplices of the form v_0, ⟨v⃗_0 ±e⃗_i⟩.
* For every v_0∈ V and 1≤ i≠j ≤ m, there are the 3-additive simplices of the form v_0, ⟨v⃗_0 ±e⃗_i±e⃗_j⟩.
* For every v_0∈ V and 1≤ i≠j ≤ m, there are the double-triple simplices of the form v_0, ⟨v⃗_0 ±e⃗_i⟩, ⟨v⃗_0 ±e⃗_i ±e⃗_j⟩.
* For every pair v_0, v_1 ∈ V such that ω(v_0, v_1)= ± 1, there is a σ simplex of the form v_0, v_1.
These are all simplices of [1], no double-double or mixed simplices occur in this low-rank case.
In addition to these, the complex [1] (see <ref>) has the following simplices:
* For every pair v_0, v_1 ∈ V such that ω(v_0,v_1) = ± 1 and every 1≤ i ≤ m, there are the 2-skew-additive simplices of the form v_0, ⟨v⃗_0 ±e⃗_i⟩, v_1.
* For every pair v_0, v_1 ∈ V such that ω(v_0,v_1) = ± 1, there are the σ-additive simplices of the form v_0, v_1, ⟨v⃗_0 ±v⃗_1⟩.
No skew-additive, σ^2 or skew-σ^2 simplices occur in this low-rank case.
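For instance, in the extreme case m = 0 all of the additive, double-triple and 2-skew-additive types above are vacuous, and the only simplices of positive dimension are σ edges and σ-additive simplices: {⟨e⃗_1 ⟩, ⟨f⃗_1 ⟩} is a σ edge since ω(e⃗_1, f⃗_1) = 1, and {⟨e⃗_1 ⟩, ⟨f⃗_1 ⟩, ⟨e⃗_1 + f⃗_1 ⟩} is a σ-additive simplex.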
Spelling out <ref>, this implies that a map Ψ B →[1] from a combinatorial ball to [1] is weakly regular if and only if
(Weakly regular) Ψ is injective on every simplex mapping to a σ-additive or 2-skew-additive simplex.
This is because the only requirement on a cross map in this low dimension is that it be an isomorphism onto its image.
The complex [1] contains “orthogonal” copies of the complexes _1^m and _2, which we describe next (for a schematic overview, see <ref>).
Let V' ≔ V∩⟨e⃗_m+1, f⃗_m+1⟩
(i.e. V' is the set of rank-1 direct summands of ⟨e⃗_m+1, f⃗_m+1⟩). For v'∈ V', we define _v' to be the full subcomplex of [1] on the set of vertices
⟨v⃗' + ∑_i=1^m a_i e⃗_i ⟩ | (a_1, …, a_m)∈^m ⊂ V.
The simplices in <ref> above imply that _v' is isomorphic to the complex _1^m. This complex is described in detail in <cit.>, where it is shown that it is 1-connected (for the cases m < 2, which are not explicitly mentioned in <cit.>, this holds as well as is explained in <cit.>).
Every vertex v∈ V is contained in precisely one such complex (namely in _v' with the notation introduced in <ref>). The same is true for all simplices of <ref>.
The only simplices of [1] that are not contained in any such _v' are the σ edges described in <ref>.
The full subcomplex of [1] on the set V', which we write as , is 1-dimensional and only has edges of this type. It is isomorphic to the complex _2, which in turn is isomorphic to the Farey graph and in particular connected (see <ref>).
If one adds to the σ-additive simplices described in <ref>, one obtains , the full subcomplex of [1] on the set V'. This complex is isomorphic to _2 and hence contractible (see <ref>).
This description in particular implies that [1] is connected, which proves the case k=0 of our claim: Being connected means that we can extend every map ϕ S^0→[1] to a map Ψ D^1→[1]. As the image of Ψ is contained in [1], it in particular is [it_weakly_regular_low_dim]weakly regular, so ϕ is weakly regularly nullhomotopic.
Now let S be a combinatorial 1-sphere and ϕ S→[1] a simplicial map. By <ref>, it suffices to show that ϕ is weakly regularly homotopic to a map →[1] that is weakly regular nullhomotopic. (Recall from <ref> that in this setting, we always mean that is a combinatorial k-sphere and is simplicial.)
For this, we can first homotope ϕ to a map →[1], where is a combinatorial k-sphere and is injective on every edge of .[For a detailed account of how one can construct such a homotopy, see the work of Himes–Miller–Nariman–Putman <cit.>. The key observation that allows to perform this homotopy here is that every edge Δ in [1] has a non-empty link _[1](Δ)≠∅. We use very similar arguments for spheres of arbitrary dimensions in <ref> and <ref>, where we give more details.] This process only involves simplices in [1], so defines a homotopy that is [it_weakly_regular_low_dim]weakly regular. Hence, we can assume that ϕ already satisfies this local injectivity property.
Let γ_1, …, γ_l be the cyclically indexed sequence of (cyclically) maximal subpaths of S that map via ϕ to _v' for some v'∈ V'. These are separated by σ edges in S if l> 1. The subpath γ_i can consist of a single vertex, namely if two σ edges are adjacent in S.
If l=1, the image of ϕ is entirely contained in some _v'. This is a 1-connected subcomplex of [1], so by <ref> of <ref>, there is a combinatorial 2-ball B with ∂ B = S and a map Ψ B→_v'⊂[1] such that Ψ|_S = ϕ. As the image of such a Ψ is contained in [1], it is [it_weakly_regular_low_dim]weakly regular, which means that ϕ is weakly regularly nullhomotopic.
Hence, we can assume that l>1.
We now homotope ϕ by replacing step by step every ϕ|_γ_i⊂_v' by a map whose image is the single vertex v'∈ V', while preserving the other segments γ_j.
Take any γ_i such that ϕ(γ_i) is not a single vertex that lies in V'. (Note that such a path can only occur if m≥ 1; otherwise, every _v' is a single vertex.) There is v'∈ V' such that ϕ(γ_i)⊂_v'. This implies that ϕ(γ_i) gives a
path in _v' with end points of the form α = ⟨v⃗'+ ∑_j=1^m a^α_j e⃗_j ⟩ and ω = ⟨v⃗'+ ∑_j=1^m a^ω_j e⃗_j ⟩.
We start by replacing ϕ|_γ_i by a map whose image is a sequence of 2-additive simplices (see <ref>):
Using the simplices described in <ref>, we can find a
path in _v' from α to ω that only consists of 2-additive simplices. Let |_γ̃_iγ̃_i →[1] be an injective map that identifies a combinatorial 1-ball (i.e. a path) γ̃_i with this path consisting of 2-additive simplices. As _v' is 1-connected, ϕ|_γ_i is relative to its endpoint homotopic to |_γ̃_i. By <ref> of <ref>, we can realise this homotopy by a map Ψ B→_v'⊂[1] from a combinatorial 2-ball B.
Such a map is [it_weakly_regular_low_dim]weakly regular because (Ψ)⊂[1].
Hence by <ref>, ϕ is weakly regularly homotopic to a simplicial map
→[1]
that is obtained by replacing ϕ|_γ_i by Ψ|_γ̃_i = |_γ̃_i. The resulting map is still injective on every edge of .
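For concreteness, a minimal instance of such a replacement path (say m ≥ 1, α = ⟨v⃗' + 2e⃗_1 ⟩ and ω = ⟨v⃗' ⟩, chosen purely for illustration): the vertices ⟨v⃗' + 2e⃗_1 ⟩, ⟨v⃗' + e⃗_1 ⟩, ⟨v⃗' ⟩ form a path in _v' in which consecutive vertices differ by e⃗_1 and hence span 2-additive edges of the type listed above.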
We next replace ϕ|_γ̃_i by a map whose image is the single vertex ω (see <ref>).
For this, we use a homotopy in [1] whose image is not contained in [1]:
Assume that ϕ(γ̃_i) has more than one vertex and let s be the vertex that precedes α in ϕ(S), i.e. the last vertex of ϕ(γ_i-1). By assumption, {s, α} is a σ simplex (this uses the assumption that l>1). As all edges in ϕ(γ̃_i) are 2-additive, the vertex of ϕ(γ̃_i) following α is of the form ⟨α⃗±e⃗_i ⟩ for some 1≤ i ≤ m. Hence, the set {s, α, ⟨α⃗±e⃗_i ⟩} is a 2-skew-additive simplex in [1], as in <ref>. We will use this to replace the edges {s, α} and {α, ⟨α⃗±e⃗_i ⟩} in the image of ϕ by the σ edge {s, ⟨α⃗±e⃗_i ⟩}. Denote by x,y,z the corresponding vertices of S,
ϕ(x) = s, ϕ(y) = α, ϕ(z) = ⟨α⃗±e⃗_i ⟩.
Define a map Ψ B→, where B is the 2-simplex given by {x,y,z} and Ψ is equal to ϕ on each vertex. As
Ψ({x,y,z}) = {s, α, ⟨α⃗±e⃗_i ⟩}
is a 2-skew-additive simplex, Ψ is a well-defined and in fact weakly regular map.
Hence by <ref>, ϕ is weakly regularly homotopic to a simplicial map
→[1]
that is obtained by replacing ϕ|_ x,y , y,z by Ψ|_ x,z. The map is still injective on every edge of .
This removes α from ϕ(γ̃_i) without changing the other vertices.
Iterating this, we find a weakly regular homotopy that replaces ϕ|_γ̃_i by a map that sends a singleton to the vertex ω.
We lastly replace ω by v'∈ V' (see <ref>), using a homotopy in [1] whose image is not contained in [1]: Again, let s be the last vertex of ϕ(γ_i-1) and let t be the first vertex of ϕ(γ_i+1). Both s, ω and ω, t are σ edges (that might be equal to one another). There is a path in _v' from ω to v' that consists only of 2-additive edges. Similarly to the procedure described in the previous paragraph, we can use this path to successively replace ω by v', using two 2-skew-additive simplices in every step.
Performing this procedure on all segments γ_j, we obtain →[1] that is injective on every edge of and whose image is entirely contained in the subcomplex ⊆[1].
As noted above, this complex is contained in the contractible complex ⊆[1].
By <ref> of <ref>, there is a combinatorial 2-ball B with ∂ B = and a map Ψ B→⊂[1] such that Ψ|_ =. As is injective on every edge of , we can choose Ψ such that it is also injective on every simplex of B.[This follows from the work of Himes–Miller–Nariman–Putman <cit.>: By <ref>, ≅_2 is Cohen–Macaulay of dimension 2. So by <cit.>, it has the “disc local injectivity property up to dimension 1” (see <cit.>). This means exactly that we can choose Ψ as above to be locally injective.]
From this local injectivity, it follows that Ψ is [it_weakly_regular_low_dim]weakly regular, which means that is weakly regularly nullhomotopic.
§.§ Induction step
We now let n ≥ 2, m≥ 0, k ≤ n and assume that either k=0 or by induction, any simplicial map S^k-1→[n-1][m+1] from a combinatorial (k-1)-sphere into [n-1][m+1] is weakly regularly nullhomotopic in [n-1][m+1]. (Recall that, as noted in <ref>, every regular map is also weakly regular.)
Let S be a combinatorial k-sphere and ϕ S → be simplicial. We need to show that ϕ is regularly nullhomotopic in .
Let
R = max{(ϕ(x)) | x a vertex of S},
where is defined as in <ref>.
If R=0, we have (ϕ)∈(W), so the claim follows from <ref>. Hence by <ref>, it suffices to show that ϕ is in regularly homotopic to a map → with this property. (Recall by the convention set up in <ref>, is a combinatorial k-sphere and is simplicial.)
For this, assume that R>0 and call a simplex Δ of S bad if ϕ(Δ)= v with (v) = R.
We will explain how one can regularly homotope ϕ to a map ϕ' with no bad simplices.
Iterating this procedure, we successively reduce R to 0, which finishes the induction step.
So from now and throughout <ref>, we make the standing assumption that
n≥ 2, m≥ 0, k≤ n and R≥ 1. (Standing assumption)
In order to remove bad simplices from ϕ, we need to assume the following condition on their links.
Let S be a combinatorial k-sphere, ϕ S → a simplicial map and R = max{(ϕ(x)) | x a vertex of S}. We say that ϕ has [it_isolation]Isolation if it satisfies the following property.
(Isolation) If Δ is a bad simplex of S, ϕ(Δ) = v, and x∈_S(Δ), then
ϕ(x) ∈{v}∪^<R_(v).
This is an “isolation” criterion in the sense that it implies that there cannot be two adjacent vertices in S that map to different vertices of rank R.
In fact, we can always assume that our maps have this property:
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map. Then ϕ is in regularly homotopic to a map → ()^≤ R such that has [it_isolation]Isolation.
<ref> is proved in <ref>. We assume it for now and only show that it has the following consequence.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map that has [it_isolation]Isolation. Let Δ be a bad simplex of S that is maximal with respect to inclusion among all bad simplices, ϕ(Δ) = v. Then
ϕ(_S(Δ)) ⊂^<R_(v).
As ϕ is simplicial and Δ is maximal among bad simplices, we have
ϕ(_S^k(Δ)) ⊆_(v).
By [it_isolation]Isolation, this implies that every vertex x in _S^k(Δ) is sent to a vertex ϕ(x) in ^<R_(v). But ^<R_(v) is a full subcomplex of _(v) (see <ref> and <ref>). Hence, all higher dimensional simplices in ϕ(_S^k(Δ)) are contained in ^<R_(v) as well, which is what we wanted to show.
The following proposition shows that we can remove all bad simplices from ϕ. By the discussion at the beginning of this section, its proof finishes the induction step.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map. Then ϕ is in regularly homotopic to a map ϕ' S' → ()^< R.
A key observation for the proof of <ref> is that the links of vertices in and are similar to complexes that we already studied in the previous step of the induction (<ref>).
Our aim is to replace ϕ by a map whose image has only vertices of rank less than R. Hence, we are done if we can find a homotopic map without any bad simplices.
After applying <ref>, we can assume that ϕ has [it_isolation]Isolation.
Let Δ be a bad simplex that is maximal with respect to inclusion among all bad simplices. We will explain how to remove Δ from ϕ without introducing further bad simplices and such that the resulting map still has [it_isolation]Isolation. Iterating this procedure will prove the claim.
Let ϕ(Δ) = {v} be the image of the bad simplex Δ. As ϕ has [it_isolation]Isolation, it maps _S(Δ) to ^<R_(v) by <ref>.
By <ref>, there is a commutative diagram
_(v) ⟶^≅ [n-1][m+1]
↓        ↓
_( v ) ⟶^≅ [n-1][m+1],
in which the horizontal maps are isomorphisms and the vertical maps are the natural inclusions.
As S is a combinatorial k-sphere, the complex _S(Δ) is a combinatorial (k-(Δ)-1)-sphere.
First assume that k-(Δ)-1≤ n-2. This is the case if k< n or (Δ)>0, i.e. ϕ|_Δ is not injective.
By <ref>, the complex _(v)≅[n-1][m+1] is (n-2)-connected. Hence by <ref> of <ref>, there is a combinatorial (k-(Δ))-ball D̃ with ∂D̃ = _S(Δ) and there is a map
ψ̃D̃→_(v)
such that ψ̃|__S(Δ) = ϕ|__S(Δ).
By <ref>, there are then also a combinatorial ball D with ∂ D = _S(Δ) and a map
ψ D →^<R_(v)
such that ψ|__S(Δ) = ψ̃|__S(Δ) = ϕ|__S(Δ).
By <ref>, BΔ * D is a combinatorial (k+1)-ball whose boundary can be decomposed as
∂ B = _S(Δ) ∪ (∂Δ * D).
By <ref>, both _S(Δ) and (∂Δ * D) are combinatorial k-balls and their intersection is given by
∂_S(Δ) = ∂Δ∗_S(Δ) = ∂Δ * ∂ D = ∂ (∂Δ * D).
We define a map
Ψ B = Δ∗ D→
by letting Ψ|_Δ = ϕ|_Δ and Ψ|_D = ψ|_D. As Ψ has image in , it is regular.
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→
that is obtained by replacing ϕ|__S(Δ) by Ψ|_∂Δ * D.
As ϕ and Ψ have image in ()^≤ R, so does .
We claim that every bad simplex Θ of is also a bad simplex of ϕ. By <ref>, it suffices to consider Θ⊆∂Δ * D.
As
(D) = ψ (D) ⊆^<R_(v),
we then have Θ⊂∂Δ⊆∂_S(Δ) ⊂ S. So Θ is a bad simplex of ϕ.
This implies that has one less bad simplex than ϕ (namely the simplex Δ that was removed).
It also implies that has [it_isolation]Isolation: If a bad simplex Θ of is not contained in ∂Δ, then by <ref>, |__(Θ) = ϕ|__S(Θ), so the condition for [it_isolation]Isolation is obviously satisfied because ϕ has [it_isolation]Isolation. But if Θ⊆∂Δ, then
_(Θ)⊆_S(Θ) ∪ D.
As observed above, (D) ⊆^<R_(v), so for every x ∈_(Θ), we have
(x) ∈ϕ(_S(Θ)) ∪(D) ⊆ v ∪^<R_(v)
This is what we need to show for establishing [it_isolation]Isolation.
We can therefore assume that k = n and that (Δ)=0, i.e. ϕ|_Δ is injective.
As we assumed n≥ 2, this implies that k>0 and, as S is a combinatorial k-sphere, _S(Δ) is a combinatorial sphere of dimension k-0-1 = n-1 ≥ 1. To simplify notation, we in what follows identify Δ = x with the single vertex x it contains.
By <ref> and the induction hypothesis, the map
ϕ|__S(x)_S(x) →_(v) ≅[n-1][m+1]
is weakly regularly nullhomotopic in _(v) ≅[n-1][m+1] .
I.e. there is a combinatorial n-ball D̃ such that ∂D̃ = _S(x) and there is a weakly regular map
ψ̃D̃→_(v)
that agrees with ϕ|__S(x) on ∂D̃ (see <ref> for the definition of weakly regular maps into _(v)).
By <ref>, there is also a combinatorial ball D with ∂ D =∂D̃ = _S(x) and a weakly regular map
ψ D →^<R_(v)
with the property that ψ|__S(x) = ψ̃|__S(x) = ϕ|__S(x).
Define
B x ∗ D.
By <ref>, the complex B is a combinatorial (n+1)-ball whose boundary can be decomposed as ∂ B = _S(x) ∪ D.
By <ref>, both _S(x) and D are combinatorial k-balls and their intersection is given by
∂_S(x) = _S(Δ) = ∂ D.
As ψ(D) ⊂^<R_(v), we can define a simplicial map Ψ B→
by setting Ψ(x) = ϕ(x) = v and Ψ|_D = ψ|_D.
If Ψ was regular and mapped D to – both of which need not be satisfied a priori – we could similarly to the situation above use Ψ to remove the bad simplex Δ = x.
In our next step, we find a combinatorial (n+1)-ball B' and a new map
Ψ' B' →
such that
* ∂ B' = _S(x) ∪ D', where D' and _S(x) are combinatorial n-balls and it holds that ∂ D' = ∂_S(x) = _S(x),
* Ψ' is regular,
* Ψ'|__S(x) = ϕ|__S(x), and
* Ψ'(D') ⊂ ()^<R.
As described above, we can then use <ref> to regularly homotope ϕ inside to another map →, where ϕ|__S(x) is replaced by Ψ'|_D'. This procedures removes the bad simplex Δ and does not introduce new bad simplices because Ψ'(D') ⊂ ()^<R. As (Δ) = 0, no bad simplex other than Δ intersects _S(Δ). Hence, it is even true that for every bad simplex Θ of , we have _S(Θ) = _(Θ). It follows that the replacement preserves [it_isolation]Isolation.
We start by finding such a map Ψ' for n=2. In this case, the domain B of Ψ is a combinatorial 3-ball and we obtain B' by replacing certain 3-dimensional simplices C in B with combinatorial 3-balls C'. See <ref> for a schematic overview of this process.
All C that we replace are of the form C = x * E, where E is a 2-dimensional simplex in D and Ψ(E) is a simplex of [2] ∖[2]. We define Ψ' on their replacements C' by sending every vertex to ([2])^<R and making sure that this defines a simplicial map on every simplex of C'.
We replace a 3-dimensional simplex C in Ψ whenever it is of one of the following three types:
* Ψ(C) is a σ-additive simplex of the form v, v_1, w_1, ⟨v⃗_1 + w⃗_1 ⟩, where ω(v_1,w_1) = ± 1;
* Ψ(C) is a 2-skew-additive simplex of the form v, v_1, ⟨v⃗_1 + e⃗_i⟩, w_1, where ω(v_1,w_1) = ± 1 and 1≤ i ≤ m;
* Ψ(C) is a 2-skew-additive simplex of the form v, v_1, ⟨v⃗_1 + v⃗⟩, w_1, where ω(v_1,w_1) = ± 1.
The map Ψ is obtained by extending the weakly regular map ψ, which has image in a subcomplex of _[2](v) ≅[1][m+1].
Using the definition of weak regularity (<ref>), we see that every simplex C in B such that Ψ(C) is not contained in [2] is of one of the three types listed above (see <ref> for the types of simplices in [2] ∖[2]).
Using the notation for Ψ(C) introduced above, we now replace each of these simplices as follows:
* If the image of C = x*E is σ-additive of the form v, v_1, w_1, ⟨v⃗_1 + w⃗_1 ⟩, we define C' by attaching a second 3-dimensional simplex to C along its facet E and name the new vertex t_C as depicted in <ref>. It is easy to see that C' is a combinatorial 3-ball.
We extend the map Ψ' to C' by setting Ψ'(t_C) = w, where w is chosen such that v⃗,w⃗,v⃗_1,w⃗_1 is a symplectic basis and (w)<R. To choose such a w, first pick any w' such that v,w',v_1,w_1 is a symplectic basis, then take w⃗ = w⃗' + av⃗ with appropriate a ∈ such that 0≤(w⃗) < R.[Note that this is the integral-valued rank for vectors defined at the beginning of <ref>.] This is possible using the Euclidean algorithm and in particular implies that (w) = |(w⃗)| <R. It is not hard to check that this defines a simplicial map C'→[2], where both maximal simplices of C' get mapped to σ-additive simplices.
* If the image of C = x*E is 2-skew-additive of the form v, v_1, ⟨v⃗_1 + e⃗_i⟩, w_1 for some 1≤ i ≤ m, we also attach a second 3-dimensional simplex to C along its facet E and name the new vertex t_C. We call the resulting simplicial complex C”. Choose a w such that v,w,v_1,w_1 is a symplectic basis and (w)<R again. We want to send t_C to w but this would not result in a regular map on C” as it would lead to two 2-skew-additive simplices in the image of C”.
We remedy this issue by replacing the interior of C” by three 3-dimensional simplices that have an edge {x, t_C} in common as indicated in <ref>. This simplicial complex shall be C'. Again, it is easy to verify that C' is a combinatorial 3-ball. We can now define Ψ'(t_C) = w as above and in fact this defines a simplicial map C'→[2]. It maps the three maximal simplices to two σ^2 simplices and one mixed simplex.
* If the image of C = x*E is 2-skew-additive of the form v, v_1, ⟨v⃗_1 + v⃗⟩, w_1, we obtain C' by adding to C two 3-simplices with two new vertices t_C and s_C as depicted in <ref>. As observed in <ref>, the prism C' is a combinatorial 3-ball. We define the image of t_C to be w∈[2] such that v⃗, w⃗,v⃗_1,w⃗_1 is a symplectic basis with 0≤(w⃗),(w⃗_1)<R and the image of s_C to be ⟨w⃗ - w⃗_1 ⟩∈[2]. This yields a simplicial map C'→[2] whose image is a “prism” obtained as the union of two 2-skew-additive simplices and one skew-σ^2 simplex. Note that as 0≤(w⃗),(w⃗_1)<R, we have (Ψ'(s_C)) = |(w⃗ - w⃗_1)|<R.
Now, as described above and indicated in <ref>, we let B' be the complex obtained from B by replacing the simplices C by the 3-balls C'.
On the level of geometric realisations, each such replacement amounts to attaching to |B| another 3-ball (the “lower part” of the complexes C' in <ref>, <ref> or <ref>) along a 2-ball (the geometric realisation of the simplex E). By <cit.>, the result is again a (topological) 3-ball. Using <ref> and the fact that the C' are combinatorial 3-balls, it follows that B' is again a combinatorial 3-ball.
Note that all the new C' are chosen such that their boundary is partitioned into _∂ C(x) and a combinatorial 2-ball E'.
It follows that the boundary of B' decomposes into _∂ B'(x) = _S(x) and a combinatorial 3-ball D' that is obtained by replacing the simplices E of D by the combinatorial balls E'.
That Ψ' is indeed a well-defined simplicial map follows because this is true on each C' discussed above.
To see that Ψ'(D') ⊆ ([2])^<R, it suffices to note that in all of the three cases above, the subcomplex E'⊂∂ C' gets mapped to ([2])^<R – as mentioned above, all simplices of D' that were already contained in D get mapped to ([2])^<R.
Next we need to verify that Ψ' is regular. If Σ is a simplex of B' such that Ψ'(Σ) is a 2-dimensional σ-additive simplex, then Σ must be equal to E for one of the simplices C of the first type described above. In this case, by construction _B'(Σ) = C' and Ψ'|_C' is a σ-additive cross map.
Similarly, every Σ⊂ B' mapping to a 3-dimensional σ^2 simplex is contained in a C' of the second type described above. So for σ^2-regularity, it suffices to note that Ψ'|_C' is injective.
Lastly, every simplex mapping to a skew-additive, 2-skew-additive or skew-σ^2 simplex of B' is contained in a C' of the third type described above. Here, regularity is fulfilled because Ψ'|_C' is a prism cross map.
We now consider the case n ≥ 3. Recall that we have a simplicial map Ψ B→, where B is a combinatorial (n+1)-ball with ∂ B = D ∪_S(x), ψ = Ψ|_D and we want to use this to construct a new combinatorial ball B' with a map Ψ' B'→ that satisfies the properties listed on induction_step_properties_Dprime. Similarly to the n=2 case, we replace certain combinatorial (n+1)-balls C of B with ∂ C = _∂ C (x) ∪ E and Ψ(E) ⊈ by combinatorial balls C' with ∂ C' = _∂ C' (x) ∪ E' and Ψ'(E') ⊆ ()^<R (see <ref> for a schematic overview).
We perform such a replacement whenever there is a subcomplex Σ in B such that Ψ(Σ) is of one of the following forms
* a 2-dimensional σ-additive simplex v_1, w_1, ⟨v⃗_1 + w⃗_1 ⟩;
* a 3-dimensional σ^2 simplex v_1, w_1, v_2, w_2;
* a 3-dimensional prism with vertex set v_1, w_1, v_2, w_2, ⟨v⃗_1 + v⃗_2 ⟩, ⟨w⃗_1 - w⃗_2 ⟩;
* a 2-dimensional skew-additive simplex v_1, ⟨v⃗_1 + v⃗⟩, w_1.
(In all of the above, the symplectic pairings are as suggested by the notation, i.e. ω(v_1,w_1) = ± 1 and ω(v_2,w_2) = ± 1.)
As ψ maps x to v and D into _(v), any such Σ is necessarily contained in D.
Let Σ be such a subcomplex, let
E ≔ _D(Σ) and C ≔ x ∗ E ⊆ B.
As ψ D →^<R_(v) is weakly regular, we have that Ψ|__D(Σ) = ψ|__D(Σ) is a cross map of the corresponding type. Let v⃗_1, w⃗_1, …, v⃗_n-1, w⃗_n-1 denote the symplectic basis in its image.
We distinguish two cases. The first is that Ψ(Σ) is a σ-additive simplex, a σ^2 simplex or a prism. In this case, we perform a construction similar to the case of σ-additive simplices for n=2 (<ref> on it_induction_step_n2_sigma_additive).
To obtain C' from C in this case, we cone off E by a new vertex t_C (as depicted in <ref>). We set Ψ'(t_C) = w, where w∈ is a vertex such that v⃗, w⃗, v⃗_1, w⃗_1, …, v⃗_n-1, w⃗_n-1 is a symplectic basis and (w)<R. (We obtain such a w just as in the case n=2 discussed above.)
The complex C' is a suspension of E along the points x and t_C and hence a combinatorial ball by <ref>. It is easy to see that Ψ|_C' is a cross map of the appropriate type.
We have ∂ C' = _∂ C' (x) ∪ E', where E' = t_C ∗∂ E.
As Ψ'|_C' is a cross map, we know by <ref> that Ψ'(E') is contained in . Also by construction, every vertex of E' maps to a line of rank less than R, so in fact Ψ'(E')⊆ ()^<R.
The second case is that Ψ(Σ) is a skew-additive of the form v_1, ⟨v⃗_1 + v⃗⟩, w_1.
In other words, Ψ(x ∗Σ) is the 2-skew-additive simplex v, v_1, ⟨v⃗_1 + v⃗⟩, w_1.
As ψ|__D(Σ) is an external 2-skew-additive cross map (see <ref>), _D(Σ) is of the form Σ∗ C_n-1. We extend the 3-simplex x ∗Σ as in the corresponding third case for n=2 (<ref> on page it_induction_step_n2_2skew_additive): We add two new vertices t_C and s_C as indicated in <ref> and obtain a 3-dimensional prism Σ'.
We then define
C' Σ' ∗ C_n-1.
Again, this is a combinatorial ball by <ref>.
We set Ψ'|_C_n-1 = Ψ|_C_n-1, let Ψ'(t_C) = w as above and Ψ'(s_C) = ⟨w⃗ - w⃗_1 ⟩, where the sign of w⃗_1 is chosen such that (⟨w⃗ - w⃗_1 ⟩)<R (such a choice exists by the same argument as in <ref> on it_induction_step_n2_2skew_additive). It is not hard to check that Ψ'|_C' is a prism cross map, so as above, it follows that Ψ'(E')⊆ ()^<R.
Performing these replacements for all[We can perform these replacements independently of one another as two distinct such subcomplexes E can only intersect in their boundaries by <ref>. These boundaries are not modified by the replacements.] such Σ, we obtain a simplicial map Ψ' B'→, where B' is a combinatorial[To see that B' is indeed a combinatorial ball, one again first verifies that |B'| is a topological ball and then uses <ref>.] (n+1)-ball with ∂ B' = _S(x) ∪ D' and Ψ'(D')⊆ ()^<R.
To see that this map is regular, it suffices to note again that every simplex mapping to a σ-additive, σ^2, skew-additive, 2-skew-additive or skew-σ^2 simplex is contained in some of the C' above. Regularity then follows because Ψ'|_C' is a cross map. Applying <ref> as above then finishes the cases n ≥ 3 and concludes the proof.
§ ISOLATING BAD SIMPLICES
The aim of this section is to prove <ref>, i.e. to show that every maximal bad simplex in a map ϕ S^k → can be isolated.
Throughout this section, we keep the [eq_standing_assumption_nmkR]Standing assumption from <ref>, which give the context in which <ref> is stated. That is, n, k, m and R are natural numbers such that n≥ 2, m≥ 0, k≤ n and R≥ 1.
We keep the convention introduced in <ref>.
§.§ "03C3-regularity
We need another notion of cross map and associated regularity, similar to the one in <ref> and <ref>. This is the concept of σ-regularity that plays an important role in Putman's work <cit.>:
* A σ cross map is a simplicial map ϕΔ^1∗ C_k-1→ with the following property: Let x_1, y_1, …, x_k, y_k be the vertices of Δ^1∗ C_k-1. Then there is a symplectic summand of ^2(m+n) with a symplectic basis v⃗_1, w⃗_1, …, v⃗_k, w⃗_k such that ϕ(x_i)= v_i and ϕ(y_i) = w_i for all i. See <ref>.
* Let M be a combinatorial manifold. A simplicial map ϕ M → is called σ-regular if the following holds: If Δ is simplex of M such that ϕ(Δ) is a minimal (i.e. 1-dimensional) σ simplex, then ϕ|__M(Δ) is a σ cross map.
* Let ϕ S → be a simplicial map from a combinatorial k-sphere S. We say that ϕ is σ-regularly nullhomotopic (in ) if there is a combinatorial ball B with ∂ B = S and a regular map Ψ B → such that Ψ|_S = ϕ.
The notion of a σ cross map is closely related to the one of a σ^2 cross map:
Let ϕ C → be a σ cross map and D^1 the 1-ball with its standard simplicial structure consisting of two vertices and one edge. Let {v_k+1, w_k+1} be a σ simplex such that v⃗_1, w⃗_1, …, v⃗_k, w⃗_k, v⃗_k+1, w⃗_k+1 is a partial symplectic basis of ^2(m+n).
Then we obtain a σ^2 cross map
Φ C ∗ D^1 →
by setting Φ|_C = ϕ and Φ(D^1) = v_k+1, w_k+1.
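To illustrate this in the smallest case k = 1: here C is the single edge {x_1, y_1} mapping to the σ edge {v_1, w_1}, the join C ∗ D^1 is a single 3-simplex, and Φ maps it to the σ^2 simplex {v_1, w_1, v_2, w_2}.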
We now set up some notation and elementary observations about σ-regular maps that is used in this section.
Let ϕ S → be σ-regular and Δ a simplex of S such that ϕ(Δ) is a σ edge. Then by definition, the restriction of ϕ to C _S(Δ) is a σ cross map C→.
We denote the vertices of C by x_1, y_1, …, x_k,y_k and their images by v_1, w_1, …, v_k, w_k, where for all i, v_i,w_i is a symplectic pair, i.e. ω(v_i, w_i)=± 1. We always assume that v_1,w_1 = ϕ(Δ) is the (unique) σ-edge contained in ϕ(C).
Note that C is a k-ball that is the union of 2^k-1 simplices of dimension k, each of which gets mapped by ϕ to
{v_1, w_1}∪{v_i | i∈ I}∪{w_j | j ∈ J},
for some disjoint, possibly empty, sets I,J such that I∪ J = {2, …, k}. Its boundary ∂ C is given by all those (k-1)-simplices whose image does not contain both v_1 and w_1, i.e. is of the form
{v_1}∪{v_i | i∈ I}∪{w_j | j ∈ J} or {w_1}∪{v_i | i∈ I}∪{w_j | j ∈ J}.
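For k = 2, for example, C consists of the two triangles {x_1, y_1, x_2} and {x_1, y_1, y_2}, mapping to {v_1, w_1, v_2} and {v_1, w_1, w_2}, and ∂ C is the 4-cycle formed by the edges {x_1, x_2}, {x_2, y_1}, {y_1, y_2}, {y_2, x_1}.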
As mentioned above, we always assume that the σ edge in ϕ(C) is given by v_1, w_1. However, as we are only interested in ϕ up to regular homotopy, this does not matter too much because there is the following “flip” that allows us to exchange v_1, w_1 with any other symplectic pair in ϕ(C):
Let 2≤ i ≤ k and let C' be the simplicial complex obtained from C by “replacing the edge {x_1, y_1} with the edge {x_i, y_i}”. That is, C' is the k-ball defined as follows: Let Δ_i be the 1-simplex with vertices x_i, y_i and
C_k-2 ≔ ∗_j∈{1,…, k}∖{1,i}∂{x_j, y_j}≅ S^k-3,
where ∂{x_j, y_j} is a copy of S^0 with vertices x_j, y_j.
We define C' as the join
C' = (∂Δ) ∗Δ_i ∗ C_k-2.
There is an obviously σ-regular map C'→ that agrees with ϕ|_C on the vertex set of C (which is also the vertex set of C'). Furthermore, the following <ref> shows that there is a regular homotopy that allows us to replace ϕ|_C by this map, see also <ref>.
There are a combinatorial (k+1)-ball B with vertices x_1, y_1,…, x_k, y_k and a regular map Ψ B→ that agrees with ϕ on x_1, y_1, …, x_k, y_k and such that ∂ B = C ∪ C'.
We define B as the join
B = Δ∗Δ_i ∗ C_k-2.
So B has the same vertices as C and C', but “an additional edge between x_i and y_i”; that is B ≅ D^1 ∗ D^1 ∗ S^k-3, whereas C ≅ D^1 ∗ S^0 ∗ S^k-3 and C' ≅ S^0 ∗ D^1 ∗ S^k-3.
Define Ψ B → to be the map that agrees with ϕ on the vertices of B (which are also the vertices of C). Using the description of ϕ(C) at the beginning of this subsection, the image Ψ(B) is a union of 2^k-2-many σ^2 simplices of dimension (k+1).
The map Ψ is a σ^2 cross map and in particular regular.
By <ref>,
∂ B = (∂Δ∗Δ_i ∗ C_k-2) ∪ (Δ∗∂Δ_i ∗ C_k-2) = C' ∪ C,
so we have the claimed decomposition of the boundary of B.
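In the smallest case k = 2, for instance, C_0 is empty, B = Δ∗Δ_2 is a single 3-simplex on the vertices x_1, y_1, x_2, y_2, Ψ maps it to the σ^2 simplex {v_1, w_1, v_2, w_2}, and its boundary consists of the two triangles of C = Δ∗∂Δ_2 together with the two triangles of C' = ∂Δ∗Δ_2.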
This depicts how regular maps arise as homotopies between σ-regular maps. With the intuition given in <ref>, the cross maps Ψ|_C → and Ψ|_C'→ can be thought of as two polyhedral cells (in the shape of cross polytopes). Together, they form the boundary of the higher-dimensional polyhedral cell Ψ B → that yields a homotopy between them.
§.§ Establishing "03C3-regularity
We keep the [eq_standing_assumption_nmkR]Standing assumption that n≥ 2, m≥ 0, k≤ n and R≥ 1.
In this first step to proving <ref>, we show that every map ϕ S^k → is regularly homotopic to a σ-regular map.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map. Then ϕ is in regularly homotopic to a σ-regular map → ()^≤ R.
Our proof of <ref> relies on a result of Putman.
In <cit.>, he shows that the inclusion ↪ induces a trivial map on π_k for all k≤ n-1. In fact, his proof shows more, namely:
Let 0≤ k ≤ n-1 and R∈∪∞. Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map. Then ϕ is σ-regularly nullhomotopic in ()^≤ R.
From this, we obtain σ-regularity using a “cut-out” argument very similar to the one in the proof <ref>:
Call a simplex Δ of S non-regular in ϕ if ϕ(Δ) is a σ edge and ϕ|__S(Δ) is not a σ cross map.
Assume that there are non-regular simplices in ϕ and let Δ be one that is maximal with respect to inclusion among all non-regular simplices.
We will show how to homotope ϕ to a map that has one less non-regular simplex. Iterating this procedure then leads to a map without non-regular simplices, which proves the claim.
As S is a combinatorial k-sphere, the link _S(Δ) is a combinatorial (k-(Δ)-1)-sphere.
As ϕ is simplicial and Δ is maximal among non-regular simplices, we have ϕ(_S(Δ))⊆ (_(ϕ(Δ)))^≤ R. By <ref>, the latter is isomorphic to ([n-1][m])^≤ R' for some R'∈∪∞.
We consider two cases: If ϕ|_Δ is not injective, then (Δ)≥ 2, so _S(Δ) is a combinatorial sphere of dimension k-(Δ)-1 ≤ n-3. By <ref>, the complex (_(ϕ(Δ)))^≤ R≅ ([n-1][m])^≤ R' is (n-3)-connected. Hence by <ref>, there are a combinatorial (k-(Δ))-ball D with ∂ D = _S(Δ) and a map
ψ D → (_(ϕ(Δ)))^≤ R
such that ψ|__S(Δ) = ϕ|__S(Δ).
By <ref>, the complex BΔ * D is a combinatorial (k+1)-ball whose boundary can be decomposed as
∂ B = _S(Δ) ∪ (∂Δ * D) .
By <ref>, both _S(Δ) and (∂Δ * D) are combinatorial k-balls and their intersection is given by
∂_S(Δ) = ∂Δ∗_S(Δ) = ∂Δ * ∂ D= ∂ (∂Δ * D).
We define a map
Ψ B→ ()^≤ R
by letting Ψ|_Δ = ϕ|_Δ and Ψ|_D = ψ|_D. As Ψ has image in , it is regular.
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→
that is obtained by replacing ϕ|__S(Δ) by
Ψ|_∂Δ * D∂Δ * D →ϕ(Δ) * (_(ϕ(Δ)))^≤ R⊆ ()^≤ R.
As ϕ and Ψ have image in ()^≤ R, so does .
We claim that every non-regular simplex Θ of is also a non-regular simplex of ϕ. By <ref>, it suffices to consider Θ⊆∂Δ * D.
As
(D) = ψ (D) ⊆^<R_(v),
we then have Θ⊂∂Δ⊆∂_S(Δ) ⊆ S. So Θ is a non-regular simplex of ϕ.
This implies that has one less non-regular simplex than ϕ (namely the simplex Δ that was removed).
The more difficult case is the one where ϕ|_Δ is injective. Here, we know that _S(Δ) is a combinatorial sphere of dimension k-(Δ)-1 ≤ n-2, so we cannot necessarily do the same replacement.
But now by <ref> and <ref>, the map
_S(Δ) → (_(ϕ(Δ)))^≤ R≅ ()^≤ R'
is σ-regularly nullhomotopic in (_(ϕ(Δ)))^≤ R≅ ())^≤ R'. I.e. there are a combinatorial ball D with ∂ D = _S(Δ) and a σ-regular map
ψ D → (_(ϕ(Δ)))^≤ R
such that ψ|__S(Δ) = ϕ|__S(Δ).
Just as before, we can extend this to a map
Ψ B = Δ∗ D → ()^≤ R
by setting Ψ|_Δ = ϕ|_Δ and Ψ|_D = ψ|_D. Again, B is a combinatorial ball with boundary ∂ B = _S(Δ) ∪ (∂Δ * D) and Ψ agrees with ϕ on _S(Δ).
The image of Ψ contains no skew-additive, 2-skew-additive or σ-additive simplices. The map ψ is σ-regular and the isomorphisms in <ref> identify σ simplices in ()^≤ R' with σ^2 simplices in (_(ϕ(Δ)))^≤ R. This implies that Ψ is regular (cf. <ref>).
Furthermore, as ϕ|_Δ is injective, we have ϕ(∂Δ) ⊆∂ϕ(Δ). Hence, Ψ|_∂Δ∗ D has image in
(Ψ|_∂Δ∗ D) ⊆∂ϕ(Δ) * (_(ϕ(Δ)))^≤ R⊆ ()^≤ R.
Invoking <ref>, we see that ϕ is regularly homotopic to a simplicial map
→ ()^≤ R
that is obtained by replacing ϕ|__S(Δ) by
Ψ|_∂Δ∗ D∂Δ * D → ()^≤ R.
We claim that every non-regular simplex Θ of is also a non-regular simplex of ϕ. By <ref>, it again suffices to consider Θ⊆∂Δ∗ D. We show that every such simplex is regular. By construction, every simplex Θ of ∂Δ∗ D that maps to a σ edge must be contained in D.
The map Ψ|_D is equal to ψ, which is σ-regular. So ψ|__D(Θ) is a σ cross map. Furthermore, as Θ⊆ D, we have
_∂Δ∗ D(Θ) = ∂Δ∗_D(Θ)
and Ψ maps ∂Δ to a symplectic pair v,w that extends the partial symplectic basis given by the vertices of ψ(_D(Θ)). Hence, |__∂Δ∗ D(Θ) is a σ cross map as well, so Θ is a regular simplex. This implies that has one less non-regular simplex than ϕ (namely the simplex Δ that was removed).
§.§ Establishing "03C3-smallness
We keep the [eq_standing_assumption_nmkR]Standing assumption that n≥ 2, m≥ 0, k≤ n and R≥ 1.
In this section, we show how to regularly homotope a σ-regular map ϕ S^k→ to a map such that in the image of , no vertex of rank R is contained in a σ edge. In fact, we establish the following slightly stronger condition.
Let S be a combinatorial k-sphere.
A σ-regular map ϕ S→ is called σ-small if for every simplex Δ of S such that ϕ(Δ) is a σ edge, the following properties hold:
* The image of the σ cross map ϕ|__S(Δ) contains at most one vertex of rank R.
* For all x∈Δ, we have (ϕ(x))<R.
Recall that as ϕ is σ-regular, the simplex Δ in the above definition is necessarily an edge.
We establish σ-smallness with three lemmas in this subsection. In each one, we show how to obtain a map S^k → such that every Δ mapping to a σ edge has a certain desired property that brings the map closer to being σ-small. To do so, we each time remove step by step all “critical” simplices, i.e. those Δ that do not have the desired property. The following is the first of these lemmas.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a σ-regular map. Then ϕ is in regularly homotopic to a σ-regular map → ()^≤ R such that the following property holds.
* If Δ⊂ is a simplex such that ϕ'(Δ) is a σ edge, then in the image of the σ cross map |__(Δ), every symplectic pair v_i, w_i satisfies (v_i)< R or (w_i)< R.
Let Δ be a simplex such that ϕ(Δ) is a σ edge. We keep the notation set up in <ref>, so C = _S(Δ), the vertices in ϕ(C) are v_1, w_1, …, v_k, w_k and ϕ(Δ)= v_1,w_1 is the σ-edge contained in ϕ(C).
Assume that Δ is critical, i.e. there is 1≤ i ≤ k such that (v_i)=R=(w_i).
Using <ref> to replace the edge v_1,w_1 by v_i,w_i, we can assume that i=1.
Choose representatives v⃗_1 and w⃗_1 such that (v⃗_1) = R = (w⃗_1) and define w_1' ≔ ⟨w⃗_1 - v⃗_1 ⟩. This is a vertex in and (w_1') = 0.
Let
B ≔ t ∗ C
be the combinatorial (k+1)-ball obtained as a join of C with a new vertex t.
Define a map
Ψ B →
by letting Ψ|_C = ϕ|_C and Ψ(t) ≔ w_1'.
We claim that Ψ is well-defined and regular: It follows from the description of the simplicial structure of C at the beginning of this subsection that B = t ∗ C is a union of 2^k-1 simplices of dimension (k+1), each of which gets mapped to
{v_1, w_1, w_1' = ⟨w⃗_1 - v⃗_1 ⟩}∪{v_i | i∈ I}∪{w_j | j ∈ J}
for some disjoint sets I,J such that I∪ J = {2, …, k}.
See <ref> for a low-dimensional picture.
It is easy to check that these are all σ-additive simplices. It follows that Ψ has image in , is a σ-additive cross map and hence is indeed regular. By <ref>, the boundary of B decomposes as
∂ B = (t ∗∂ C ) ∪ C
Both t ∗∂ C and C are combinatorial k-balls and their intersection is given by
∂ (t ∗∂ C ) = ∂ C.
Using the description of ∂ C at the beginning of this subsection, we have Ψ(t ∗∂ C ) ⊂ ()^≤ R.
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→ ()^≤ R
that is obtained by replacing ϕ|__S(Δ) by Ψ|_t ∗∂ C.
Using the description of ∂ C above, one sees that t ∗∂ C is the union of _t ∗∂ C ( x_1, t ) and _t ∗∂ C ( y_1, t ), where ϕ(x_1)=v_1 and ϕ(y_1) = w_1. The restriction of Ψ to each of those is a σ cross map and the vertices in their image are v_1, w_1', v_2, w_2, …, v_k, w_k and w_1, w_1', v_2, w_2, …, v_k, w_k, respectively. In particular, both of these images contain one less symplectic pair with two vertices of rank R than ϕ(C).
Hence, replacing ϕ with removes the critical simplex Δ from ϕ and only produces critical simplices that have fewer symplectic pairs with two vertices of rank R in the images of their stars.
Iterating this procedure yields a map with the desired property.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a σ-regular map. Then ϕ is in regularly homotopic to a σ-regular map → ()^≤ R such that the following property holds.
* If Δ⊂ is a simplex such that (Δ) is a σ edge, then in the image of the σ cross map |__(Δ), there is at most one vertex of rank R.
Again, let Δ be a simplex such that ϕ(Δ) is a σ edge and keep the notation from <ref>. So C = _S(Δ) has vertices x_1, y_1, …, x_k, y_k and their images are v_1, w_1, …, v_k, w_k, where ϕ(Δ)= v_1,w_1.
Using <ref>, we can assume that (w_i) < R for all i.
Now assume that Δ is critical, i.e. there are 1≤ i ≠ j ≤ m such that (v_i) = (v_j) = R.
Using <ref> to replace the edge v_1,w_1 by v_i,w_i, we can assume that i=1 and j=2.
We will show how to remove C from ϕ while only creating σ cross maps that have fewer vertices of rank R in their image than ϕ|_C.
The set v_1,v_2 is a standard edge in ϕ(C) both of whose vertices have rank R.
Choose representatives v⃗_1, v⃗_2 such that (v⃗_1) = R = (v⃗_2) and w⃗_1, w⃗_2 such that ω(v⃗_1, w⃗_1) = 1 = ω(v⃗_2, w⃗_2).
Let
v⃗ ≔ v⃗_1 - v⃗_2 and w⃗ ≔ w⃗_1 + w⃗_2.
Both v and w are vertices in and ω(v,w) = 0. We have (v) = 0.
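Spelling out the vanishing of ω(v,w): since v⃗_1, w⃗_1, v⃗_2, w⃗_2 are part of a symplectic basis, the cross pairings vanish, and by the choice of representatives ω(v⃗_1, w⃗_1) = 1 = ω(v⃗_2, w⃗_2), so
ω(v⃗, w⃗) = ω(v⃗_1 - v⃗_2, w⃗_1 + w⃗_2) = ω(v⃗_1, w⃗_1) - ω(v⃗_2, w⃗_2) + ω(v⃗_1, w⃗_2) - ω(v⃗_2, w⃗_1) = 1 - 1 + 0 - 0 = 0.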
Step 1:
We first show that we can assume that
(w) = (⟨w⃗_1 + w⃗_2⟩)<R.
Assume that <ref> does not hold. Then 1 ≤(w_1), (w_2) < R and
R ≤(⟨w⃗_1 + w⃗_2⟩) < 2R.
As (v_1)= R, this implies that for some ϵ∈ -1,1,
(⟨w⃗_1 + w⃗_2 + ϵv⃗_1⟩) < R.
Define w⃗_1' ≔ w⃗_1 + ϵv⃗_1. We then also have (w_1')<R.
Let B ≔ t ∗ C and define
Ψ B = t ∗ C →
by setting Ψ|_C = ϕ|_C and Ψ(t) ≔ w_1'. It follows as in the proof of <ref> that Ψ is a σ-additive cross map and in particular regular.
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→ ()^≤ R
that is obtained by replacing ϕ|_C by Ψ|_t ∗∂ C.
We need to verify that replacing ϕ by only creates cross maps that are “better” than ϕ|_C. By <ref>, it suffices to consider cross maps in Ψ|_t ∗∂ C.
For this, first note that the image of Ψ|_t ∗∂ C contains two σ edges, namely Ψ( x_1,t )= v_1, w_1' and Ψ( y_1,t )= w_1, w_1' (see <ref>). Hence, there are also two corresponding σ cross maps. The vertices in the images of these σ cross maps are v_1, w_1', v_2, w_2, …, v_k, w_k and w_1, w_1', v_2, w_2, …, v_k, w_k, respectively. In particular, as (w_1')<R, these still satisfy the property that at least one vertex of every symplectic pair has rank less than R (this is the condition we achieved in <ref>; it shows that the cross maps are not “worse” than ϕ|_C in this sense).
As both w_1 and w_1' have rank less than R, the σ cross map in Ψ|__t ∗∂ C that corresponds to the edge {y_1,t} has fewer vertices of rank R in its image than ϕ|_C, so it is “better” in this sense.
The other σ cross map in Ψ|__t ∗∂ C corresponds to the edge x_1,t. It has the same number of rank R vertices in its image as ϕ|_C. However, the vertices in the image of this map are v_1, w_1', v_2, w_2, …, v_k, w_k. Here, we still have (⟨v⃗_1- v⃗_2 ⟩) = 0, but also
(⟨w⃗_1'+w⃗_2⟩) = (⟨w⃗_1 + w⃗_2 + ϵv⃗_1⟩) < R.
Hence, this cross map satisfies the condition of <ref>, which was violated by ϕ|_C and it is “better” in this sense.
Step 2:
This allows us to assume that <ref> holds, i.e. (w)<R for w = ⟨w⃗_1+ w⃗_2 ⟩ as defined in <ref>.
Let t and s be new vertices and define a combinatorial 3-ball B that looks as follows: It has six vertices x_1, x_2, y_1, y_2, t, s and is the union of three 3-simplices, namely {x_1, x_2, y_1, t}, {x_1, y_1, y_2, t} and {y_1, y_2, t, s}, see <ref>.
Let
B̃ ≔ B ∗ C_k-2
be the combinatorial (k+1)-ball that is obtained by joining B with
C_k-2 ≔ ∗_i=3^k ∂{x_i, y_i}≅ S^k-3.
The vertex set of B̃ consists of the vertices of C = _S(Δ) and the newly added vertices t, s.
We define a map
ΨB̃→
on the vertices by letting Ψ|_C = ϕ|_C, Ψ(t) = v and Ψ(s) = w for v and w as defined in <ref>.
Using the description of C at the beginning of this subsection, the maximal simplices of B̃ are all of the form
Θ∪{x_i | i∈ I}∪{y_j | j ∈ J},
where Θ is a maximal simplex of B and I,J are disjoint sets such that I∪ J = {3, …, k}. The images of these simplices look as follows:
Ψ( x_1,x_2, y_1, t ∪ x_i | i∈ I∪ y_j | j ∈ J ) =
v_1,v_2, w_2, v = ⟨v⃗_1 - v⃗_2 ⟩∪ v_i | i∈ I∪ w_j | j ∈ J
— 2-skew-additive simplices;
Ψ( x_1, y_1, y_2, t∪ x_i | i∈ I∪ y_j | j ∈ J ) =
v_1, w_1, w_2, v = ⟨v⃗_1 - v⃗_2 ⟩∪ v_i | i∈ I∪ w_j | j ∈ J
— skew-σ^2 simplices;
Ψ( y_1, y_2, t, s ∪ x_i | i∈ I∪ y_j | j ∈ J ) =
w_1, w_2, v = ⟨v⃗_1 - v⃗_2 ⟩, w = ⟨w⃗_1 + w⃗_2 ⟩∪ v_i | i∈ I∪ w_j | j ∈ J )
— 2-skew-additive simplices.
This shows that the definition we gave for Ψ on the vertices actually defines a simplicial map with image in .
It also shows that this map is regular, namely a prism cross map.
The boundary of B is a union of the subcomplex D_1 spanned by x_1, y_1, x_2, y_2, which is a union of two 2-dimensional simplices that are both mapped to σ simplices, and a subcomplex D_2 consisting of six 2-dimensional simplices, four of which are mapped to σ simplices and two of which are mapped to 2-additive simplices (see <ref>). We have C = _S(Δ) = D_1 ∗ C_k-2, so using <ref>, the boundary of B̃ decomposes as
∂B̃ = ∂ B ∗ C_k-2 = D_1 ∗ C_k-2∪(D_2 ∗ C_k-2) = C ∪ (D_2 ∗ C_k-2).
Both C = D_1 ∗ C_k-2 and D_2 ∗ C_k-2 are combinatorial k-balls and their intersection is given by
∂ C = (∂ D_1) ∗ C_k-2 = (∂ D_2) ∗ C_k-2 = ∂ (D_2 ∗ C_k-2).
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→
that is obtained by replacing ϕ|_C by Ψ|_D_2 ∗ C_k-2.
From the description of D_2 given above, it is easy to verify that Ψ|_D_2 ∗ C_k-2 has image in ()^≤ R, so the same is true for .
We need to verify that is still σ-regular and replacing ϕ by only creates cross maps that are “better” than ϕ|_C. By <ref>, it suffices to consider cross maps in Ψ|_D_2 ∗ C_k-2.
The complex D_2 ∗ C_k-2 contains two simplices mapping to σ edges, namely y_1, t, which maps to w_1, v, and y_2, t, which maps to w_2, v. The restriction of Ψ to the stars of these simplices in D_2 ∗ C_k-2 is a σ cross map. This implies that Ψ|_D_2 ∗ C_k-2, and hence , is σ-regular. Furthermore, these maps still satisfy the property of <ref>, i.e. at most one element of each symplectic pair in the image has rank R. But now all vertices of the σ edges w_1, v and w_2, v have rank less than R.
This implies that both Ψ|__D_2 ∗ C_k-2( y_1, t ) and Ψ|__D_2 ∗ C_k-2( y_2, t ) have fewer vertices of rank R in their image than ϕ|_C.
So replacing ϕ by , we get closer to removing all critical simplices. Iterating this procedure yields the desired map .
We are now ready to show the final result of this subsection.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a σ-regular map. Then ϕ is in regularly homotopic to a σ-regular map → ()^≤ R such that the following property holds.
* is σ-small.
We can assume that ϕ satisfies the property of <ref>, i.e. the first point in <ref>.
Let Δ = x_1, y_1 be a simplex such that ϕ(Δ) = v_1,w_1 is a σ edge. We keep again the notation from <ref> and write C = _S(Δ), denote its vertices by x_1, y_1, …, x_k, y_k and their images by v_1, w_1, …, v_k, w_k.
Assume that Δ is critical, i.e. its image v_1,w_1 contains (exactly) one vertex of rank R.
As ϕ satisfies the first point of <ref>, both (v_2) and (w_2) are less than R (this uses the Standing assumption that n≥ 2).
Now using <ref>, we find a regular homotopy that replaces v_1, w_1 by v_2, w_2. This modifies ϕ|_C such that the result is a σ cross map Ψ|_C' whose image contains the σ edge v_2, w_2. As both of these vertices have rank less than R, this removes a critical simplex of ϕ without creating a new one. As ϕ|_C and Ψ|_C' agree on the vertex set, this preserves the first point in <ref>. Iterating this, we obtain the desired map ϕ'.
§.§ Removing edgy simplices
We keep the Standing assumption that n≥ 2, m≥ 0, k≤ n and R≥ 1.
We now prove <ref>. This is done in several steps. Much of it is similar to <cit.>. For those parts where the proofs carry over almost verbatim, we provide outlines here and refer the reader to <cit.> for more details.
Let S be a combinatorial k-sphere and ϕ S → a simplicial map.
A simplex Δ of S is called edgy if ϕ(Δ)={ v_0, v_1}, v_0≠ v_1, is an edge such that (v_0) = (v_1) = R.
This coincides with the definition of edgy simplices in <cit.>.
A necessary condition to make sure that bad vertices are isolated in the sense of <ref> is that ϕ S → has no edgy simplices. Assuring that this holds is our main aim before we prove <ref> at the end of this subsection.
It is not hard to see that if Δ is edgy, then ϕ(Δ)= v_0, v_1 is either a standard simplex, a 2-additive simplex where v⃗_0 = v⃗_1 ±e⃗_i for some 1 ≤ i ≤ m, a 3-additive simplex where v⃗_0 = v⃗_1 ±e⃗_i ±e⃗_j for some 1 ≤ i ≠ j ≤ m or a σ simplex where ω(v_0,v_1) = ± 1 (cf. <ref>).
In the previous <ref>, we already showed how to remove edgy simplices from ϕ whose image is a σ simplex.
The remaining three cases all correspond to simplex types that are also present in the complex (cf. <ref>). This allows us to closely follow the arguments in <cit.>.
Just as in the setting of that article, we need to further control the stars of such edgy simplices before we can remove these. More precisely, we need to make sure that there are no simplices of the following type:
A simplex Δ of S is called overly augmented, if
* ϕ(Δ) is a 3-additive, double-triple or double-double simplex,
* every vertex of ϕ(Δ) has rank R or is contained in the augmentation core,
* Δ contains at least one vertex x such that (ϕ(x)) = R,
* if ϕ(Δ) is 3-additive, then for all v_0 ∈ϕ(Δ) of rank (v_0) = R, there does not exist v_1∈ϕ(Δ) and 1 ≤ i ≠ j ≤ m such that v⃗_0 = v⃗_1 ±e⃗_i ±e⃗_j.
This is the analogue of <cit.>.
In order to remove overly augmented simplices from ϕ, we will apply the following lemma several times:
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a simplicial map. Then ϕ is in regularly homotopic to a map → ()^≤ R such that the following properties hold.
* If Δ⊂ is an edgy simplex of , then Δ is a simplex of S and ϕ|_Δ = |_Δ. In particular, every edgy simplex of is also an edgy simplex of ϕ.
* If Δ⊂ is a simplex such that (Δ) is a σ simplex, then Δ is a simplex of S and ϕ|__S(Δ) = |__(Δ). In particular, if ϕ is σ-regular or σ-small, then so is .
* The map has no overly augmented simplices.
The proof works very similarly to the one of <cit.>. The idea is as follows:
First define a measure of “badness” for overly augmented simplices. In <cit.>, this measure is given by three integers a,b,c. Then successively remove overly augmented simplices, starting with the “worst” ones.
Let Δ be (a,b,c)-over augmented in the sense of <cit.>, with (a,b,c) as large as possible (in the lexicographical order). In order to remove Δ from ϕ, one modifies ϕ|__S^k(Δ) such that the image of the result is contained in ϕ(∂Δ) ∗ K(Δ), where K(Δ) is a certain subcomplex of whose vertices have better properties than those of ϕ(Δ).
The complex K(Δ) is defined as follows: If ϕ(Δ) is a double-triple or double-double simplex, we have ϕ(Δ) = v_0, …, v_l, where v_2, …, v_l is a standard simplex. Define
K(Δ) ≔ ^<R_( v_2, …, v_l ).
If ϕ(Δ) is 3-additive, we can write ϕ(Δ) = v_0, …, v_l, where v_1, …, v_l is a standard simplex and v⃗_0 = w⃗_1 + w⃗_2 + w⃗_3 for w_1, w_2, w_3∈ v_1, …, v_l, e_1, …, e_m. Let J^<R be the set of those lines in ⟨w⃗_1 + w⃗_2 ⟩, ⟨w⃗_1 + w⃗_3 ⟩, ⟨w⃗_2 + w⃗_3 ⟩ that have rank less than R. This set is non-empty by the last condition in <ref>. Define
K(Δ) ≔ J^<R∗^<R_( v_1, …, v_l ).
Similarly to <cit.>, one can check that K(Δ) is a subcomplex of _(ϕ(Δ)) and that ϕ(_S(Δ)) ⊆ K(Δ).
Next, one verifies the analogue of <cit.>, namely that K(Δ) is (_S(Δ))-connected. Using that J^<R≠∅, this is a consequence of <ref>.
Just as in <cit.>, one can now use K(Δ) to replace ϕ|__S(Δ) by a map with image in ϕ(∂Δ) ∗ K(Δ) (this is again a cut out argument similar to the proof of <ref>). It follows from <ref> that this defines a regular homotopy. (In fact, the entire homotopy takes place in ). Following <cit.>, one can verify that the result has less overly augmented simplices than ϕ and that no new edgy simplices are introduced in the process.
This replacement takes place on _S(Δ), where ϕ(Δ) is a double-triple, double-double or 3-additive simplex. As no such simplex contains a σ simplex in its star, the process does not affect the stars of simplices of S that map to σ simplices.
Iterating this procedure removes all overly augmented simplices from ϕ, which proves the claim.
We now begin to remove simplices that are edgy in the sense of <ref>. We start with edgy simplices with 3-additive image, i.e. simplices Δ such that ϕ(Δ) = v_0, v_1, where v⃗_0 = v⃗_1 ±e⃗_i ±e⃗_j for some 1 ≤ i ≠ j ≤ m.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a map that is σ-regular and σ-small. Then ϕ is in regularly homotopic to a map → ()^≤ R such that the following properties hold.
* is σ-regular.
* is σ-small.
* has no edgy simplices with 3-additive image.
The proof of this lemma is entirely parallel to the one of <cit.>, so we are brief here. By <ref>, we can assume that ϕ has no overly augmented simplices. Let Δ be an edgy simplex such that ϕ(Δ) = v_0, v_1 is 3-additive. There are representatives v⃗_0 and v⃗_1 with (v⃗_0) = R = (v⃗_1) and v⃗_0 = v⃗_1 ±e⃗_i ±e⃗_j for some 1 ≤ i ≠ j ≤ m. Set v ≔ ⟨v⃗_1 ±e⃗_i ⟩, such that {v_0, v_1, v} is a double-triple simplex.
Using that ϕ has no overly augmented simplices, one can verify that ϕ(_S(Δ)) is contained in _( v_0, v_1, v ).
Now alter S on _S(Δ) by adding a new point t to the barycentre of Δ and subdividing the simplices in _S(Δ) accordingly (as described in <cit.>). The result is a combinatorial sphere , see <ref>. We define a map by sending t to v. The fact that ϕ(_S(Δ))⊆_( v_0, v_1, v ) implies that this indeed gives a simplicial map → that is regularly homotopic in to ϕ by <ref>. (Again, the homotopy has image in .) This process removes the edgy simplex Δ without introducing new edgy simplices with 3-additive image.
It preserves σ-regularity (<ref>) and σ-smallness (<ref>) because no σ simplex is contained in the star of the 3-additive simplex ϕ(Δ).
The process might have introduced new overly augmented simplices, but using <ref>, these can again be removed without introducing new edgy simplices. Iterating this procedure removes all edgy simplices with 3-additive image from ϕ.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a map that is σ-regular and σ-small. Then ϕ is in regularly homotopic to a map → ()^≤ R such that the following properties hold.
* ϕ' is σ-regular.
* ϕ' is σ-small.
* ϕ' has no edgy simplices with 3-additive image.
* ϕ' has no edgy simplices with standard image.
The proof works similarly to that of <cit.>. However, in contrast to the proofs of <ref> and <ref>, it also uses the assumptions that ϕ be σ-regular (<ref>) and σ-small (<ref>). This is why we give a few more details here than in the previous proofs.
Let Δ be an edgy simplex of S such that ϕ(Δ) = v_0,v_1 is a standard simplex. Choose representatives v⃗_0, v⃗_1 such that (v⃗_0) = R = (v⃗_1) and let v ≔ ⟨v⃗_0 - v⃗_1 ⟩. This is a vertex in and (v) = 0.
If Δ̃⊇Δ is a simplex containing Δ, then ϕ(Δ̃) contains the vertices v_0 and v_1, which have rank R. As ϕ has no overly augmented simplices and no edgy simplices with 3-additive image, it follows that ϕ(Δ̃) cannot be a 3-additive, double-triple or double-double simplex (see <cit.>). As ϕ is σ-regular, ϕ(Δ̃) also cannot be a mixed simplex (see the description of σ-regular maps at the beginning of <ref>). Lastly, as ϕ is σ-small, the first condition of <ref> implies that ϕ(Δ̃) cannot be a σ simplex. Hence ϕ(Δ̃) has to be a standard or 2-additive simplex.
Define B to be the combinatorial (k+1)-ball
B ≔ t ∗_S(Δ)
that is obtained by coning off _S(Δ) with a new vertex t. Define
Ψ : B → ()^≤ R
by setting Ψ|__S(Δ) = ϕ|__S(Δ) and Ψ(t) = v. To see that Ψ gives a well-defined map with image in , note that by the previous paragraph, every Δ̃ in _S(Δ) gets mapped to a standard or 2-additive simplex. In either case, it forms a simplex with v. These simplices are either 2-additive, double-triple or double-double simplices, so they are contained in .
In particular, Ψ is regular.
By <ref>, the boundary of B decomposes as
∂ B = _S(Δ) ∪ (t ∗∂_S(Δ)) = _S(Δ) ∪ (t ∗_S(Δ) ∗∂Δ ).
By <ref>, both _S(Δ) and t ∗_S(Δ) ∗∂Δ are combinatorial k-balls and their intersection is given by
∂_S(Δ) = _S(Δ) ∗∂Δ = ∂ (t ∗_S(Δ) ∗∂Δ ).
Hence by <ref>, ϕ is regularly homotopic to a simplicial map
→ ()^≤ R
that is obtained by replacing ϕ|__S(Δ) = Ψ|__S(Δ) by Ψ|_(t ∗_S(Δ) ∗∂Δ ).
As Ψ(t ∗_S(Δ) ∗∂Δ ) contains no σ simplices, the map is still σ-regular and σ-small. As Ψ(t ∗_S(Δ) ∗∂Δ ) contains no 3-additive simplices and ϕ had no edgy simplices with 3-additive image, neither does .
After possibly applying <ref>, we can remove further edgy simplices with standard image in order to obtain the map .
What remains to be done is to remove edgy simplices with 2-additive image. We do this in the following lemma. In contrast to the previous steps, the removal process here does not preserve σ regularity. Hence, the first condition for σ-smallness in <ref> makes no sense for the resulting map. However, the second condition does and it will be preserved by this process.
Let S be a combinatorial k-sphere and ϕ S → ()^≤ R a map that is σ-regular and σ-small. Then ϕ is in regularly homotopic to a map → ()^≤ R such that the following properties hold.
* If Δ⊂ S is a simplex such that (Δ) is a σ edge, then for all x∈Δ, we have (ϕ'(x))<R.
* has no edgy simplices with 3-additive image.
* has no edgy simplices with standard image.
* has no edgy simplices with 2-additive image.
In particular, ϕ' has no edgy simplices.
The proof of this lemma works similarly to the one of <cit.>, we provide an outline in what follows.
We can assume that all conditions of <ref> are satisfied and that (using <ref>) ϕ has no overly augmented simplices. These in particular imply the first property in the statement of this lemma. In what follows, we do not use the assumption that ϕ be σ-regular and σ-small, which are only true at the first step of our iterative removal procedure.
Let Δ be a maximal edgy simplex with 2-additive image, i.e. ϕ(Δ) = v_0, v_1, where (v_0) = (v_1) = R and v⃗_1 = v⃗_0±e⃗_i for some 1≤ i ≤ m.
The general strategy is the same as that of the proof of <ref>: We define a complex K(Δ) and modify ϕ|__S(Δ) such that the result has image in ϕ(∂Δ) ∗ K(Δ).
Define K(Δ) ≔ _^<R(v_0).
Using <ref>, it is not hard to see that
ϕ(_S(Δ)) ⊆_^<R(ϕ(Δ)).
(This uses that ϕ has no overly augmented simplices and no edgy simplices whose image is standard.)
By <ref> of <ref>,
we have
_^<R(ϕ(Δ)) = _^<R( v_0, ⟨v⃗_0±e⃗_i⟩) = _^<R(ϕ(Δ)).
By <ref>, every simplex of _^<R(ϕ(Δ)) is either contained in
_^<R(v_0) = K(Δ)
or is of type double-triple. But as (v_0) = R and ϕ has no overly augmented simplices, there are no double-triple simplices in ϕ(_S(Δ)) (see <cit.>). Hence, we have
ϕ(_S(Δ)) ⊆ K(Δ).
The complex K(Δ) is (n-2)-connected by <ref>. So as
_S(Δ) = k-(Δ)-1 ≤ n - 2,
the complex K(Δ) is (_S(Δ))-connected.
By <ref>, ϕ is regularly homotopic to a simplicial map
→ ()^≤ R
that is obtained by replacing ϕ|__S(Δ) by a map that has image in ϕ(∂Δ) ∗ K(Δ). It is easy to see that this removes the edgy simplex Δ without introducing new edgy simplices. Again, the homotopy takes place in .
The complex ϕ(∂Δ) ∗ K(Δ) can contain σ edges. However, by <ref>, we have
K(Δ)⊆_(Δ),
so every such σ edge is contained in K(Δ). As K(Δ)⊂ ()^<R by definition, all vertices of such a σ edge have rank less than R. Hence, the map still satisfies the first property in the statement of this lemma. It need not be σ-regular any more though because its restrictions to the stars of the potential new σ simplices in K(Δ) need not be σ cross maps.
This allows us to iterate this procedure (after possibly applying <ref> again) to obtain a map that satisfies all four properties in the statement of the lemma. This implies that has no edgy simplices as the image of every edgy simplex is a standard, 2-additive, 3-additive or σ simplex.
We now combine the results of <ref>, <ref> and <ref> for proving <ref>.
Let S be a combinatorial k-sphere and
ϕ S → ()^≤ R
a simplicial map.
Using <ref>, we can assume that ϕ is σ-regular.
Using <ref>, we can then assume that it is also σ-small. This allows us to apply <ref> and assume that ϕ satisfies all properties of this lemma. In particular, we can assume that ϕ has no edgy simplices.
To show that under these conditions, ϕ actually has Isolation, let Δ be a bad simplex of S and set v ≔ ϕ(Δ). We need to see that for all x ∈_S(Δ), we have
ϕ(x)∈ v ∪^<R_(v).
As ϕ is simplicial, we have
ϕ(_S(Δ)) ⊆_(v) = v ∪_(v).
Assume that some x∈_S(Δ) is mapped to a vertex of rank equal to R. Then, as there are no edgy simplices, we have ϕ(x) = v. Consequently,
ϕ(_S(Δ)) ⊆ v ∪^<R_(v).
To show that <ref> holds, by <ref> of <ref>, it only remains to verify that no vertex in ϕ(_S(Δ)) forms a σ simplex with ϕ(Δ) = v. But as (v) = R, this follows from the first condition of <ref>.
§ THEOREM B: A PRESENTATION OF ST^ω
Recall that by the Solomon–Tits Theorem (<ref>), the reduced homology of the symplectic Tits building T^ω_n is concentrated in dimension n-1. This homology group is what we call the symplectic Steinberg module
^ω_n = _n() ≔H_n-1(T^ω_n;).
In this section, we obtain a presentation of this module by proving <ref>. The proof is an induction on the genus n. We start by giving
an overview and outline the four steps of the argument, which correspond to the
four subsections.
Outline of the proof: <ref> contains topological preparations. We enlarge the three highly connected, nested simplicial complexes (, , ) that we studied in the previous sections to three highly connected, nested simplicial complexes (^(2), ^(1), ^(0)). The high connectivity results for the latter are a corollary of the connectivity results for the former. The key difference is that the local structure of the complexes (^(2), ^(1), ^(0)) is “nice”; e.g. links of (certain) simplices in ^(2) are isomorphic to copies of [k]^(0) with k < n. In <ref>, we use this local structure and the long exact homology sequence of the triple (^(2), ^(1), ^(0)) to construct an “intermediate” exact sequence relating the symplectic Steinberg module _n^ω to direct sums
of ^ω_k with k < n. The exactness of this “intermediate” sequence is a key ingredient of the induction argument on the genus n described in the final subsection. Indeed, in <ref>, we reduce <ref> to checking that a certain sequence
_n ⊕_n ⟶_n ⟶^ω_n ⟶ 0
of “(augmented) apartment” 2n-modules is exact. In <ref>, we construct a commutative diagram that relates the “intermediate” exact sequence with the sequence in
<ref>. The key point is that this diagram also contains a direct sum of copies of <ref> but for “smaller” Steinberg modules
^ω_k with k < n. This is where the induction hypothesis is applied in the last step of the argument. In the final <ref>,
we use induction on the genus n and a diagram
chasing argument to show that the sequence in
<ref> is exact. Throughout this section, we make the standing assumption that n ≥ 1.
§.§ (IAA^(2), IAA^(1), IAA^(0)) and an intermediate exact sequence
This subsection contains topological preparations for the proof of
<ref>. We enlarge the nested complexes (, ,
) to nested complexes (^(2), ^(1),
^(0)), prove that each is highly connected and describe their local
structure. From this we construct an “intermediate” exact sequence, which is a key ingredient of the proof of <ref>.
§.§.§ The complex IAA^(2) and its subcomplexes
To define ^(2) and its subcomplexes, we introduce some new types of “mixed” simplices.
Let
n ≔{ v ⊆^2n | v is a rank-1 summand of ^2n}
be as in <ref>. Let τ and τ' be simplex types of the form
τ ∈standard, 2-additive, 3-additive, double-triple, double-double,
τ' ∈σ, skew-additive, 2-skew-additive, σ^2, skew-σ^2, σ-additive,
where the simplex types are defined as in <ref>, <ref> and <ref>.
A subset
Δ = {v_0, …, v_k}⊂n
of (k+1) lines is called a (τ, τ') simplex if there is a subset Δ'⊂Δ such that Δ' is a minimal simplex of type τ' and Δ∖Δ' is a simplex of type τ such that every v ∈Δ∖Δ' is contained in ⟨v⃗' | v'∈Δ'⟩^⊥.
Note that a mixed simplex as defined in <ref> is a (2-additive, σ) simplex in this notation. A (standard, τ') simplex is just the same as a τ' simplex. (This follows using <ref>.)
Let {e⃗_1, …, e⃗_n, f⃗_1, …, f⃗_n } be a symplectic basis of ^2n and let 1≤ k ≤ n. Then
* ⟨e⃗_1 + e⃗_2 + e⃗_3 ⟩, e_1, e_2, e_3, …, e_k-1, e_k, f_k is a (3-additive, σ) simplex, where the minimal simplex of type σ is Δ' = { e_k, f_k};
* {⟨e⃗_1 + e⃗_2 ⟩, e_1, e_2, e_3, …, e_k-1, e_k, f_k, ⟨e⃗_k + f⃗_k ⟩} is a (2-additive, σ-additive) simplex, where the minimal σ-additive simplex is Δ' = { e_k, f_k, ⟨e⃗_k + f⃗_k ⟩}.
The nested complexes (^(2), ^(1), ^(0)) are now defined in such a way that they allow for more and more symplectic pairs in their simplices. I.e. simplices in ^(0) do not contain any symplectic pair (such as standard or double-triple simplices), simplices in ^(1) contain at most one symplectic pair (such as σ or ( double-double,σ) simplices) and simplices in ^(2) contain at most two symplectic pairs (such as σ^2 but also σ-additive simplices). For technical reasons, we also define a complex ^(1.5) that sits between ^(2) and ^(1). <ref> illustrates the inclusion relations of these complexes as well as their relation to the complexes and studied in earlier sections.
The simplicial complexes ^(i) all have vertex set n.
* The simplices of ^(0) are all either standard, 2-additive, 3-additive, double-triple or double-double.
* The simplices of ^(1) are all either standard, 2-additive, 3-additive, double-triple, double-double or (τ, τ') with
τ ∈standard, 2-additive, 3-additive, double-triple, double-double,
τ' ∈σ.
* The simplices of ^(1.5) are all either standard, 2-additive, 3-additive, double-triple, double-double or (τ, τ') with
τ ∈standard, 2-additive, 3-additive, double-triple, double-double,
τ' ∈σ, skew-additive, 2-skew-additive.
* The simplices of ^(2) are all either standard, 2-additive, 3-additive, double-triple, double-double or (τ, τ') with
τ ∈standard, 2-additive, 3-additive, double-triple, double-double,
τ' ∈σ, skew-additive, 2-skew-additive, σ^2, skew-σ^2, σ-additive.
Finally, if ⊆^2n is a summand of genus k ≤ n, we let []^(i)() denote the full subcomplex of ^(i) on the set n∩ of rank-1 summands of .
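For instance, if {e⃗_1, …, e⃗_n, f⃗_1, …, f⃗_n } is a symplectic basis of ^2n and n ≥ 2, then the two orthogonal symplectic pairs (e_1, f_1) and (e_2, f_2) form a minimal σ^2 simplex {e_1, f_1, e_2, f_2}, which therefore lies in ^(2) but not in ^(1); similarly, the σ edge {e_1, f_1} lies in ^(1) but not in ^(0).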
The complexes ^(1) and ^(1.5) are, in fact, homotopy equivalent as the following lemma shows.
There is a deformation retraction ^(1.5)→^(1).
The complex ^(1.5) is obtained from the complex ^(1) by attaching the simplices of type (τ, skew-additive) and (τ, 2-skew-additive).
The proof now works exactly as that of <ref>:
As in <ref>, one can see that for each k-dimensional (τ, skew-additive) simplex Δ, there is a unique (τ, 2-skew-additive) simplex of dimension (k+1) that has Δ as a face. Starting with the top-dimensional simplices, we can use this to push in all (τ, skew-additive) simplices, which removes both the (τ, skew-additive) and the (τ, 2-skew-additive) simplices.
§.§.§ The local structure of (IAA^(2), IAA^(1), IAA^(0))
In this short subsection, we describe the local structure of the complexes ^(2) and ^(1) by relating the link of certain simplices to [k]^(0) for k < n.
Let Δ be a minimal σ simplex in ^(1). Then
_^(1)(Δ) = []^(0)(⟨Δ⟩^⊥) ≅[n-1]^(0).
Note that ⟨Δ⟩^⊥⊆^2n is a genus (n-1)-summand. The equality on the left immediately follows from the definitions of ^(1) and []^(0)(⟨Δ⟩^⊥) (see <ref>). An isomorphism on the right is induced by the choice of some identification ⟨Δ⟩^⊥≅^2(n-1).
Similarly, one checks the following:
Let Δ be a simplex in ^(2).
* If Δ is minimal σ^2 or skew-σ^2, then _^(2)(Δ) = []^(0)(⟨Δ⟩^⊥) ≅[n-2]^(0).
* If Δ is minimal σ-additive, then _^(2)(Δ) = []^(0)(⟨Δ⟩^⊥) ≅[n-1]^(0).
§.§.§ Connectivity properties of (IAA^(2), IAA^(1), IAA^(0))
We now use the connectivity results obtained in earlier sections to show that the complexes ^(i) are (n-2+i)-connected for i=0,1,2.
We first note that there is a poset map
s(^(0)) → T^ω_n
given by sending a simplex Δ to the span ≪Δ of its vertices.
The span map s(^(0)) → T^ω_n is (n+1)-connected.
As in the proof of <ref>, this can be shown using <cit.> by van der Kallen–Looijenga[For the n in <cit.>, choose what is (n+1) in the notation of the present article; define their map t by t(V)=(V)+ 2].
To apply it in the setting here, one uses that the target T^ω_n is Cohen–Macaulay of dimension n-1 (<ref>) and that for V∈ T^ω_n, the poset fibre s_≤ V is isomorphic to ((V)). The latter is (V)-connected by <ref>.
As T^ω_n is (n-2)-connected by the Solomon–Tits Theorem (<ref>), this implies:
^(0) is (n-2)-connected.
Furthermore, we obtain the following corollary, which relates ^(0) to the symplectic Steinberg module and which will be frequently used later.
We have H_n-1(^(0)) ≅_n.
This follows from <ref> because H_n-1(T^ω_n) = _n.
We now turn our attention to the complex ^(1), which can be seen as a
version of with a nicer local structure (see
<ref>). Recall that is
(n-1)-connected (see <ref>). In <ref>,
we deduced from this that is (n-1)-connected, using that it is obtained from
by attaching new simplices along highly connected links. Similarly,
^(1) is obtained from and therefore ^(1) is
(n-1)-connected as well. This connectivity argument is formalised in the
following lemma and corollary.
The inclusion ↪^(1) is n-connected.
Comparing <ref> to <ref> and recalling that a mixed simplex has type (2-additive, σ), one sees that the simplices of ^(1) that are not contained in are exactly those of type (τ, σ), where
τ∈3-additive, double-triple, double-double.
We apply the standard link argument explained in <ref> twice.
Firstly, let X_1 be the complex that is obtained from by attaching all (3-additive, σ) simplices. It has X_0 = as a subcomplex. Let B be the set of minimal (3-additive, σ) simplices contained in X_1. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_1^good(Δ) = _X_1(Δ) for Δ∈ B. One then checks that for any Δ∈ B, there is an isomorphism _X_1(Δ) ≅[n-4][3]. This complex is (n-(Δ)-1)=(n-6)-connected by <ref>. Hence, the inclusion ↪ X_1 is n-connected by <ref> of <ref>.
Secondly, let X_2 = ^(1). By the previous paragraph, the lemma follows if we show that the inclusion X_1 ↪ X_2 is n-connected.
Let B be the set of minimal (double-triple, σ) and (double-double, σ) simplices contained in X_2. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_2^good(Δ) = _X_2(Δ) for Δ∈ B. For Δ∈ B one then checks: If Δ is a minimal (double-triple, σ) simplex, then Δ has dimension 6 and there is an isomorphism _X_2(Δ) ≅[n-4][3]. If Δ is a minimal (double-double, σ) simplex, then Δ has dimension 7 and there is an isomorphism _X_2(Δ) ≅[n-5][4]. In either case, <ref> implies that _X_2(Δ) is ((n-(Δ)-1)+1)-connected. Using <ref> of <ref>, it therefore follows that X_1 ↪ X_2 is n-connected.
^(1) is (n-1)-connected.
This follows from <ref> because is (n-1)-connected by <ref>.
Just as ^(1) can be seen as a variant of , the complex ^(2) can be understood as a version of with a nicer local structure (see <ref>) and the same connectivity properties. We close this subsection by proving that ^(2) is n-connected.
The inclusion ^(1)↪^(2) is n-connected.
By <ref>, the inclusion ^(1)↪^(1.5) is n-connected. So it suffices to show that the same is true for the inclusion ^(1.5)↪^(2).
The simplices of X_1 ≔^(2) that are not contained in X_0 ≔^(1.5) are exactly those of type (τ, τ'), where
τ' ∈σ^2, skew-σ^2, σ-additive.
We apply the standard link argument explained in <ref>.
Let B be the set of minimal τ' simplices with τ' as in <ref>. This is a set of bad simplices in the sense of <ref>. Following <ref>, we find that _X_1^good(Δ) = _X_1(Δ) for Δ∈ B.
We need to show that each such link is (n-(Δ)-1)-connected.
But this follows immediately from the results in <ref>: Minimal σ^2 and skew-σ^2 simplices have dimension 3 and their link is isomorphic to [n-2]^(0) by <ref>, hence (n-4) = (n-3-1)-connected by <ref>. Similarly, minimal σ-additive simplices have dimension 2 and their link is isomorphic to [n-1]^(0) by <ref>, hence (n-3) = (n-2-1)-connected by <ref>.
These results suffice to deduce that ^(2) is highly connected as well:
The inclusion ↪^(2) is n-connected.
By <ref> and <ref>, the inclusion ↪^(2) is n-connected and by <ref>, the inclusion ↪ is n-connected as well. This implies the claim.
^(2) is n-connected.
This follows from <ref> because is n-connected by <ref>.
We remark that combining <ref> and <ref> with <ref> and <ref> leads to a description of H_n(^(1) , ^(0)) and H_n+1(^(2) , ^(1)) as sums of smaller Steinberg modules _k with k < n.
§.§.§ An intermediate exact sequence
We now use the long exact homology sequence of the triple (^(2), ^(1), ^(0)), the connectivity theorems and the observations about the local structure above to construct an “intermediate” exact sequence that is akin but not equal to the sequence in <ref>. We start by identifying one of the relative homology groups with ^ω_n.
The following is a sequence of 2n-equivariant isomorphisms:
H_n(^(2), ^(0)) ⟶H_n-1( ^(0)) ⟶H_n-1( T^ω_n) = ^ω_n.
The first isomorphism ∂_n^(2,0) = ∂_(^(2), ^(0)) is the connecting morphism in the long exact sequence of the pair (^(2), ^(0)). This uses the fact that ^(2) is n-connected (see <ref>). The second isomorphism s_* is induced by the composition of the canonical homeomorphism between ^(0) and its barycentric subdivision (^(0)) and the span map from <ref> (defined in the paragraph before <ref>).
In the next step, we extract an exact sequence from the long exact sequence of the triple (^(2), ^(1), ^(0)).
The following is an exact sequence of 2n-modules:
H_n+1(^(2), ^(1)) ⟶H_n(^(1), ^(0)) ⟶^ω_n ⟶ 0.
Consider the long exact sequence of the triple (^(2), ^(1), ^(0)):
H_n+1(^(2), ^(1)) ⟶H_n(^(1), ^(0))
⟶ H_n(^(2), ^(0)) ⟶ H_n(^(2), ^(1)) = 0.
We observe that H_n(^(2), ^(1)) = 0 because ^(2) is n-connected and ^(1) is (n-1)-connected (see <ref> and <ref>). The result then follows by invoking <ref>, and using that the connecting morphism ∂_n^(1,0) of the pair (^(1), ^(0)) equals the composition of the map H_n(^(1), ^(0)) → H_n(^(2), ^(0)) induced by inclusion with ∂_n^(2,0).
Finally, we explain how one can decompose the two remaining relative homology groups in the exact sequence appearing in <ref> into direct sums of “smaller” symplectic Steinberg modules _k^ω for k < n.
Fix a total ordering on the set of vertices of ^(2) such that every simplex Δ is oriented. We have the following 2n-equivariant isomorphisms:
H_n(^(1) , ^(0)) ≅⊕_Δ = {v_1,w_1}
σ simplex^ω(≪Δ^⊥)
and
H_n+1(^(2) , ^(1))
≅⊕⊕_Δ = {v_1,w_1,v_2,w_2}
σ^2 simplex^ω(≪Δ^⊥)
⊕_Δ = {z_0, z_1, z_2, z_3}
skew-σ^2 simplex^ω(≪Δ^⊥)
⊕_Δ = {z_0, z_1, z_2}
σ-additive simplex^ω(≪Δ^⊥)
.
where we use the convention that ^ω({0}).
In <ref>, the 2n-module structure of the sum terms appearing on the right is as follows: We make it precise for the 2n-module
⊕_Δ = {v_1,w_1,v_2,w_2}
σ^2 simplex^ω(≪Δ^⊥),
for the other terms it is defined similarly. Let Δ be a minimal σ^2 simplex in ^(2) and consider a class ζ∈^ω(≪Δ^⊥). Then, an element ϕ∈2n acts by mapping ζ to the class ± (ϕ·ζ) in the summand ^ω(≪ϕ·Δ^⊥) indexed by the σ^2 simplex ϕ·Δ, where the sign is positive if ϕΔ→ϕ·Δ is orientation-preserving with respect to the total ordering of the vertices and negative if it is orientation-reversing.
For the second isomorphism, we first use the identification
H_n+1(^(2) , ^(1))≅ H_n+1(^(2) , ^(1.5))
induced by the deformation retraction in <ref>. Apart from this extra step, the construction of the two claimed isomorphisms is analogous and can be described simultaneously: Let k = n, i = 1 and i' = 0 or k = n+1, i = 2 and i' = 1.5. Using excision and that
_^(i)(Δ) ∩^(i') = ∂Δ∗_^(i)(Δ)
for Δ one of the minimal simplices listed above,
we obtain isomorphisms
H_n(^(1) , ^(0)) ≅⊕_Δ minimal
σ simplex H_n( _^(1)(Δ), ∂Δ∗_^(1)(Δ))
and
H_n+1(^(1.5) , ^(1))
≅⊕⊕_Δ minimal
σ^2 simplex H_n+1( _^(2)(Δ), ∂Δ∗_^(2)(Δ))
⊕_Δ minimal
skew-σ^2 simplex H_n+1(_^(2)(Δ), ∂Δ∗_^(2)(Δ))
⊕_Δ minimal
σ-additive simplex H_n+1(_^(2)(Δ), ∂Δ∗_^(2)(Δ))
.
The terms appearing on the right can then be further simplified: The fact that _^(i)(Δ) is contractible implies that the connecting morphism of the long exact sequence of the pairs ((Δ)_^(i), ∂Δ∗_^(i)(Δ)) is an isomorphism in reduced homology, i.e.
H_k(_^(i)(Δ), ∂Δ∗_^(i)(Δ)) ≅H_k-1(∂Δ∗_^(i)(Δ)).
We then observe that ∂Δ is a (Δ - 1)-sphere and use the ∂Δ-suspension isomorphism associated to _^(i)(Δ) to obtain an identification
H_k-1(∂Δ∗_^(i)(Δ)) ≅H_(k-1)-Δ(_^(i)(Δ)).
We note that this isomorphism is determined by the orientation of Δ, i.e. it is induced by the cross product with the orientation class η_Δ - 1∈H_(Δ)-1(∂Δ) (see e.g. <cit.>).
Finally, we use <ref> to identify _^(i)(Δ) with [n-1]^(0) if i = 1 and <ref> to identify _^(i)(Δ) with [n-2]^(0) or [n-1]^(0) (depending on Δ) if i = 2. Invoking <ref> then yields an isomorphism H_(k-1)-Δ(_^(i)(Δ)) ≅^ω(≪Δ^⊥).
The exact sequence obtained in <ref> identifies with an exact sequence
⊕⊕_Δ = {v_1,w_1,v_2,w_2}
σ^2 simplex^ω(≪Δ^⊥)
⊕_Δ = {z_0, z_1, z_2, z_3}
skew-σ^2 simplex^ω(≪Δ^⊥)
⊕_Δ = {z_0, z_1, z_2}
σ-additive simplex^ω(≪Δ^⊥)
⊕_Δ = {v_1,w_1}
σ simplex^ω(≪Δ^⊥)
^ω_n
→
0
where we use the convention that ^ω({0}).
§.§ Reduction of Theorem B to checking exactness
The goal of this subsection is to define 2n-modules _n, _n and _n as well as maps _n, _n and _n such that <ref> is a consequence of the following statement.
For n≥ 1, the sequence
_n ⊕_n ⟶_n ⟶^ω_n ⟶ 0
is exact.
Throughout this section we use the following notation: Let n ≥ 1 and be a symplectic summand of (^2n, ω) of genus k. We denote by Sp() ⊆2n the symplectic automorphisms of and let ^ω() be the symplectic Steinberg module of Sp(). If = {0} we define ^ω().
The apartment module _n and the apartment class map : _n →_n^ω: A theorem of Gunnells <cit.> states that _n^ω is a cyclic 2n-module, i.e. there is an equivariant surjection '_n : [2n] ↠_n^ω. The 2n-module _n appearing in <ref> is a quotient of [2n] through which Gunnells' apartment class map '_n factors, i.e. there is a commutative diagram of 2n-equivariant maps
[2n] ↠_n ↠_n^ω, where the composite is '_n and the second surjection is _n.
Gunnells' map '_n : [2n] ↠_n^ω is defined as follows: A symplectic matrix M ∈2n
is, by considering the column vectors of M, equivalent to the data of an
ordered symplectic basis M = (v⃗_1, w⃗_1,…, v⃗_n,w⃗_n) of
^2n. We now describe how each such symplectic basis gives rise to a
unique simplicial embedding
M S^n-1↪ T_n^ω
and hence, after fixing a fundamental class ξ_n-1∈H_n-1(S^n-1), defines a unique class '_n(M) =
M_*(ξ_n-1) ∈_n^ω, the apartment class of M. We
refer the reader to <cit.> for a more detailed account.
Let n ≔{ 1, 1̅, …, n, n̅}. A nonempty subset I ⊆n is called a standard subset if for all 1 ≤ a ≤ n it is true that {a,a̅}⊄I. We denote by C_n the simplicial complex whose vertex set is n and whose k-simplices are the standard subsets I ⊂n of size k+1.
Note that the simplicial complex C_n in <ref> encodes exactly the simplicial structure of the boundary sphere of the n-dimensional cross polytope appearing in <ref> and that there is a
homeomorphism C_n+1≅ C_n ∗∂{n+1, \overline{n+1}}. Hence, C_n = ∗_i=1^n S^0 is a
simplicial (n-1)-sphere. Given any ordered symplectic basis M = (v⃗_1,
w⃗_1,…, v⃗_n,w⃗_n) of ^2n, we can label its basis
vectors from left to right using { 1, 1̅, …, n, n̅} (i.e.
M⃗_a = v⃗_a and M⃗_a̅ = w⃗_a). Denoting by
(C_n) the poset of simplices of C_n, we obtain an embedding of posets M(C_n) ↪ T^ω_n by
mapping a standard subset I to M_I = ⟨M⃗_z | z ∈ I ⟩.
Furthermore, we fix a fundamental class ξ_-1∈H_-1(C_0; ) = H_-1(∅; ) = and define ξ_n for n ≥ 0 as the class in H_n(C_n+1; ) obtained from ξ_n-1 using the suspension isomorphism C_n+1≅ C_n ∗∂{n+1, \overline{n+1}}.
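For instance, for n = 2 the complex C_2 has vertex set { 1, 1̅, 2, 2̅} and edges {1,2}, {1,2̅}, {1̅,2} and {1̅,2̅}; the subsets {1,1̅} and {2,2̅} are not standard, so C_2 is a 4-cycle, i.e. a simplicial circle S^0 ∗ S^0.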
This completes our discussion of apartment classes and Gunnells' map '_n.
We now turn to the definition of _n and _n.
For this we recall that, for n≥ 1, the set of all bijections
π : n→n
that commute with the involution a ↦a̅ is the group of
signed permutations. Here a̅̅̅ =a for a ∈{1, …, n}. We denote this group by _n. Let
_n = s_1, …, s_n ⊂_n
denote the subset containing the following permutations: For 1 ≤ i < n,
s_i swaps i and (i+1) while keeping all other elements fixed, and s_n
swaps n and n̅ while keeping all other elements fixed. Then
(_n,_n) is a
Coxeter system of type 𝙲_n = 𝙱_n (see <cit.> or
<cit.> for more details). For π∈_n, we write len(π) = len_(π) for the word
length of π with respect to the generating set .[A combinatorial description of this length function can be found in <cit.>.]
Let be a genus-k symplectic summand of (^2n, ω).
The symplectic apartment module
() is the
Sp()-module whose underlying group is free
abelian with generators the set of formal symbols [v_1,w_1, …, v_k, w_k],
where
* (v_1,w_1, …, v_k, w_k) is a tuple of lines in
such that, for
some choice of primitive representatives, (v⃗_1,w⃗_1, …,
v⃗_k, w⃗_k) is a symplectic basis of
,
and where for all π∈𝒲_k, we have
[v_1, v_1̅, …, v_k, v_k̅] =
(-1)^len(π)·
[v_π(1),v_π(1̅),…, v_π(k), v_π(k̅)].
We write _n =
(^2n) and set ({0}). The Sp()-action is defined by
ϕ· [v_1, w_1, …, v_k, w_k] = [ϕ(v_1), ϕ(w_1), …, ϕ(v_k), ϕ(w_k)] for all ϕ∈Sp().
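For instance, for k = 2 the generators s_1 and s_2 of _2 have word length 1, so the defining relation yields
[v_1, w_1, v_2, w_2] = -[v_2, w_2, v_1, w_1] and [v_1, w_1, v_2, w_2] = -[v_1, w_1, w_2, v_2].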
Note that () is a quotient of
[Sp()]. The apartment class map on
() is defined as follows:
Let be a symplectic summand of (^2n, ω)
of genus k. The symplectic apartment class map
_() →() is the unique map such that the following is a
commutative diagram of Sp()-equivariant maps
[Sp()] ↠() ↠^ω(), in which the composite is _' and the second surjection is _,
where _' is Gunnells' map that we described above. If = ^2n, we write _n = _^2n.
It follows from <cit.>[<cit.> contains a misprint. Using the notation of <cit.>, the corrected statement is: [v_1, …, v_n; v_n̅, …, v_1̅] = sign(τ)· [τ(v_1), …, τ(v_n); τ(v_n̅), …, τ(v_1̅)].] that Gunnells' map _ factors over (), i.e. that <ref> is well-defined.
Mapping the augmented apartment modules _n and _n to _n: We now define the two augmented apartment modules _n and _n, as well as the two maps _n_n →_n and _n_n →_n occurring in <ref>. We start with the pair (_n, _n), which is related to the σ-additive simplices appearing in .
Let be a genus-k symplectic summand of (^2n, ω). The σ-additive apartment module
() is the
Sp()-module whose underlying group is free
abelian with generators the set of formal symbols [z_0, z_1, z_2] ∗ [v_2, w_2,
…, v_k, w_k], where
* (v_2, w_2, …, v_k, w_k) is a tuple of
lines in such that, for some choice of primitive
representatives, (v⃗_2, w⃗_2, …, v⃗_k, w⃗_k) is
a symplectic basis of a genus-(k-1) summand _k-1⊂;
* (z_0, z_1, z_2) is a tuple of lines in
such that {z_0, z_1, z_2} is a
σ-additive simplex[The conditions given in this item are equivalent to saying that for some choice of primitive representatives z⃗_0, z⃗_1, z⃗_2, these three vectors span the symplectic complement of U_k-1 and satisfy ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_0, z⃗_2) = 1.] in the symplectic complement of
_k-1 in ;
and where for all τ∈({0,1,2}) and π∈_k such that π(1)=1, we have
[z_0, z_1, z_2] ∗ [v_2, v_2̅, …, v_k, v_k̅] =
sign(τ)(-1)^len(π)·
[z_τ(0), z_τ(1), z_τ(2)] ∗ [v_π(2),
v_π(2̅),…, v_π(k),
v_π(k̅)].
We write _n = (^2n) and set ({0}) ≔{0}. The Sp()-action is defined by
ϕ· [z_0, z_1, z_2] ∗ [v_2, …, w_k] =
[ϕ(z_0), ϕ(z_1), ϕ(z_2)] ∗ [ϕ(v_2), …,
ϕ(w_k)] for all ϕ∈Sp().
The Sp()-equivariant map
_() →()
is defined by
_([z_0, z_1, z_2] ∗ [v_2, …, w_k]) ≔
[z_1, z_2, v_2, …, w_k] - [z_1, z_0, v_2,…,
w_k] - [z_0, z_2, v_2, …, w_k].
If = ^2n, we write _n = _^2n.
In the setting of <ref>, an elementary
calculation using the symplectic form ω shows that the terms
appearing in the image under _ of a formal
symbol in () are indeed
formal symbols in (). Using the
relations between formal symbols in ()
(see <ref>), one then verifies that
_ is well-defined.
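To sketch one case of this calculation: by the conditions in the definition above, one can choose primitive representatives with ω(z⃗_1, z⃗_2) = 1, and z⃗_1, z⃗_2 lie in the symplectic complement of _k-1 in , which has genus 1. As ω(z⃗_1, z⃗_2) = 1, the span ⟨z⃗_1, z⃗_2 ⟩ is a genus-1 symplectic summand of this complement and hence equals it, so (z⃗_1, z⃗_2, v⃗_2, …, w⃗_k) is a symplectic basis of and [z_1, z_2, v_2, …, w_k] is a formal symbol in (). The same argument applies to the other two terms, since also ω(z⃗_1, z⃗_0) = ±1 and ω(z⃗_0, z⃗_2) = ±1.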
We now introduce (_n, _n), which is related to the skew-apartment simplices in .
Let be a genus-k symplectic summand of (^2n, ω). The skew-apartment module
() is the
Sp()-module whose underlying group is free
abelian with generators the set of formal symbols [z_0, z_1, z_2, z_3] ∗ [v_3, w_3,
…, v_k, w_k], where
* (v_3, w_3, …, v_k, w_k) is a tuple of
lines in such that, for some choice of primitive
representatives, (v⃗_3, w⃗_3, …, v⃗_k, w⃗_k) is
a symplectic basis of a summand _k-2⊂ of genus k-2;
* (z_0, z_1, z_2, z_3) is a tuple of lines in
such that {z_0, z_1, z_2, z_3} is a
skew-apartment simplex[The conditions given in this item are equivalent to saying that for some choice of primitive representatives z⃗_0, z⃗_1, z⃗_2, z⃗_3, these four vectors span the symplectic complement of U_k-2 and satisfy ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1.] in the symplectic complement of
_k-2 in and ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1 for some choice of primitive representatives;
and where for all π∈_k such that π(1)=1 and
π(2)= 2, we have
[z_0, z_1, z_2,z_3] ∗ [v_3, v_3̅, …, v_k,
v_k̅]
= (-1)^len(π)· [z_0, z_1, z_2,
z_3] ∗ [v_π(3), v_π(3̅),…, v_π(k),
v_π(k̅)]
= (-1)^len(π)· [z_3, z_2, z_1,
z_0] ∗ [v_π(3), v_π(3̅),…, v_π(k),
v_π(k̅)].
We write _n = (^2n) and set ({0}) ≔{0}. The Sp()-action is defined by
ϕ· [z_0, z_1, z_2, z_3] ∗ [v_3, …, w_k] =
[ϕ(z_0), ϕ(z_1), ϕ(z_2), ϕ(z_3)] ∗ [ϕ(v_3),
…, ϕ(w_k)] for all ϕ∈Sp().
The Sp()-equivariant map
_() →()
is defined by
_([z_0, z_1, z_2, z_3] ∗ [v_3,…,w_k]) ≔
[ z_0, z_1, ⟨z⃗_0 + z⃗_2 ⟩, z_3, v_3, …,
w_k]
+ [ z_1, z_2, ⟨z⃗_0 + z⃗_2 ⟩, ⟨z⃗_1 + z⃗_3 ⟩, v_3,…, w_k]
+ [z_2, z_3, z_0, ⟨z⃗_1 + z⃗_3 ⟩, v_3,
…, w_k],
where (z⃗_0, z⃗_1, z⃗_2, z⃗_3) are primitive
representatives such that ω(z⃗_0, z⃗_1) = ω(z⃗_1,
z⃗_2) = ω(z⃗_2, z⃗_3) = 1.
If = ^2n, we write _n = _^2n.
In the setting of <ref>, an elementary calculation using the symplectic form ω shows that the terms appearing in the image under _ of a formal symbol in () are indeed formal symbols in (). Using the relations between formal symbols in () (see <ref>), one then verifies that _ is well-defined.
We are now ready to prove that <ref> implies <ref>.
Assuming <ref>, ^ω_n is a quotient of
_n with two additional relations:
The first relation is given by
0 = [z_1, z_2, v_2, …, w_n] - [z_1, z_0, v_2, …, w_n] - [z_0, z_2, v_2, …, w_n],
where [z_0, z_1, z_2] ∗ [v_2, …, w_n] is a generator of
_n, i.e. for some choice of primitive
vectors, (z⃗_1, z⃗_2, v⃗_2, …, w⃗_n) is a
symplectic basis of ^2n and z⃗_0 = z⃗_1 + z⃗_2
(compare <ref> and <ref>).
The second relation is given by
0 = [ z_0, z_1, ⟨z⃗_0 + z⃗_2 ⟩, z_3, v_3,
…, w_n]
+ [ z_1, z_2, ⟨z⃗_0 + z⃗_2 ⟩, ⟨z⃗_1 + z⃗_3 ⟩, v_3,…, w_n]
+ [z_2, z_3, z_0, ⟨z⃗_1 + z⃗_3 ⟩, v_3,
…, w_n],
where [z_0, z_1, z_2, z_3] ∗ [v_3,…, w_n] is a generator of
_n, i.e. for some choice of primitive
vectors (z⃗_0, z⃗_1, z⃗_2, z⃗_3, v⃗_3,
…, w⃗_n), it holds that
(v⃗_1', w⃗_1', v⃗_2', w⃗_2', v⃗_3',
…, w⃗_n') (z⃗_0, z⃗_1 + z⃗_3,
z⃗_2, z⃗_3, v⃗_3, …, w⃗_n)
is a symplectic basis of ^2n (compare
<ref> and <ref>).
The relation in the definition of _n (see
<ref>) is the first in <ref>. The relation
imposed by <ref> directly translates to the
second relation claimed in <ref>. Finally, the relation imposed by
<ref> can be translated into the third relation
in <ref>: Using <ref>, we obtain
0 = [ v'_1, ⟨w⃗'_1-w⃗'_2 ⟩ , ⟨v⃗'_1 +
v⃗'_2 ⟩, w'_2, v'_3, …, w'_n]
+ [ ⟨w⃗'_1 - w⃗'_2 ⟩, v'_2, ⟨v⃗'_1 +
v⃗'_2 ⟩, w'_1, v'_3, …, w'_n]
+ [ v'_2, w'_2, v'_1, w'_1, v'_3, …,w'_n].
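Here we use that, in terms of the basis (v⃗'_1, w⃗'_1, v⃗'_2, w⃗'_2, v⃗'_3, …, w⃗'_n) defined above, one has z⃗_0 = v⃗'_1, z⃗_1 = w⃗'_1 - w⃗'_2, z⃗_2 = v⃗'_2 and z⃗_3 = w⃗'_2, and hence z⃗_0 + z⃗_2 = v⃗'_1 + v⃗'_2 and z⃗_1 + z⃗_3 = w⃗'_1.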
<ref> is exactly the third relation
of <ref>, since the first relation implies
[v⃗'_2, w⃗'_2,v⃗'_1, w⃗'_1,v⃗'_3,…, w⃗'_n] = - [ v⃗'_1, w⃗'_1, v⃗'_2,
w⃗'_2, v⃗'_3,…, w⃗'_n].
§.§ A commutative diagram
The goal of this subsection is to explain how the sequences and modules described in the previous subsections are related. This relation is expressed in terms of a commutative diagram and the content of the next proposition.
There exist 2n-equivariant maps π_σ^2, π_, π_, ∂_σ^2, ∂_, ∂_, ∂_σ, ∂_σ^ and ∂_σ^
such that the two sequences occurring in <ref> and
<ref> fit into the
commutative diagram depicted in <ref>. Furthermore, the maps
π_σ^2⊕π_⊕π_ and ∂_σ are surjective.
For the proof of <ref> and throughout this subsection, we fix a total ordering on the vertex set of ^(2). We write z < z' if in this ordering the vertex z is smaller than the vertex z' of ^(2). The 2n-module structure of the sum terms appearing in <ref> is defined analogous to <ref>. Keeping this in mind, we now define the equivariant maps appearing <ref>.
Let Δ∈^(2) be a minimal simplex of type τ∈{σ^2, skew-σ^2, σ-additive}.
Then the value of
∂_τ(⟨Δ⟩^⊥) →⊕_Δ' = {v_1,w_1}
σ
simplex(≪Δ'^⊥)
at a formal symbol
[v_k, …, w_n] ∈(⟨Δ⟩^⊥),
where k = 2 if τ = σ-additive and k = 3 otherwise,
is defined as follows.
* If Δ = {v_1, w_1, v_2, w_2} is a σ^2 simplex such that ω(v⃗_1, w⃗_1) = ω(v⃗_2, w⃗_2) = 1, v_1 < w_1 and v_2 < w_2, then
∂_σ^2([ v_3, …, w_n]) =
[v_1, w_1, v_3,…, w_n] ⊕ [v_2, w_2, v_3, …, w_n]
in (⟨ v_2, w_2 ⟩^⊥) ⊕(⟨ v_1, w_1 ⟩^⊥) indexed by the σ simplices {v_2, w_2} and {v_1, w_1}, respectively.
* If Δ = {z_0, z_1, z_2, z_3} is
a skew-σ^2 simplex such that ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1,
let
v_0, w_0 = {⟨z⃗_0 + z⃗_2 ⟩, z_3 }, v_1, w_1 = {⟨z⃗_0 + z⃗_2 ⟩, ⟨z⃗_1 + z⃗_3 ⟩},
v_2, w_2 = {z_0 ,⟨z⃗_1 + z⃗_3 ⟩}, where v_i<w_i for 0≤ i ≤ 2.
Define
∂_([v_3, …, w_n]) = [v_0, w_0, v_3, …, w_n] ⊕
[v_1, w_1, v_3, …, w_n] ⊕ [v_2, w_2, v_3, …, w_n]
in (⟨ z_0, z_1 ⟩^⊥) ⊕(⟨ z_1, z_2 ⟩^⊥) ⊕(⟨ z_2, z_3 ⟩^⊥) indexed by the σ simplices { z_0, z_1 }, { z_1, z_2 } and { z_2, z_3 }, respectively.
* If Δ = {z_0, z_1, z_2} is a
σ-additive simplex such that z_0 < z_1 <z_2, then
∂_([ v_2, …, w_n]) = [ v_2,
…, v_n, w_n] ⊕
-[ v_2, w_2, …, w_n] ⊕ [ v_2, …, w_n]
in (⟨ z_1, z_2 ⟩^⊥) ⊕(⟨ z_0, z_2 ⟩^⊥) ⊕(⟨ z_0, z_1 ⟩^⊥) indexed by the σ simplices { z_1, z_2 }, { z_0, z_2 } and { z_0, z_1 }, respectively.
Let Δ = {v_1, w_1} be a minimal σ simplex in ^(2) such that v_1 < w_1.
* The map ∂_σ(⟨Δ⟩^⊥) →_n is defined by
∂_σ([ v_2, …, w_n]) = [ v_1, w_1, v_2, …, w_n].
* The map ∂_σ^(⟨Δ⟩^⊥) →_n is defined by
∂_σ([z_0, z_1, z_2] ∗ [v_3,
…, w_n]) =-[z_0, z_1, z_2] ∗ [ v_1, w_1, v_3,…, w_n]
* The map ∂_σ^(⟨Δ⟩^⊥) →_n is defined by
∂_σ([z_0, z_1, z_2, z_3] ∗ [v_4,…, w_n]) =
[z_0, z_1, z_2, z_3] ∗ [v_1, w_1, v_4, …, w_n].
Let Δ∈^(2) be a minimal simplex.
* If Δ is a σ^2 simplex, the value of
π_σ^2 = 0(⟨Δ⟩^⊥) →_n ⊕_n
at all formal symbols [ v_3, …, w_n] ∈(⟨Δ⟩^⊥) is zero.
* If Δ = {z_0, z_1, z_2, z_3} is a skew-σ^2
simplex such that ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1, then the map π_(⟨Δ⟩^⊥) →_n ↪_n ⊕_n is defined by
π_([ v_3, …, w_n]) = [z_0, z_1, z_2, z_3]
∗ [v_3, …, w_n].
* If Δ = {z_0, z_1, z_2} is a σ-additive simplex such that z_0 < z_1 <z_2,
then the map π_(⟨Δ⟩^⊥) →_n ↪_n ⊕_n is defined by
π_([ v_2, …, w_n]) = [z_0, z_1, z_2] ∗
[v_2, …, w_n].
§.§.§ The diagram commutes and surjectivity properties
This subsection contains the proof of <ref>, which is split into several lemmas. We first note that the claimed surjectivities hold because by definition, every generator of _n and _n is in the image of π_σ^2⊕π_⊕π_ and ∂_σ, respectively. Hence, we have:
The maps π_σ^2⊕π_⊕π_ and ∂_σ are surjective.
We now discuss the commutativity of the diagram depicted in <ref>.
The following hold.
* (_n ⊕_n) ∘ (π_σ^2⊕π_⊕π_) = ∂_σ∘ (∂_σ^2⊕∂_⊕∂_).
* (_n ⊕_n) ∘ (∂^_σ⊕∂^_σ) = ∂_σ∘ (_⟨Δ⟩^⊥⊕_⟨Δ⟩^⊥).
Checking the validity of this lemma is elementary; one simply picks
a generator, evaluates the map on the left and right side using the definitions
above and uses the first relation in the apartment module _n
(see <ref>) to see that the two values agree. We leave the
details to the reader, and now focus on the remaining two commuting squares for
which the argument is more involved.
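To illustrate this for the first identity on the σ-additive component: for a minimal σ-additive simplex Δ = {z_0, z_1, z_2} with z_0 < z_1 < z_2 and a formal symbol [v_2, …, w_n] ∈(⟨Δ⟩^⊥), the left hand side first applies π_, giving [z_0, z_1, z_2] ∗ [v_2, …, w_n], which is then sent to [z_1, z_2, v_2, …, w_n] - [z_1, z_0, v_2, …, w_n] - [z_0, z_2, v_2, …, w_n]. The right hand side first applies ∂_, giving the components [v_2, …, w_n], -[v_2, …, w_n] and [v_2, …, w_n] indexed by the σ simplices {z_1, z_2}, {z_0, z_2} and {z_0, z_1}, and then ∂_σ, giving [z_1, z_2, v_2, …, w_n] - [z_0, z_2, v_2, …, w_n] + [z_0, z_1, v_2, …, w_n]. The two results agree because [z_0, z_1, v_2, …, w_n] = -[z_1, z_0, v_2, …, w_n].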
It holds that
(⊕_⟨Δ⟩^⊥) ∘
(∂_σ^2⊕∂_⊕∂_) = ∂_n+1^^ω∘(⊕_⟨Δ⟩^⊥⊕⊕_⟨Δ⟩^⊥⊕⊕_⟨Δ⟩^⊥).
It holds that
_n ∘∂_σ = ∂_n^^ω∘(⊕_⟨Δ⟩^⊥).
To prove these two lemmas, we need to unravel the identifications made in the construction of the exact sequence in <ref>, which appears in the bottom row of the diagram in <ref>. We use the following construction.
Construction: Let Δ∈ be a minimal simplex of type
σ^2, skew-σ^2, σ-additive or σ and let
M^Δ^⊥ = (v⃗_k, w⃗_k, …, v⃗_n, w⃗_n) be an
ordered symplectic basis of ⟨Δ⟩^⊥⊆^2n,
where k = 2 if Δ is σ-additive or of type σ, and k = 3
otherwise. Using the notation introduced in
<ref>, we can label the basis vectors in
M^Δ^⊥ by { 1, 1̅, …, n-k+1, n-k+1} from
left to right (i.e. M⃗_i^Δ^⊥ = v⃗_i+k-1 and
M⃗_i̅^Δ^⊥ = w⃗_i+k-1). Therefore this data determines
a simplicial embedding
M^Δ^⊥Δ∗ C_n-k+1↪
mapping Δ to Δ, i to M⃗^Δ^⊥_i and i̅ to
M⃗_i̅^Δ^⊥. Note that Δ∗ C_n-k+1 is a combinatorial (n+1)- or n-ball
with boundary sphere ∂Δ∗ C_n-k+1. Taking the
simplex types into account, we hence obtain a pair of simplicial
embeddings
(M^Δ^⊥, ∂ M^Δ^⊥) (Δ∗ C_n-k+1, ∂Δ∗ C_n-k+1) ↪
(^(2), ^(1))
if Δ is of type σ^2, skew-σ^2 or σ-additive and
(M^Δ^⊥, ∂ M^Δ^⊥) (Δ∗ C_n-k+1, ∂Δ∗ C_n-k+1) ↪
(^(1), ^(0))
if Δ is of type σ. Now, M^Δ^⊥ is a cross map (see <ref> and <ref>) except if Δ is a skew-σ^2 simplex (since we did not introduce a notion of cross map for this type). However, if Δ = {z_0, z_1, z_2, z_3} is a skew-σ^2 simplex and ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1, then <ref> implies that there is a unique prism P
in containing Δ. The two additional vertices in P are ⟨z⃗_0 + z⃗_2 ⟩ and ⟨z⃗_1 + z⃗_3
⟩. Therefore the map M^Δ^⊥ above
extends to a unique prism-regular map
M^Δ^⊥ P ∗ C_n-k+1↪
and we obtain a pair of embeddings
(M^Δ^⊥, ∂ M^Δ^⊥) (P ∗ C_n-k+1, ∂ P ∗ C_n-k+1) ↪
(^(2), ^(1)).
Let d = (Δ) and let η_d-1∈H_d-1(∂Δ) denote the fundamental class associated to the ordering of the vertex set of Δ. Since the ∂Δ-suspension isomorphism is induced by taking the cross product with η_d - 1 (see e.g. <cit.>), we obtain a unique fundamental class η_d-1∗ξ_n-1∈H_d+n-1(∂Δ∗ C_n; ) with the property that for all n ≥ 0 it holds that
H_d+n-1(∂Δ∗ C_n; )≅← H_n-1(C_n; )
η_d-1∗ξ_n-1 ↤ ξ_n-1
(compare with <ref> et seq.). If Δ is a minimal skew-σ^2 simplex, there is a unique class η_2 ∈H_2(∂ P_3; ) that is homologous to η_2 ∈H_2(∂Δ; ) in P_3^♢ (see <ref> for the definition of P_3^♢, and <ref> for a closely related argument). Hence, we obtain a fundamental class η_2∗ξ_n-1∈H_n+2(∂ P_3 ∗ C_n; ) exactly as above. Finally, we note that if Δ = {v_1, w_1} is a minimal σ simplex and v_1 < w_1, there is an identification ∂Δ∗ C_n≅ C_n+1 mapping
v_1 to 1, w_1 to 1̅, i to i+1, i̅ to \overline{i+1}
for i ∈{1, …, n}.
Under this identification, it holds that η_0∗ξ_n-1↦ξ_n.
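For instance, for n = 1 the sphere ∂Δ∗ C_1 is the 4-cycle on the vertices v_1, w_1, 1, 1̅; the identification above sends these to 1, 1̅, 2, 2̅, respectively, and identifies η_0 ∗ξ_0 with ξ_1.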
To relate the maps above to the apartment class maps (see <ref>) in <ref>, we furthermore use that for any n ≥ 1, the long exact sequence of the pairs (Δ∗ C_n-k+1, ∂Δ∗ C_n-k+1) and (P ∗ C_n-k+1, ∂ P ∗ C_n-k+1) yields identifications and relative homology classes
H_d+n-k(∂Δ∗ C_n-k+1; ) ≅ H_d+n-k+1(Δ∗ C_n-k+1, ∂Δ∗ C_n-k+1; )
H_n(∂ P ∗ C_n-k+1; ) ≅ H_n+1(P ∗ C_n-k+1, ∂ P ∗ C_n-k+1; )
η_d-1∗ξ_n-k ↦ (0, η_d-1∗ξ_n-k).
From this construction, we obtain the following “topological” description of the image of the “relative” apartment class maps occurring in <ref>.
The following correspondence holds under the identifications in
<ref>:
* If Δ is of type σ^2, then _⟨Δ⟩^⊥([ v_3,
…, w_n]) ∈^ω(⟨Δ⟩^⊥)
identifies with
(Δ∗ [ v_3, …, w_n], ∂Δ∗ [ v_3, …, w_n]) (M^Δ^⊥, ∂ M^Δ^⊥)_*(0, η_2 ∗ξ_n-3) ∈ H_n+1(^(2), ^(1)).
* If Δ is of type skew-σ^2, then _⟨Δ⟩^⊥([ v_3, …, w_n]) ∈^ω(⟨Δ⟩^⊥) identifies with
(P ∗ [ v_3, …, w_n], ∂ P ∗
[ v_3, …, w_n]) (M^Δ^⊥, ∂ M^Δ^⊥)_*(0, η_2 ∗ξ_n-3) ∈ H_n+1(^(2), ^(1)).
* If Δ is of type σ-additive, then
_⟨Δ⟩^⊥([ v_2,
…, w_n]) ∈^ω(⟨Δ⟩^⊥)
identifies with
(Δ∗ [ v_2, …, w_n], ∂Δ∗ [ v_2,
…, w_n]) (M^Δ^⊥, ∂ M^Δ^⊥)_*(0, η_1 ∗ξ_n-2) ∈ H_n+1(^(2), ^(1)).
Finally, under the identifications made in
<ref> and assuming that
Δ is of type σ, the class _⟨Δ⟩^⊥([ v_2, …, w_n]) ∈^ω(⟨Δ⟩^⊥) identifies with
(Δ∗ [ v_2, …, w_n], ∂Δ∗ [ v_2, …, w_n]) (M^Δ^⊥, ∂ M^Δ^⊥)_*(0, η_0 ∗ξ_n-2) ∈ H_n(^(1), ^(0)).
With these observations and definitions in place, we are now ready to prove
<ref> and <ref>.
Let Δ = {v_1, w_1, v_2, w_2} be a minimal σ^2 simplex in ^(2) such that ω(v⃗_1, w⃗_1) = ω(v⃗_2, w⃗_2) = 1, v_1 < w_1 and v_2 < w_2. Let [ v_3, …,
w_n] ∈(⟨Δ⟩^⊥) denote a formal
symbol. We claim that
((_⟨ v_2,w_2 ⟩^⊥⊕_⟨ v_1,
w_1 ⟩^⊥) ∘∂_σ^2) ([
v_3, …, w_n]) = (∂_n+1^^ω∘_⟨ v_1,w_1,v_2,w_2 ⟩^⊥)([ v_3, …, w_n]).
Using the definition of ∂_σ^2 (see
<ref>) and the last part of
<ref>, it follows that, under the
identifications made in the
<ref>, the
left side term is equal to the following element in H_n(^(1),
^(0)):
({v_2,w_2}∗ [ v_1, w_1, v_3, …,
w_n], ∂{v_2,w_2}∗ [ v_1, w_1,
v_3, …, w_n])
+ ({v_1,w_1}∗ [ v_2, w_2,
v_3, …, w_n], ∂{v_1,w_1}∗ [ v_2, w_2, v_3, …, w_n]).
Using the definition of the connecting morphism ∂_n+1^(2,1,0)
and the first part of <ref>, it
similarly follows that the
right hand term is equal to
(∂{v_1,w_1,v_2,w_2}∗ [ v_3, …,
w_n], ∅) =
(∂ M^Δ^⊥, ∅)_*(η_2 ∗ξ_n-3, ∅) ∈ H_n(^(1), ^(0)).
Recall that the domain of ∂ M^Δ^⊥ is the combinatorial n-sphere ∂Δ∗ C_n-2. We decompose the sphere ∂Δ = ∂{v_1,w_1,v_2,w_2} into two combinatorial 2-balls of the form Δ^1 ∗ S^0 given by {v_2,w_2}∗∂{v_1,w_1} and {v_1,w_1}∗∂{v_2,w_2}, as illustrated in
<ref>. This induces a decomposition of ∂Δ∗ C_n-2 into two combinatorial n-balls of the form (Δ^1 ∗ S^0) ∗ C_n-2≅Δ^1 ∗ C_n-1. This allows us to express the homology class in <ref> as a sum of two terms which are equal to <ref>. We conclude that the left and right hand term of <ref> agree.
Consider a minimal skew-σ^2 simplex Δ = {z_0, z_1, z_2, z_3}∈^(2) such that ω(z⃗_0, z⃗_1) = ω(z⃗_1, z⃗_2) = ω(z⃗_2, z⃗_3) = 1. Let
v_0, w_0 = {⟨z⃗_0 + z⃗_2 ⟩, z_3 }, v_1, w_1 = {⟨z⃗_0 + z⃗_2 ⟩, ⟨z⃗_1 + z⃗_3 ⟩},
v_2, w_2 = {z_0 ,⟨z⃗_1 + z⃗_3 ⟩}, where v_i<w_i for 0≤ i ≤ 2,
and let [v_3, w_3, …, v_n, w_n] ∈(⟨Δ⟩^⊥) be a formal symbol. We claim that
((_⟨ z_0,z_1 ⟩^⊥⊕_⟨ z_1, z_2 ⟩^⊥⊕_⟨ z_2, z_3 ⟩^⊥) ∘∂_) ([v_3, …, w_n]) =
(∂_n+1^^ω∘_⟨ z_0,z_1,z_2,z_3 ⟩^⊥)([v_3, …, w_n]).
Using the definition of ∂_ (see <ref>) and the last part of <ref>, it follows that, under the identifications made in the <ref>, the left side term is equal to the following element in H_n(^(1), ^(0)):
({z_0,z_1}∗ [v_0, w_0, v_3, …, w_n], ∂{z_0,z_1}∗ [v_0, w_0, v_3, …, w_n])
+ ({z_1,z_2}∗ [v_1, w_1, v_3,…, w_n], ∂{z_1,z_2}∗ [v_1, w_1, v_3, …, w_n])
+ ({z_2,z_3}∗ [v_2, w_2, v_3, …, w_n], ∂{z_2,z_3}∗ [v_2, w_2, v_3,…, w_n]).
Using the definition of the connecting morphism ∂_n+1^(2,1,0) and using the second part of <ref>, it similarly follows that the right hand term is equal to
(∂ P ∗ [ v_3, …, w_n], ∅) =
(∂ M^Δ^⊥, ∅)_*(η_2 ∗ξ_n-3, ∅) ∈ H_n(^(1), ^(0)).
Recall that the domain of ∂ M^Δ^⊥ is the combinatorial n-sphere ∂ P ∗ C_n-2. We decompose ∂ P into five combinatorial 2-balls; three of the form Δ^1 ∗ S^0, given by
{z_0, z_1}∗∂{⟨z⃗_0 + z⃗_2 ⟩, z_3}, {z_1, z_2}∗∂{⟨z⃗_0 + z⃗_2 ⟩, ⟨z⃗_1 + z⃗_3 ⟩}, {z_2, z_3}∗∂{z_0, ⟨z⃗_1 + z⃗_3 ⟩},
and two 2-simplices Δ^2, given by
{z_0, z_2, ⟨z⃗_0 + z⃗_2 ⟩}, {z_1, z_3, ⟨z⃗_1 + z⃗_3 ⟩}.
This induces a decomposition of ∂ P ∗ C_n-2 into five combinatorial n-balls; three are of the form (Δ^1 ∗ S^0) ∗ C_n-2≅Δ^1 ∗ C_n-1 and two are of the form Δ^2 ∗ C_n-2. Here, we have that S^0 = ∂{v_i, w_i} for i ∈{0,1,2} and we use the identification (Δ^1 ∗ S^0) ∗ C_n-2≅Δ^1 ∗ C_n-1 mapping
Δ^1 to Δ^1, v_i to 1, w_i to 1, j to j+1, j to j+1
for j ∈{1, …, n-2}. This allows us to express the homology class in <ref> as a sum of five terms. The first three correspond to the balls Δ^1 ∗ C_n-1 and are exactly the terms in <ref>. The other two correspond to the balls Δ^2 ∗ C_n-2 and are zero, since the image of the restriction of ∂ M^Δ^⊥ to these balls is entirely contained in ^(0). We conclude that the left and right hand term of <ref> agree.
Let Δ = {z_0, z_1, z_2} be a minimal σ-additive simplex in ^(2) such that z_0 < z_1 <z_2. Let [ v_2, …, w_n] ∈(⟨Δ⟩^⊥). We claim that
((_⟨ z_1, z_2 ⟩^⊥⊕_⟨ z_0, z_2 ⟩^⊥⊕_⟨ z_0, z_1 ⟩^⊥) ∘∂_) ([v_2, …, w_n]) =
(∂_n+1^^ω∘_⟨ z_0,z_1,z_2 ⟩ ^⊥)([ v_2, …, w_n]).
Using the definition of ∂_ (see
<ref>) and the last part of
<ref>, it follows that, under the
identifications made in the <ref>, the
left side term is equal to the following element in H_n(^(1), ^(0)):
({ z_1, z_2 }∗ [v_2, …, w_n], ∂{z_1,z_2}∗ [v_2, …, w_n])
- ({z_0, z_2}∗ [ v_2, …, w_n], ∂{z_0, z_2}∗ [ v_2, …, w_n])
+ ({z_0,z_1}∗ [ v_2, …, w_n], ∂{z_0,z_1}∗ [ v_2, …, w_n]).
Using the definition of the connecting morphism ∂_n+1^(2,1,0)
and the third part of <ref>,
it similarly follows that the
right hand term is equal to
(∂{z_0,z_1,z_2}∗ [ v_3, …,
w_n], ∅) =
(∂ M^Δ^⊥, ∅)_*(η_1 ∗ξ_n-2, ∅) ∈ H_n(^(1), ^(0)).
Recall that the domain of ∂ M^Δ^⊥ is the combinatorial n-sphere ∂Δ∗ C_n-1. We decompose ∂Δ = ∂{z_0,z_1,z_2} into three 1-simplices Δ^1 (given by {z_1, z_2}, {z_0, z_2} and {z_0,
z_1}), as illustrated in <ref>. This induces a decomposition of ∂Δ∗ C_n-1 into three combinatorial n-balls of the form Δ^1 ∗ C_n-1. This allows us to express the homology class <ref> as a sum
of three terms which are equal to <ref>. We conclude
that the left and right hand term of <ref> agree.
Consider a minimal σ simplex Δ = {v_1, w_1}∈^(2) with v_1 < w_1. Let [ v_2, w_2, …, v_n, w_n] ∈(⟨Δ⟩^⊥). We claim that
(_n ∘∂_σ)([ v_2, …,
w_n]) =
(∂_n^^ω∘_⟨ v_1,
w_1 ⟩^⊥)([ v_2, …, w_n]).
Using the definition of ∂_σ (see
<ref>) and the definition of the apartment class map
(see <ref>), it follows that the
left-hand term is equal to the following apartment class in ^ω_n:
_n([v_1, w_1, v_2, w_2, …, v_n, w_n]) = M_* (ξ_n-1) ∈H_n-1(T^ω_n; ) =^ω_n,
where M (C_n) → T^ω_n is the poset map constructed after <ref> for the symplectic basis M = (v⃗_1, w⃗_1, v⃗_2, w⃗_2, …, v⃗_n, w⃗_n).
Using the last part of <ref> and that, under the identifications made in <ref>, the map
∂^^ω_n identifies with the composition of the connecting
morphism ∂_n^(1,0) and the map s_*, the right-hand term can be computed as follows: We start by observing that
∂_n^(1,0)({v_1, w_1}∗ [v_2, …, w_n], ∂{v_1, w_1}∗ [v_2, …, w_n])
= ∂{v_1, w_1}∗ [v_2, …, w_n]
= ∂ M^Δ^⊥_*(η_0 ∗ξ_n-2) ∈H_n-1(^(0)).
The domain of ∂ M^Δ^⊥ is the combinatorial sphere ∂Δ∗ C_n-1 = ∂{v_1, w_1}∗ C_n-1. Using the isomorphism ∂{v_1, w_1}∗ C_n-1≅ C_n described after <ref> and the resulting identification of η_0 ∗ξ_n-2 and ξ_n-1, we may assume
∂ M^Δ^⊥ C_n →^(0)
and write
∂_n^(1,0)({v_1, w_1}∗ [v_2, …, w_n], ∂{v_1, w_1}∗ [v_2, …, w_n])
= ∂ M^Δ^⊥_*(ξ_n-1) ∈H_n-1(^(0)).
Now recall from the proof of <ref> that
s_*H_n-1(^(0)) ≅H_n-1((^(0))) →H_n-1(T^ω_n; ) = ^ω_n
is obtained by first passing to the barycentric subdivision and then applying the span map in <ref>. It follows that
s_*(∂ M^Δ^⊥_*(ξ_n-1)) = (s ∘(∂ M^Δ^⊥))_*(ξ_n-1) ∈H_n-1(T^ω_n; ) = ^ω_n,
where (∂ M^Δ^⊥)(C_n) →(^(0)) is the map that ∂ M^Δ^⊥ induces between the simplex posets of C_n and ^(0).
It then suffices to note that, under the previous identification, the map
s ∘(∂ M^Δ^⊥)(C_n) → T^ω_n
is exactly the map
M(C_n) → T^ω_n
used to define the apartment class map _n (compare <ref> et seq.). We conclude that
s_*(∂ M^Δ^⊥_*(ξ_n-1)) = M_* (ξ_n-1).
§.§ A diagram chase and the proof of Theorem 10.20
Using the results in the previous subsections, we are now ready to prove
<ref>. The induction argument is largely formal and relies on the
following diagram chasing lemma whose proof we leave to the reader.
Assume we are given a commutative diagram of abelian groups with rows
A_1,2 → A_1,3,
A_2,1 → A_2,2 → A_2,3 (the second arrow being ∂_σ),
A_3,1 → A_3,2 → A_3,3 → 0,
with vertical maps running down the three columns A_2,1 → A_3,1 → 0, A_1,2 → A_2,2 → A_3,2 → 0 and A_1,3 → A_2,3 → A_3,3 → 0, and with a diagonal map π: A_2,1 → A_1,3,
such that the first two columns A_*,1 and A_*, 2 are exact, the third row A_3,* is exact, and π as well as ∂_σ are surjective maps. Then the third column A_*,3 is exact.
The proof of <ref> is now by induction on the genus n of the symplectic module [2n].
Induction beginning: For n = 1, we consider the group 2
=2. The results for 2 presented here are closely related to the work of Church–Putman; in particular to <cit.>,
which was originally proved by Bykovskiĭ <cit.> and generalises work of Manin <cit.> for n = 1.
The sequence
_1 ⊕_1 _1 ^ω_1 ⟶ 0
is exact.
We start by noting that [1]^(2) = [1] and [1]^(1) = [1]. Hence, the following are a special case (m = 0) of the description of the complexes [1][m] and [1][m] given in the proof of <ref>:
* [1]^(2)= [1] only contains simplices of type standard of
dimension 0, σ of dimension 1 and σ-additive of
dimension 2. The complex is isomorphic to _2, a
contractible 2-dimensional simplicial complex.
* [1]^(1) = [1] is the subcomplex of [1]^(2) consisting
of all simplices of type standard and σ, i.e. the
1-skeleton of [1]^(2). The complex is isomorphic to the
subcomplex _2 of
_2, the connected simplicial graph known as the Farey
graph.
* [1]^(0) is the subcomplex of [1]^(2) consisting
of all standard simplices, i.e. the 0-skeleton of
[1]^(2). The complex is isomorphic to the Tits building T^ω_1 (which is a
discrete set in this case).
Using this description of ([1]^(2), [1]^(1), [1]^(0)),
the fact that ⟨Δ⟩^⊥ = {0} if Δ is a
σ-additive or σ simplex in [1]^(2), the convention
that ^ω({0}) = ({0}) =, that ({0}) = ({0}) = 0 and noting that _1 = 0, it follows that the commutative diagram constructed in <ref> has the shape depicted in <ref> for n = 1.
It is easy to check that the first two columns are
exact. By <ref> the bottom row is exact and by <ref>, π_ and
∂_σ are surjections. Therefore,
<ref> implies the claim.
Induction hypothesis: Assume that for any symplectic
submodule
⊊^2n of genus less than n, the sequence
() ⊕()
()
^ω()
⟶
0
is exact.
Induction step: The proof of the induction step is now
completely formal.
For n > 1, the sequence _n ⊕_n _n ^ω_n ⟶ 0 is exact.
For n > 1, it follows from the induction hypothesis that the first
two columns of the diagram in
<ref> are exact. <ref> says that the bottom row is exact.
Furthermore, by <ref>, π_σ^2⊕π_⊕π_ and ∂_σ
are surjections. Therefore, <ref>
implies
the claim.
§ THEOREM A: A VANISHING THEOREM
The presentation of ^ω_n we obtained in <ref> now lets us prove
<ref>, which states that the rational cohomology of 2n vanishes in degree n^2-1. By Borel–Serre duality, this is equivalent to H_1(2n; ^ω_n()⊗) being trivial (see <ref>).
In order to show the latter, we want to use <ref> and prove that rationally, the three
modules _n, _n, and
_n (see <ref>, <ref> and <ref>) are flat and have trivial coinvariants.
As before, {e⃗_1, f⃗_1, …, e⃗_n, f⃗_n} denotes the symplectic standard basis of ^2n.
The [2n]-modules _n⊗, _n⊗ and _n⊗ are flat.
The proof is similar to that of <cit.>. We first note that _n⊗ is a cyclic [2n]-module, generated by [e_1,f_1, …, e_n, f_n].
Let H be the subgroup of 2n that sends [e_1,f_1, …, e_n, f_n] to ± [e_1,f_1, …, e_n, f_n]
and let M be the [H]-module whose underlying vector space is and where H acts by ± 1, depending on its action on [e_1,f_1, …, e_n, f_n]. It is not hard to see that H is finite[This can be read off from the presentation of _n, <ref>, using the facts that an element in 2n is uniquely determined by its image on a basis of ^2n and that every line in ^2n contains at most two elements.].
This implies that M is a projective [H]-module.
Furthermore, we have
_n⊗≅_H^2n M,
so the claim follows just as in <cit.>.
The proofs for _n⊗ and _n⊗ are the same after verifying that these modules are cyclically generated by
[⟨e⃗_1 + f⃗_1⟩, e_1, f_1] ∗ [e_2, f_2, …, e_n, f_n] and [e_1, f_1, ⟨e⃗_2 - e⃗_1 ⟩ , f_2] ∗ [e_3, f_3, …, e_n, f_n], respectively.
For n≥1, the 2n-coinvariants of _n⊗ vanish.
There is an element ϕ of 2n defined by ϕ(e⃗_1) = f⃗_1, ϕ(f⃗_1) = -e⃗_1, and ϕ(e⃗_i) = e⃗_i , ϕ(f⃗_i) = f⃗_i for i≥ 2. Using the relations in _n, we have
ϕ([e_1,f_1, …, e_n,f_n]) = [f_1,e_1, …, e_n,f_n] = -[e_1,f_1, …, e_n,f_n].
Hence, in the coinvariants (_n⊗)⊗_2n, we have
[e_1,f_1, …, e_n,f_n]⊗ q = - [e_1,f_1, …, e_n,f_n]⊗ q for all q∈.
This implies that [e_1,f_1, …, e_n,f_n]⊗ q is trivial. As noted in the proof of <ref>, the module _n⊗ is generated by [e_1,f_1, …, e_n,f_n], so the claim follows.
For n≥2, the 2n-coinvariants of _n ⊗ vanish.
Let Δ = [⟨e⃗_1 + f⃗_1⟩, e_1, f_1] ∗ [e_2, f_2, …, e_n, f_n]
be the generator of _n ⊗ from the proof of <ref> and let ϕ∈2n be defined by ϕ(e⃗_2) = f⃗_2, ϕ(f⃗_2) = -e⃗_2 and ϕ(e⃗_i) = e⃗_i, ϕ(f⃗_i) = f⃗_i for i≠ 2. Using the relations in _n, we have
ϕ(Δ) = [⟨e⃗_1 + f⃗_1⟩, e_1, f_1] ∗ [f_2, e_2, …, e_n, f_n] = -Δ.
As in the proof of <ref>, this implies that (_n⊗)⊗_2n = 0.
For n≥3, the 2n-coinvariants of _n ⊗ vanish.
Let Δ = [e_1, f_1, ⟨e⃗_2 - e⃗_1 ⟩ , f_2] ∗ [e_3, f_3, …, e_n, f_n]
be the generator of _n ⊗ from the proof of <ref> and let ϕ∈2n be defined by ϕ(e⃗_3) = f⃗_3, ϕ(f⃗_3) = -e⃗_3 and ϕ(e⃗_i) = e⃗_i, ϕ(f⃗_i) = f⃗_i for i≠ 3. Using the relations in _n, we have
ϕ(Δ) = [e_1, f_1, ⟨e⃗_2 - e⃗_1 ⟩ , f_2] ∗ [f_3, e_3, …, e_n, f_n] = -Δ.
As in the proof of <ref>, this implies that (_n⊗)⊗_2n = 0.
We conclude by proving <ref>, which states that H^n^2-i(2n;) = 0 for i≤ 1 and n≥2.
For n = 2, this follows from work of Igusa <cit.>, see also Lee–Weintraub <cit.>. As commented after <ref>, our methods apply for n≥ 3 (and recover known results for n∈{3,4 }).
Let n≥ 3. Using Borel–Serre duality, we see that
H^n^2-i(2n;) ≅ H_i(2n; ^ω_n⊗) .
The latter can be computed using a flat resolution of ^ω_n⊗. From <ref>, we get a partial resolution that is flat by <ref>, <ref> and <ref> and hence can be extended to a flat resolution. Taking coinvariants of this flat resolution yields a chain complex whose homology is
H_*(2n; ^ω_n ⊗).
The theorem follows from <ref>, <ref>, and <ref>.
§ OVERVIEW OF DIFFERENT COMPLEXES
Benjamin Brück
Department of Mathematics
ETH Zürich
Rämistrasse 101
8092 Zürich, Switzerland
mailto:[email protected]@math.ethz.ch
Peter Patzt
Department of Mathematics
University of Oklahoma
601 Elm Avenue
Norman, OK-73019, USA
mailto:[email protected]@ou.edu
Robin J. Sroka
Department of Mathematics & Statistics
McMaster University
1280 Main Street West
Hamilton, ON L8S 4K1, Canada
mailto:[email protected]@mcmaster.ca
|
http://arxiv.org/abs/2306.02226v1
|
20230604012740
|
Variational convergence of the Scharfetter-Gummel scheme to the aggregation-diffusion equation and vanishing diffusion limit
|
[
"Anastasiia Hraivoronska",
"André Schlichting",
"Oliver Tse"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"math.AP"
] |
ROME: Testing Image Captioning Systems via Recursive Object Melting
Pinjia He
====================================================================
In this paper, we explore the convergence of the Scharfetter–Gummel scheme for the aggregation-diffusion equation using a variational approach. Our investigation involves obtaining a novel gradient structure for the finite volume scheme that works consistently for any nonnegative diffusion constant, which allows us to study the discrete-to-continuum and zero-diffusion limits simultaneously. The zero-diffusion limit for the Scharfetter–Gummel scheme corresponds to the upwind finite volume scheme for the aggregation equation. In both cases, we establish a convergence result in terms of gradient structures, recovering the Otto gradient flow structure for the aggregation-diffusion equation based on the 2-Wasserstein distance.
§ INTRODUCTION
In this paper, we study the convergence of the Scharfetter–Gummel numerical approximation for the aggregation-diffusion equation
ADE∂_t ρ_t = div( ϵ∇ρ_t + ρ_t ∇ V + ρ_t ∇ (W * ρ_t) ) in (0, T)×Ω,
which describes the evolution of a curve of Borel probability measures t↦ρ_t∈(Ω) on a bounded convex domain Ω⊂^d, where ϵ> 0 is a diffusion coefficient, V:^d→ is an external potential, and W:^d→ is an interaction potential. We impose the no-flux boundary condition
ϵ∂_νρ_t + ρ_t ∂_ν (V + W * ρ_t) = 0 on ∂Ω,
where ν denotes the outer normal vector on ∂Ω.
Our strategy employs a variational approach that not only provides the convergence of the Scharfetter–Gummel scheme but also a generalized gradient structure for the cases ϵ>0 and ϵ=0. In particular, the method allows us to prove the convergence of the Scharfetter–Gummel (ϵ>0) and upwind (ϵ=0) approximations to the Otto gradient flow solutions of (<ref>), which we outline in detail below.
The Scharfetter–Gummel flux approximation originates from <cit.>, where the authors construct a numerical scheme for a system modelling semiconductor devices.
Their objective was to develop a robust scheme for the system of equations with discontinuities or rapid variations in the potential. Independently, the same type of flux is introduced in <cit.> for finite-difference schemes. Thereafter, the Scharfetter–Gummel scheme became the preferred finite-volume scheme for the drift-diffusion or convection-diffusion equations. While the original scheme deals with the one-dimensional problem, it has been generalized to higher dimensional problems <cit.> and the flux discretization approach became the basis for numerous other generalizations, e.g. for equations with nonlinear diffusion <cit.> and to systems with source terms <cit.>.
To introduce the Scharfetter–Gummel scheme, we first introduce some common notations for finite-volume methods. Let {(^h,Σ^h)}_h>0 be a family of finite (admissible) tessellations of a bounded and convex set Ω⊂^d, where ^h is the family of cells and Σ^h⊂^h×^h contains pairs (K, L) that share a face, i.e. when K,L∈^h share a part of their boundary with positive (d-1)-dimensional Hausdorff measure, which we denote by (K|L). We further define ^h_K to be the set of cells adjacent K. With a slight abuse of notation, we adopt the notation K|L to denote pairs (K,L)∈Σ^h to distinguish between pairs (K,L)∈^h×^h. The parameter h>0 is the maximal diameter of the cells. We make the definitions precise in Section <ref>. For now, one can keep a Voronoi tessellation in mind as an example of an admissible tessellation.
We illustrate how the Scharfetter–Gummel flux appears in the finite-volume discretization of (<ref>). First, consider the case without interaction potential, i.e. W≡ 0. Rewriting (<ref>) as
∂_tρ_t + div j_t = 0, j_t = -ϵ∇ρ_t - ρ_t ∇ V,
integrating the first equation over a control volume K∈^h, and then applying the divergence theorem yields the discrete continuity equation
CE_h∂_t ρ^h_K + ^h,ρ_K = 0, with ^h,ρ_K ∑_L∈_K^h^h,ρ_K|L,
where the numerical approximation for the flux ^h,ρ_K|L should be well chosen to approximate the continuous flux j. The idea of the Scharfetter–Gummel flux discretization is to solve a cell problem for two adjacent cells K and L with barycenters x_K = _K x x and x_L = _L x x. Then, the cell problem is the one-dimensional boundary value problem: Find u∈ C^2([x_K,x_L]) satisfying
-∂_x (ϵ∂_x u + u q_K|L^h ) = 0 on [x_K, x_L]
u(x_K) = ρ^h_K /|K|, u(x_L) = ρ^h_L/|L|
for all (K,L)∈Σ^h,
where q_K|L^h is an approximation for the gradient of the potential term ∇ V in (<ref>) along a segment connecting x_K and x_L. The solution of (<ref>), which can be explicitly computed, is then used to define the
Scharfetter–Gummel flux <cit.>, defined for all (K|L)∈Σ^h as
_K|L^h,ρϵτ_K|L^h ( (q_K|L^h / ϵ) u^h_K - (- q_K|L^h / ϵ) u^h_L ), u^h_K ρ^h_K/|K|,
where τ_K|L^h |(K|L)| / |x_L - x_K| is called the transmission coefficient and (s) s / (e^s - 1) is the Bernoulli function. The Scharfetter–Gummel scheme then reads
SGE_h∂_t ρ^h_K + ∑_L∈_K^h^h,ρ_K|L = 0, ^h,ρ_K|L=ϵτ_K|L^h ( (q_K|L^h / ϵ) u^h_K-(- q_K|L^h / ϵ) u^h_L ).
We are interested in a generalization of the Scharfetter–Gummel scheme (<ref>) for (<ref>) that includes the interaction term W, which was considered in <cit.>. In this case, the form of the flux is the same as in (<ref>), but we include a discrete approximation of ∇ (W * ρ) = ∫_Ω∇ W (· - y) ρ ( y) of the form
q_K|L^h V^h_L - V^h_K + ∑_M∈^hρ^h_M (W^h_ML - W^h_MK), (K,L)∈Σ^h,
where W^h_MK W(x_K - x_M) for any (K, M) ∈^h×^h such that K≠ M.
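To make the scheme concrete, the following Python sketch assembles the Scharfetter–Gummel fluxes with the discrete drift above on a hypothetical uniform one-dimensional tessellation of Ω = (0,1) and runs a few forward-Euler steps of the resulting ODE system with no-flux boundary conditions; the potentials, parameters, and the explicit time discretization are our own illustrative choices and are not taken from the paper. Since the fluxes are antisymmetric in (K, L), the total mass is conserved exactly along the iteration.

import numpy as np

def bernoulli(s):
    # Bernoulli function B(s) = s / (e^s - 1), with B(0) = 1 as the limit value.
    s = np.asarray(s, dtype=float)
    out = np.ones_like(s)
    nz = np.abs(s) > 1e-12
    out[nz] = s[nz] / np.expm1(s[nz])
    return out

def sg_fluxes(rho, x, h, V, W, eps):
    # Scharfetter-Gummel flux across each interior face (K|K+1):
    #   j_{K|L} = eps * tau * ( B(q_{K|L}/eps) u_K - B(-q_{K|L}/eps) u_L ),
    # with u_K = rho_K/|K|, tau = 1/h in 1D, and the discrete drift
    #   q_{K|L} = V_L - V_K + sum_M rho_M ( W(x_L - x_M) - W(x_K - x_M) ).
    u = rho / h
    tau = 1.0 / h
    Q = V(x) + W(x[:, None] - x[None, :]) @ rho    # Q_K = V_K + sum_M W(x_K - x_M) rho_M
    q = Q[1:] - Q[:-1]
    return eps * tau * (bernoulli(q / eps) * u[:-1] - bernoulli(-q / eps) * u[1:])

h, eps, dt = 0.05, 0.5, 1e-4
x = np.arange(h / 2, 1.0, h)                       # cell barycenters
rho = np.full_like(x, h)                           # uniform initial probability vector
V = lambda y: 0.5 * (y - 0.3) ** 2                 # hypothetical external potential
W = lambda z: np.exp(-np.abs(z))                   # hypothetical interaction potential

for _ in range(100):
    j = sg_fluxes(rho, x, h, V, W, eps)
    div = np.zeros_like(rho)
    div[:-1] += j                                  # flux leaving cell K through (K|K+1)
    div[1:] -= j                                   # flux entering cell K+1 (antisymmetry)
    rho = rho - dt * div                           # explicit Euler step (illustrative only)
    assert np.isclose(rho.sum(), 1.0)              # mass conservation under no-flux boundary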
The important property of the numerical flux (<ref>) is that the Bernoulli function interpolates between appropriate discretizations of the pure diffusion and pure drift problems. In the absence of the potential, i.e., q_K|L^h = 0, the flux becomes ϵτ_K|L^h ( u_K - u_L ). More interestingly, in the vanishing diffusion limit ϵ→ 0, the Scharfetter–Gummel scheme converges to
Up_h∂_t ρ^h_K + ∑_L∈_K^h_K|L^h,ρ,Up =0, _K|L^h,ρ,Up= τ_K|L^h ( q_K|L^h,+u^h_K - q_K|L^h,- u^h_L ),
which is the upwind flux discretization for the aggregation equation
AE∂_t ρ = div (ρ∇ (V + W * ρ)) in (0, T) ×Ω.
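The interpolation property of the Bernoulli function can be checked by hand: ε B(q/ε) = q/(e^{q/ε} − 1) tends to 0 as ε → 0 when q > 0 and to |q| when q < 0, so that in the limit only one of the two cell densities enters the flux, weighted by |q|. A minimal numerical illustration of this degeneration towards an upwind-type flux (all values below are hypothetical):

import numpy as np

def bernoulli(s):
    # B(s) = s / (e^s - 1), extended by B(0) = 1.
    s = float(s)
    return 1.0 if abs(s) < 1e-12 else s / np.expm1(s)

u_K, u_L, tau, q = 2.0, 0.5, 1.0, 0.7      # hypothetical densities and drift across one face
for eps in [1.0, 0.1, 0.02, 0.005]:
    sg = eps * tau * (bernoulli(q / eps) * u_K - bernoulli(-q / eps) * u_L)
    print(f"eps = {eps:6.3f}:  eps*B(q/eps) = {eps * bernoulli(q / eps):.4f},"
          f"  eps*B(-q/eps) = {eps * bernoulli(-q / eps):.4f},  flux = {sg:.4f}")
# eps*B(q/eps) -> 0 and eps*B(-q/eps) -> |q| as eps -> 0 (here q > 0), so in the limit the
# flux depends on the density of a single cell only, weighted by |q|: the upwind mechanism.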
The convergence of the discrete approximation to the weak solutions of (<ref>) in the absence of an external potential is proven in <cit.>. Moreover, it was shown there that the discrete solutions satisfy an energy-dissipation inequality along the evolution, which is an important structure-preserving property. We aim to go one step further and prove the convergence of a variational structure for (<ref>) to the Otto gradient-flow structure for (<ref>).
§.§.§ Strategy and outline
The goal of this paper is to complete the commutative diagram in Figure <ref> below, where the convergence results correspond to the convergence of gradient-flow structures. To make the goal clear, we briefly explain the gradient structures involved and the type of convergences we are interested in.
The right-hand side of Figure <ref> corresponds to the continuous setting that is rather well understood. The Otto-Wassertein gradient-flow theory <cit.> provides a gradient-flow formulation for the aggregation-diffusion equation (<ref>) with respect to the L^2-Wasserstein metric and the driving energy
(Ω)∋ρ↦_ϵ(ρ) = ϵ∫_Ωϕ( ρ/^d) ^d + ∫_Ω V ρ + 1/2∫_Ω ( W * ρ ) ρ if ρ≪^d,
+∞ otherwise,
where ϕ(s)=s log s -s +1 for s∈_+ and ^d denotes the Lebesgue measure on ^d. Here, we consider gradient flow solutions to (<ref>) in terms of the Energy-Dissipation Balance (EDB), which we now describe. We begin by recalling that (<ref>) can be expressed as
∂_tρ_t + div j_t = 0 in (0, T) ×Ω, CE
j_t = -ρ_t ∇_ϵ'(ρ_t), KR
where (<ref>) suggests that the density-flux pair (ρ,j) satisfies the continuity equation, while (<ref>) describes the relationship between the force -∇_ϵ'(ρ_t) and the flux j_t, which we call the kinetic relation.
By introducing a dual dissipation potential ^* : (Ω) × C_b(Ω;^d) →_+,
^*(ρ, ξ) = 1/2∫_Ω |ξ|^2 ρ,
the kinetic relation (<ref>) may be further expressed as
j_t = D_2^*(ρ_t, -∇_ϵ'(ρ_t)).
Via Legendre-Fenchel duality, we obtain a variational characterization of the kinetic relation:
(ρ_t, j_t) + ^*(ρ_t, -∇_ϵ'(ρ_t)) = ⟨ j_t,-∇_ϵ'(ρ_t)⟩,
where the dissipation potential is the Legendre dual of ^* w.r.t. its second argument, i.e.,
(ρ,j)∈(Ω)×(Ω;^d)↦(ρ,j) = 1/2∫_Ω| j/ρ|^2 ρ,
where (Ω;^d) is the space of finite ^d-valued Radon measures. Under the chain rule
CR
-/ t_ϵ(ρ_t) = ⟨ j_t,-∇_ϵ'(ρ_t)⟩,
along density-flux pairs (ρ,j) satisfying the continuity equation (<ref>), one arrives at a variational expression for the solution of (<ref>). Indeed, integrating (<ref>) over arbitrary intervals [s,t]∈[0,T] and employing the chain rule (<ref>), one obtains the Energy-Dissipation Balance:
EDB_ϵ^[s,t] (ρ, j) ∫_s^t (ρ_r,j_r) + ^*(ρ_r, - ∇'_ϵ(ρ_r)) r + _ϵ(ρ_t) - _ϵ(ρ_s) = 0.
Morally, any pair (ρ,j) satisfying the continuity equation (<ref>) and (<ref>) is said to be an (, , ^*)-gradient flow solution of (<ref>) if it satisfies, additionally, the chain rule (<ref>). Although there are other ways of defining gradient flow solutions to (<ref>), we choose to use the definition based on EDB since it works well in the generalized gradient flow setting <cit.>, as seen below.
For λ-convex functionals _ϵ w.r.t. the Wasserstein distance W_2, it is a standard result of evolutionary -convergence for gradient flows <cit.> that, as ϵ→ 0, the gradient flow solutions of (<ref>) converge to the gradient flow solutions of the corresponding aggregation equation (<ref>).
The left-hand side of Figure <ref> corresponds to the discrete setting for which the gradient structure is not well understood. For this reason, our first objective is to present a generalized gradient-flow (GGF) formulation for the Scharfetter–Gummel scheme (<ref>). In particular, we show in Section <ref> that the scheme fits into the (by now, common) `cosh' gradient-structure framework with the discrete driving energy _ϵ,h: (^h) →_+,
_ϵ,h(ρ^h) = ϵ∑_K∈^hϕ(u^h_K)|K| + ∑_K∈^h V^h_K ρ^h_K + 1/2∑_(K, L)∈^h×^h W^h_KLρ^h_K ρ^h_L, u^h_K ρ^h_K/|K|,
and discrete dual dissipation potential _ϵ,h^*: (^h) ×(Σ^h) →_+ defined in (<ref>), where (A) denotes the set of bounded functions on A.
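For orientation, the discrete driving energy _ϵ,h can be assembled directly from the formula above; the Python sketch below does so on a hypothetical uniform one-dimensional tessellation with illustrative potentials and parameters (none of the specific choices come from the paper).

import numpy as np

def discrete_energy(rho, vol, x, V, W, eps):
    # E_{eps,h}(rho) = eps * sum_K phi(u_K) |K| + sum_K V(x_K) rho_K
    #                  + 1/2 sum_{K,L} W(x_K - x_L) rho_K rho_L,   phi(s) = s log s - s + 1.
    u = rho / vol
    phi = np.where(u > 0, u * np.log(np.maximum(u, 1e-300)) - u + 1.0, 1.0)
    return (eps * np.sum(phi * vol)
            + np.sum(V(x) * rho)
            + 0.5 * np.sum(W(x[:, None] - x[None, :]) * np.outer(rho, rho)))

h, eps = 0.05, 0.2
x = np.arange(h / 2, 1.0, h)                    # barycenters of a uniform tessellation of (0,1)
vol = np.full_like(x, h)                        # cell volumes |K|
rho = np.exp(-10 * (x - 0.5) ** 2)
rho /= rho.sum()                                # a probability vector rho^h
E = discrete_energy(rho, vol, x,
                    V=lambda y: 0.5 * (y - 0.3) ** 2,
                    W=lambda z: np.exp(-np.abs(z)),
                    eps=eps)
print(E)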
That being said, the `cosh' gradient structure turns out to be ill-suited for proving the desired convergence due to the inclusion of the interaction potential W, which gives rise to a dissipation potential that depends on W and ρ^h. This phenomenon is known as tilt-dependence of gradient systems and was recently discussed in detail in <cit.>, where it was established that tilt-independent gradient structures give rise to better convergence properties. Using the de-tilting technique <cit.>, we introduce a new tilt-independent gradient structure for the Scharfetter–Gummel scheme in the presence of both external and interaction potentials (cf. Section <ref>), which allows us to pass to the h→ 0 and ϵ→ 0 limits.
We show in Section <ref> that the Scharfetter–Gummel scheme (<ref>) possesses a gradient structure with driving energy _ϵ,h (cf. (<ref>)) and the tilt-independent dual dissipation potential _ϵ,h^* given by
_ϵ,h^*(ρ^h, ξ^h) 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K, u^h_L, ξ^h_K|L/2, u^h_K ρ^h_K/|K|,
where α_ϵ^*:_+×_+×→_+ is defined (see Lemma <ref> for more details) for any ϵ>0 by
α_ϵ^*(a, b, ξ) ϵ∫_0^ξsinh⟨[|]x/ϵΛ_H⟨*|a e^-x/ϵ, b e^x/ϵ x= ϵ^2 α_1^* ⟨[|]a, b, ξ/ϵ.
Hereby the harmonic-logarithmic mean Λ_H : _+ ×_+ →_+ (see also Lemma <ref>) is given as
Λ_H (s, t) 1/Λ( 1/s, 1/t ) with Λ(s, t) = s - t/log s - log t for s t.
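Although α_ϵ^* is given only through an integral, it is straightforward to evaluate numerically. The Python sketch below (illustrative arguments, elementary trapezoidal quadrature) computes Λ_H and α_ϵ^* and checks the scaling identity α_ϵ^*(a, b, ξ) = ϵ^2 α_1^*(a, b, ξ/ϵ) displayed above.

import numpy as np

def harm_log_mean(s, t):
    # Harmonic-logarithmic mean Lambda_H(s,t) = 1/Lambda(1/s,1/t) = s*t*(log t - log s)/(t - s),
    # extended by Lambda_H(s,s) = s.
    s, t = np.asarray(s, float), np.asarray(t, float)
    same = np.isclose(s, t)
    den = np.where(same, 1.0, t - s)
    return np.where(same, s, s * t * (np.log(t) - np.log(s)) / den)

def alpha_star(a, b, xi, eps, n=20001):
    # alpha_eps^*(a,b,xi) = eps * int_0^xi sinh(x/eps) Lambda_H(a e^{-x/eps}, b e^{x/eps}) dx,
    # evaluated with a simple trapezoidal rule.
    x = np.linspace(0.0, xi, n)
    f = np.sinh(x / eps) * harm_log_mean(a * np.exp(-x / eps), b * np.exp(x / eps))
    dx = x[1] - x[0]
    return eps * (np.sum(f) - 0.5 * (f[0] + f[-1])) * dx

a, b, xi, eps = 1.3, 0.4, 0.25, 0.5             # hypothetical arguments
lhs = alpha_star(a, b, xi, eps)
rhs = eps ** 2 * alpha_star(a, b, xi / eps, 1.0)
print(lhs, rhs)                                  # agree up to quadrature error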
Based on these definitions, the two equations in (<ref>) become a discrete continuity equation for the density-flux pair (ρ^h,j^h) and a kinetic relation providing a force-flux relation:
∂_t ρ^h_t + div j^h_t = 0 in (0,T) ×^h, CE_h
j^h_t = D_2 _ϵ,h^* (ρ^h_t, -_ϵ,h'(ρ^h_t)), KR_h
where φ(K,L) = φ(L)-φ(K) is the discrete gradient. Together with the discrete chain rule
CR_h
-/ t_ϵ,h(ρ_t^h) = ⟨ j_t^h,-_ϵ,h'(ρ_t^h)⟩,
the pair (ρ^h,j^h) is shown to satisfy the discrete Energy-Dissipation Balance:
EDB_h_ϵ,h^[s,t] (ρ^h, j^h) ∫_s^t _ϵ,h(ρ^h_r, j^h_r) + _ϵ,h^*(ρ^h_r, -_ϵ,h'(ρ_t^h)) r + _ϵ,h(ρ^h_t) - _ϵ,h(ρ^h_s) = 0,
for any interval [s,t]⊂[0,T].
Our main interest lies in establishing discrete-to-continuum convergence results that connect the left-hand and the right-hand sides of Figure <ref>. For the convergence of (<ref>) to (<ref>) (top horizontal arrow), we define the GGF solutions to (<ref>) as the minimizers of the energy-dissipation functional _ϵ,h corresponding to the tilt-independent structure defined through (<ref>) (cf. Section <ref>). We then follow a similar strategy as in <cit.>, which studies the diffusive limit of random walks on tessellations using variational techniques. However, every step of the strategy requires an adaptation to the new gradient structure. The main challenge here is to prove a -convergence result for the Fisher information, which takes the form
_ϵ,h(ρ^h) _ϵ,h^*(ρ^h, -_ϵ,h'(ρ^h)) =∑_(K,L)∈Σ^hβ_ϵ (u^h_K, u^h_L) τ_K|L^h + _ϵ,h^1 (ρ^h) + _ϵ,h^2 (ρ^h),
where β_ϵ(a,b)α_ϵ^* (a, b, -ϵlog√(b/a)) with α_ϵ^* from (<ref>), and _ϵ,h^1, _ϵ,h^2 are defined in Section <ref>.
The splitting mimics the expanded form of the continuous Fisher information:
_ϵ(ρ) ^*(ρ, - ∇'_ϵ(ρ)) = 2ϵ^2 ∫*∇√(u)^2 x + ϵ∫∇ u ·∇𝖰(ρ) x + 1/2∫*∇𝖰(ρ) ^2 u x,
where 𝖰(ρ) = V + W∗ρ.
The function β_ϵ depending on α_ϵ^* in (<ref>) is only defined by an integral, which makes it more difficult to work with as compared to the Fisher information for the `cosh' structure studied in <cit.>. Nevertheless, it satisfies (see Lemma <ref>) the bounds
ϵ^2/4 (a - b)^2/(a + b) ≤β_ϵ (a, b) ≤ϵ^2/2⟨*|√(b) - √(a)^2, a, b ≥ 0,
thereby allowing us to prove a -convergence result for β_ϵ (cf. Section <ref>), albeit under more stringent assumptions on the tessellations compared to <cit.>. Additionally, we will need to establish new convergence results for the other parts of ^h that depend on the interaction term q_K|L^h.
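These bounds can be probed numerically from the integral definition of β_ϵ; the sketch below (random sample values and a simple trapezoidal quadrature, both purely illustrative) evaluates β_ϵ(a, b) = α_ϵ^*(a, b, -ϵlog√(b/a)) and compares it with the two quadratic expressions.

import numpy as np

def harm_log_mean(s, t):
    # Lambda_H(s,t) = s*t*(log t - log s)/(t - s), with Lambda_H(s,s) = s.
    s, t = np.asarray(s, float), np.asarray(t, float)
    same = np.isclose(s, t)
    den = np.where(same, 1.0, t - s)
    return np.where(same, s, s * t * (np.log(t) - np.log(s)) / den)

def beta_eps(a, b, eps, n=20001):
    # beta_eps(a,b) = alpha_eps^*(a, b, -eps*log sqrt(b/a)), via trapezoidal quadrature.
    xi = -eps * np.log(np.sqrt(b / a))
    x = np.linspace(0.0, xi, n)
    f = np.sinh(x / eps) * harm_log_mean(a * np.exp(-x / eps), b * np.exp(x / eps))
    dx = x[1] - x[0]
    return eps * (np.sum(f) - 0.5 * (f[0] + f[-1])) * dx

rng = np.random.default_rng(0)
eps = 0.3                                        # illustrative diffusion parameter
for _ in range(5):
    a, b = rng.uniform(0.1, 5.0, size=2)
    lower = eps ** 2 / 4 * (a - b) ** 2 / (a + b)
    upper = eps ** 2 / 2 * (np.sqrt(b) - np.sqrt(a)) ** 2
    val = beta_eps(a, b, eps)
    print(f"{lower:.6f} <= {val:.6f} <= {upper:.6f}")
    assert lower - 1e-7 <= val <= upper + 1e-7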
The arrow with ϵ→ 0 on the left side of Figure <ref> refers to the convergence of the Scharfetter–Gummel scheme (<ref>) to the upwind approximation (<ref>) as ϵ→ 0 in terms of the generalized gradient structure. Since the state space is a fixed finite tessellation, this result is not difficult to obtain. On the contrary, the convergence of the upwind scheme (<ref>) to the aggregation equation (<ref>) appears to be very challenging. The difficulty is described in the literature but is still not well studied. The intuitive idea is that the structure of the tessellation can lead to strong oscillations in the solutions of the discrete continuity equation. More specifically, unlike in the 1-dimensional case, one can not expect propagation of the BV-bound, assuming that the initial data is in BV. Indeed, there is a simple example of a 2-dimensional tessellation consisting of lines of squares with size h alternating with lines of squares with size h/2, for which the total variation of the discrete solutions blows up as h^-1/2 even for a constant velocity field (see details in <cit.>).
On the other hand, the convergence results in the strong topology are available on general tessellations for Lipschitz velocity fields <cit.>. When one treats general tessellations and rough velocity fields simultaneously, the convergence is proven in the weak topology <cit.> for time-explicit upwind schemes on Cartesian grids and time-implicit upwind schemes on regular general meshes. A first variational method for Fokker-Planck equations based on upwind dissipation functionals is contained in <cit.>. See also <cit.> for a study on general graphs and their continuum limits.
A new method for proving regularity estimates for solutions of the discrete continuity equations with non-Lipschitz velocity field and non-Cartesian but periodic tessellations is found in <cit.>, which is significant for future research in this area. Given the state-of-art, at the moment, we cannot expect to prove the discrete-to-continuum convergence of the gradient structure for (<ref>) for general tessellations. Nevertheless, we obtain a convergence result for the Cartesian grid. We believe that this result is already worthwhile since it does not require any assumptions on the integrability of the initial data, allowing us to include atomic measures as initial data.
To summarize, the rest of the paper is organized as follows. In Section <ref>, we specify the assumptions on tessellations and potentials and present the main results. We introduce the gradient structure for (<ref>) and two generalized gradient structures for finite volume schemes in Section <ref>. The subsequent sections contain the proofs of the convergence results. Section <ref> is dedicated to the discrete-to-continuum convergence of (<ref>) to (<ref>). The vanishing diffusion limit ϵ→ 0 from (<ref>) to (<ref>) is presented in Section <ref>. We deal with the convergence of (<ref>) to (<ref>) in Section <ref>.
§.§ Acknowledgments
A.H. and O.T. acknowledge support from NWO Vidi grant 016.Vidi.189.102 on "Dynamical-Variational Transport Costs and Application to Variational Evolution". A.S. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 – 390685587, Mathematics Münster: Dynamics–Geometry–Structure.
§ ASSUMPTIONS AND MAIN RESULTS
We specify our assumptions on the family of tessellations in Section <ref> and the external and interaction potentials in Section <ref>. The main results of this paper are summarized in Section <ref>.
§.§ Assumptions on tessellations
Let Ω⊂^d be an open bounded convex set. A tessellation (^h,Σ^h) covering Ω consists of a family ^h of mutually disjoint cells (usually denoted by K or L) that are open convex sets and Ω⊂⋃_K∈^h K, and a family Σ^h ={ (K, L)∈^h×^h : ℋ^d-1 (K∩L) > 0 } of pairs of cells with a common face. Here, ℋ^d-1 denotes the (d-1)-dimensional Hausdorff measure.
The common face of a pair (K,L)∈Σ^h is denoted by (K|L). The characterizing size of a tessellation is its maximum diameter:
h max{diam(K), K∈^h}.
The maximum diameter h>0 gives an upper bound on the volumes of the cells |K|≤ C_d h^d and faces |(K|L)| ≤ C_d-1 h^d-1, where C_d, C_d-1>0 are universal constants depending only on the spatial dimension d≥ 1. In our work, it is also necessary to assume lower bounds on the volumes of the cells to prevent the degeneration of cells, which is guaranteed by the following non-degeneracy assumption.
Non-degeneracy.
There exists ζ∈ (0, 1) such that
* For each K∈^h, there is an inner ball B(x_K, ζ h) ⊂ K with x_K = _K x x;
* For every (K,L)∈Σ^h it holds that |(K|L)| ≥ζ h^d-1.
We now summarize the assumptions on the tessellations used within this paper.
Admissible tessellations. The family of tessellations {(^h, Σ^h)}_h>0 satisfies
Ass{ for any h>0, all cells K∈^h are open, convex, and mutually disjoint;
{(^h, Σ^h)}_h>0 is non-degenerate with some ζ∈(0, 1) independent of h..
A standard assumption, often embedded in the definition of admissible tessellations in the finite-volume setup, is the following orthogonality assumption.
Orthogonality. For all (K, L)∈Σ^h, the face (K|L) is orthogonal to the vector x_L - x_K, i.e.
Ort
(K|L) ⊥ (x_L - x_K),
where x_K = _K x x and x_L = _L x x.
We assume (<ref>) throughout this paper, and we indicate explicitly in the corresponding statements when we require the orthogonality assumption (<ref>).
§.§ Assumptions on potentials
We assume the following properties for the potentials.
Assumptions on V.
The external potential V∈Lip(^d)∩ C^1(^d) is bounded from below.
Assumptions on W.
The interaction potential W ^d → is nonnegative, i.e. W(x) ≥ 0 for all x∈^d, and symmetric, i.e. W(x) = W(-x). In addition, we assume the interaction potential to be either a pointy potential
Pointy
W ∈Lip(^d) ∩ C^1(^d\{0}),
or a continuously differentiable potential
C^1
W ∈Lip(^d)∩ C^1(^d).
A typical example of interaction potentials appearing in mathematical models of the collective behaviour of individuals is the Morse potential
W(x) = C_r e^-|x|/ℓ_r - C_a e^-|x|/ℓ_a,
where ℓ_a and ℓ_r represent the attractive and
repulsive potential ranges and C_a and C_r represent their
respective amplitudes. With the choice C_r ≥ C_a > 0 and ℓ_r ≥ℓ_a, it holds that W(x) ≥ 0 for all x∈^d and W satisfies (<ref>).
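A quick numerical sanity check of this sign condition, with purely illustrative parameter values, is the following.

import numpy as np

def morse(x, C_r, C_a, l_r, l_a):
    # Morse potential W(x) = C_r exp(-|x|/l_r) - C_a exp(-|x|/l_a).
    return C_r * np.exp(-np.abs(x) / l_r) - C_a * np.exp(-np.abs(x) / l_a)

x = np.linspace(-10.0, 10.0, 4001)
# Repulsion at least as strong (C_r >= C_a) and at least as long-ranged (l_r >= l_a): W >= 0.
print(morse(x, C_r=2.0, C_a=1.0, l_r=1.5, l_a=1.0).min())
# If instead the attraction has the longer range, W dips below zero at large |x|.
print(morse(x, C_r=2.0, C_a=1.0, l_r=1.0, l_a=2.0).min())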
As mentioned above we define the discrete potentials accordingly as
V^h_K V(x_K) for K∈^h, and
W^h_KL W(x_L - x_K) for (K,L)∈^h ×^h.
We claim in Lemma <ref> that the assumptions on V and W indicated above imply
that
q_K|L^h = ∇ (V + W * ρ̂^h )(x_K) · (x_L - x_K) + o(h)|_h→ 0,
This equality will play an important role in several statements of this paper. Due to the assumptions on the potentials V and W, we further deduce that
|q_K|L^h| ≤ c_pot h for all (K,L)∈Σ^h,
with c_potLip(V) + Lip(W).
We could have also defined V^h_K _K V(x) x for K∈^h and W^h_KL_K _L W(x - y) x y for (K,L) ∈^h×^h. One can verify that (<ref>) remains true and all the results of this paper hold also with these definitions.
§.§ Main results
To see the scope of the main results, we indicate the corresponding statements on the arrows in Figure <ref>.
Our first statement is that the Scharfetter–Gummel scheme (<ref>) possesses a generalized gradient structure. This allows us to define the GGF solution to (<ref>) as a pair (ρ^h, j^h) satisfying the continuity equation (<ref>) that is a minimizer of the energy-dissipation functional (<ref>). All components of the energy-dissipation functional are made precise in Section <ref>, and Lemma <ref> proves that the structure is indeed correct.
Section <ref> is devoted to the discrete-to-continuum convergence of the Scharfetter–Gummel scheme as h→ 0 for a fixed diffusion coefficient ϵ > 0. To relate the discrete objects with the continuum, we employ the following reconstruction procedure for a density-flux pair (ρ^h, j^h) satisfying (<ref>)
ρ̂^h/^d∑_K∈^hρ^h(K)/|K|_K, ^h ∑_(K,L) ∈Σ^h j^h_K|L σ_K|L^h,
where σ_K|L^h∈(Ω; ^d) are chosen in a way such that for any (ρ^h, j^h) satisfying the discrete continuity equation (<ref>) the lifted pair (ρ̂^h, ^h) satisfies the continuous continuity equation (<ref>). The existence of such measures σ_K|L^h∈(Ω; ^d) was shown in <cit.>.
The main theorems are the following.
Let {(_h,Σ_h)}_h>0 be a family of tessellations satisfying (<ref>) and (<ref>), and assume (<ref>) to hold for the interaction potential W. Further, let {(ρ^h,j^h)}_h>0 be a family of GGF-solutions (<ref>) with
initial data {ρ_in^h}_h>0 having sup_h>0_h(ρ_in^h) < ∞, such that there exists ρ_in∈dom with
ρ̂_in^h/^d→ρ_in/^d in L^1(Ω)
and lim_h→ 0_h(ρ_in^h) = (ρ_in).
Then there exists a (not relabelled) subsequence of admissible continuous reconstructions {(ρ̂^h, ^h)}_h>0 and a limit pair (ρ,j) such that
* (ρ,j) satisfies (<ref>) with the density u ρ/^d ∈ L^1((0, T)×Ω) and
* ρ̂^h_t/^d → u_t in L^1(Ω) for every t∈ [0, T];
* ∫_·^h_t t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Ω).
* the following liminf estimate holds: For any [s,t]⊂[0,T],
_ϵ^[s,t](ρ, j) ≤lim inf_h→ 0_ϵ,h^[s,t](ρ^h, j^h),
where the energy-dissipation functional _ϵ is given by
^[s,t]_ϵ (ρ, j) = ∫_s^t {(ρ_r,j_r) + _ϵ(ρ_r)} r + _ϵ(ρ_t) - _ϵ(ρ_s),
with the dissipation potential given in (<ref>) and Fisher information _ϵ:(Ω)→[0,+∞],
_ϵ(ρ) = 2ϵ^2∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω|∇𝖰(ρ)|^2 ρ
if ρ≪^d with u=ρ/^d and +∞ otherwise. Recall that 𝖰(ρ) = V + W∗ρ.
* (ρ,j) is the gradient flow solution of (<ref>) with the energy-dissipation functional .
In Section <ref>, we fix a tessellation (^h, Σ^h) with some h>0 and consider the dependence of the discrete energy-dissipation functional
_ϵ,h^[s,t] (ρ^h, j^h) = ∫_s^t _ϵ,h(ρ^h_r, j^h_r) + _ϵ,h(ρ^h_r) r + _ϵ,h(ρ^h_t) - _ϵ,h(ρ^h_s),
on the diffusion coefficient ϵ > 0. We have the following convergence statement.
Let (^h, Σ^h) be a non-degenerate tessellation with a fixed h>0. Let { (ρ^ϵ,h, j^ϵ,h) }_ϵ>0 be a family of GGF-solutions to (<ref>) with
initial data {ρ_in^ϵ,h}_ϵ>0 having sup_ϵ>0_ϵ,h(ρ_in^ϵ,h) < ∞, such that there exists ρ_in^h∈dom _up,h with
ρ_in^ϵ,h(K) →ρ_in^h(K) for every K∈^h and lim_ϵ→ 0_ϵ,h(ρ_in^ϵ,h) = _up,h(ρ_in^h),
where _up,h:(^h)→ is given by
_up,h(ρ) = ∑_K∈^h V^h_K ρ_K + 1/2∑_(K,L)∈^h×^h W^h_KLρ_K ρ_L.
Then there exists a (not relabelled) subsequence of measure-flux pairs { (ρ^ϵ,h, j^ϵ,h) }_ϵ>0 and the limit pair (ρ^up,h, j^up,h) such that
* (ρ^up,h, j^up,h) satisfies (<ref>) and
* ρ^ϵ,h_t ⇀ρ^up,h_t weakly in (^h) for all t∈ [0,T];
* ∫_· j^ϵ,h_t t ⇀^* ∫_· j^up,h_t t weakly-* in ((0, T) ×Σ^h).
* the following liminf estimate holds: For any [s,t]⊂[0,T],
_up,h^[s,t](ρ^up,h, j^up,h) ≤lim inf_ϵ→ 0_ϵ,h^[s,t](ρ^ϵ,h, j^ϵ,h),
where the energy-dissipation functional _up,h is given by
_up,h^[s,t](ρ^h, j^h) ∫_s^t {_up,h (ρ^h_r, j^h_r) + _h, up (ρ^h_r)} r + _h, up (ρ^h_t) - _h, up (ρ^h_s),
with driving energy _up,h, dissipation potential
_up,h (ρ^h, j^h) = ∑_(K,L)∈Σ^hτ_K|L^h( u^h_K | j^h,+_K|L/τ_K|L^hu^h_K |^2 + u^h_L | j^h,-_K|L/τ_K|L^hu^h_L |^2 ) ,
and Fisher information
_up,h (ρ^h) = ∑_(K,L)∈Σ^hτ_K|L^h( u^h_K | q_K|L^h,+/2|^2 + u^h_L | q_K|L^h,-/2|^2 ).
* (ρ^up,h, j^up,h) is the GGF-solution to the upwind scheme (<ref>).
In Section <ref>, we make a first step towards a convergence result from the upwind scheme (<ref>) to the aggregation equation (<ref>).
Let {(^h,Σ^h)}_h>0 be a family of Cartesian tessellations with edges of length h>0. Let the interaction potential W satisfy (<ref>). Further, let {(ρ^h,j^h)}_h>0 be a family of GGF-solutions to the upwind scheme (<ref>) with initial data {ρ_in^h}_h>0 having sup_h>0_up,h(ρ_in^h) < ∞, such that there exists ρ_in∈dom _agg with
ρ̂^h_in⇀ ^*ρ_in weakly-* in (Ω) and lim_h→ 0_up,h(ρ_in^h) = _agg(ρ_in),
where _agg:(Ω)→ is given by
_agg(ρ) = ∫_Ω V ρ + 1/2∫_Ω (W*ρ) ρ.
Then there exists a (not relabelled) subsequence of admissible continuous reconstructions {(ρ̂^h, ^h)}_h>0 and a limit pair (ρ,j) such that
* (ρ, j) satisfies (<ref>) and
* ρ̂^h_t ⇀^* ρ_t weakly-* in (Ω) for any t∈ [0, T];
* ∫_·^h_t t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Ω).
* the following liminf estimate holds for any [s,t]⊂[0,T],
_agg^[s,t](ρ, j) ≤lim inf_h→ 0_up,h^[s,t](ρ^h, j^h),
where the energy-dissipation functional is given by
_agg^[s,t](ρ, j) = ∫_s^t {(ρ_r,j_r) + _agg(ρ_r) } r + _agg(ρ_t) - _agg(ρ_s),
with driving energy _agg, dissipation potential given in (<ref>) and Fisher information
_agg(ρ) 1/2∫_Ω | ∇𝖰(ρ) |^2 ρ, 𝖰(ρ) = V + W∗ρ.
* (ρ,j) is the gradient flow solution to the aggregation equation (<ref>).
Finally, and to close the commutative diagram in Figure <ref>, we present the vanishing diffusion limit on the continuous level.
Let the interaction potential W satisfy (<ref>). Let {(ρ^ϵ, j^ϵ)}_ϵ>0 be a family of gradient flow solutions to the aggregation-diffusion equation (<ref>) with diffusion coefficients ϵ>0 and initial data {ρ_in^ϵ}_ϵ>0 having sup_ϵ>0_ϵ(ρ_in^ϵ) < ∞, such that there exists ρ_in∈dom _agg with
ρ^ϵ_in⇀ ^*ρ_in weakly-* in (Ω) and lim_ϵ→ 0_ϵ(ρ_in^ϵ) = _agg(ρ_in).
Then there exists a limit pair (ρ, j) and a (not relabelled) subsequence such that
* (ρ, j) satisfies (<ref>) and
* ρ^ϵ_t ⇀^* ρ_t weakly-* in (Ω) for any t∈ [0,T];
* ∫_. j^ϵ_t t ⇀^* ∫_. j_t t in ((0, T)×Ω).
* the following liminf estimate holds for any [s,t]⊂[0,T]
_agg^[s,t] (ρ, j) ≤lim inf_ϵ→ 0_ϵ^[s,t] (ρ^ϵ, j^ϵ),
with _agg^[s,t] defined in (<ref>).
* (ρ,j) is the gradient flow solution to the aggregation equation (<ref>).
§ GRADIENT STRUCTURES: DISCRETE AND CONTINUOUS
This section is devoted to defining our notion of (generalized) gradient flow solution to each equation of interest. We begin with the continuous case in Section <ref>, which is the well-known Otto-Wasserstein gradient structure (see <cit.> for a more extensive study on this). We then introduce, in a similar fashion to the continuous case, generalized gradient structures for general finite volume schemes in Section <ref>, and proceed with providing two such structures for the Scharfetter–Gummel scheme in Section <ref>. We end this section with a summary of the discrete structure we consider in the rest of the article.
§.§ Otto-Wasserstein gradient structure for diffusion-type equations
A pair (ρ, j) is said to be in 𝒞ℰ(0, T) if
* ρ∈([0,T];(Ω)) is a curve of nonnegative finite Radon measures defined on Ω, and
* j=(j_t)_t∈[0,T]⊂(Ω;^d) is a measurable family of fluxes with finite action
∫_0^T ∫_Ω| j_t /ρ_t|^2 ρ_t t < ∞,
satisfy the continuity equation (<ref>) in the following sense: For any [s,t]⊂ [0,T],
⟨φ,ρ_t⟩ - ⟨φ,ρ_s⟩ = ∫_s^t ⟨∇φ, j_r⟩ r for all φ∈_c^1(^d).
It is known that if ρ solves (<ref>) with finite action, then ρ is an absolutely continuous curve in (Ω) w.r.t. the 2-Wasserstein distance <cit.>.
A curve ρ∈([0,T];(Ω)) is said to be an (, , ^*)-gradient flow solution of (<ref>) or (<ref>) with initial data ρ_in∈(Ω)∩dom() if
* ρ_0=ρ_in in (Ω);
* there is a measurable family j=(j_t)_t∈[0, T]⊂(Ω; ^d) such that (ρ, j) ∈𝒞ℰ(0, T) with
∫_s^t ∫_Ω(ρ_r, j_r) + (ρ_r) r + (ρ_t) = (ρ_s) for all [s,t]⊂ [0,T],
where
(ρ) inf{lim inf_n→∞^*(ρ_n,-∇'(ρ_n)) : ρ_n⇀ρ weakly in (Ω), sup_n≥ 0(ρ_n) <∞},
i.e. is a lower-semicontinuous envelope of ρ↦^*(ρ,-∇'(ρ));
* the following chain rule inequality holds:
-/ t(ρ_t) ≤(ρ_t, j_t) + (ρ_t) for almost every t∈(0,T).
§.§ Generalized gradient structure for finite volume schemes
We take the point of view that finite volume schemes can be seen as random walks on the graph induced by tessellations. Hence, we consider a random walk on a graph that corresponds to a tessellation (^h, Σ^h). Given an initial law ρ_0^h = ρ_in^h∈(^h), the time marginal law of a random walk satisfies the forward Kolmogorov equation
FKE_h∂_t ρ_t^h = Q^*_h ρ_t^h,
where Q^*_h is the dual of the generator Q_h defined for all bounded functions φ∈(^h) as
(Q_h φ) (K) = ∑_(K,L)∈Σ^h (φ) (K,L) κ^h_K|L, K ∈^h,
where κ : Σ^h →_+ is a bounded jump kernel. We restrict ourselves to random walks satisfying detailed balance, i.e. random walks admitting a stationary measure π^h∈(^h) such that
π^h_K κ^h_K|L = π^h_L κ^h_L|K for all (K, L) ∈Σ^h.
We note that the detailed balance implies, by the ergodic theorem for continuous-time Markov chains, the uniqueness of the stationary measure π^h (see, for instance, <cit.>).
A pair (ρ^h, j^h) is said to be in 𝒞ℰ_h(0, T) if
* ρ^h∈([0,T];(^h)) is a curve of finite measures defined on the graph ^h, and
* j^h = (j_t^h)_t∈[0,T]⊂(Σ^h) is a measurable family of discrete fluxes with finite action
∫_0^T | j^h_t | ( Σ^h) t < ∞,
satisfy the discrete continuity equation (<ref>) in the following sense: For any [s, t] ⊂ [0, T],
∑_K∈^hφ^h_K ρ^h_K(t) - ∑_K∈^hφ^h_K ρ^h_K(s) = ∫_s^t∑_(K,L)∈Σ^h (φ^h)(K, L) j^h_K|L(r) r for all φ^h∈(^h).
A curve ρ^h∈([0,T]; (^h)) is an (_h, _h, _h^*)-generalized gradient flow solution of (<ref>) with initial data ρ_in^h∈(^h)∩dom(_h) if
* ρ^h_0= ρ_in^h in (^h);
* there is a measurable family j^h=(j^h_t)_t∈[0, T]⊂(Σ^h) such that (ρ^h, j^h) ∈𝒞ℰ_h(0, T) with
∫_s^t _h(ρ^h_r, j^h_r) + _h(ρ^h_r) r + _h(ρ^h_t) = _h(ρ^h_s) for all [s,t]⊂ [0,T];
where
_h(ρ^h) inf{lim inf_n→∞_h^*(ρ^h_n,-'_h(ρ^h_n)) : ρ^h_n⇀ρ^h weakly in (^h), sup_n≥ 0_h(ρ^h_n) <∞},
i.e. _h is a lower-semicontinuous envelope of ρ^h↦_h^*(ρ^h,-'_h(ρ^h)).
* the chain rule inequality holds, i.e.
-/ t_h(ρ^h_t) ≤_h(ρ_t^h,j_t^h) + _h(ρ_t^h) for almost every t∈ (0,T).
§.§ Two gradient structures for the Scharfetter–Gummel scheme
Since the Scharfetter–Gummel scheme is a finite volume scheme, it defines a random walk on the state space ^h. Moreover, (<ref>) possesses a generalized gradient flow structure if the Scharfetter–Gummel flux (<ref>) can be recast as the force-flux relation (<ref>) induced by a dual dissipation potential, i.e. if we can express the discrete flux for all K∈^h and (K, L)∈Σ^h as
^h,ρ_K|L = D_2_h^* (ρ^h,-'_ϵ,h(ρ^h) ) (K,L)
with an appropriate dual dissipation potential _ϵ,h^* and the driving energy _ϵ,h defined in (<ref>).
We will see in Section <ref> that in the `cosh' case, the edge activity ϑ^h,ρ depends on the potentials V^h, W^h and ρ^h. This dependence of the dissipation potential on the driving energy can be considered a drawback from the modelling point of view and can cause complications in proving EDP convergence. An in-depth discussion of tilt-dependent gradient systems, where changes in the driving energy can lead to changes in the dissipation potential, is carried out in <cit.>. Fortunately for the Scharfetter–Gummel scheme, it is possible to derive a tilt-independent gradient structure, which is better suited for proving EDP convergence. We present the tilt-independent dissipation potential in Section <ref>.
§.§.§ The cosh gradient structure and its tilt-dependence
Here, we show that the random walk defined by the Scharfetter–Gummel scheme (<ref>) possesses a `cosh' gradient structure.
We follow the strategy introduced in <cit.> and introduce a local equilibrium to arrive at a suitable gradient flow formulation incorporating the aggregation term, so that the scheme indeed fits into the framework developed in <cit.>.
From the discrete energy _h given in (<ref>), we identify its variational derivative as
'_ϵ,h(ρ^h)_K = ϵ(logρ^h_K - logπ^ϵ,h,ρ_K ),
with
π^ϵ,h,ρ_K = K e^- 𝖰_K^h,ρ/ϵ/Z^ϵ,h,ρ, 𝖰_K^h,ρ = V_K^h + ∑_M∈^h W_KM^h ρ_M^h,
and Z^ϵ,h,ρ = ∑_K∈^hK e^-𝖰_K^h,ρ/ϵ is the normalization such that π^ϵ,h,ρ∈(^h).
The `cosh' dual dissipation potential is given for all ρ^h ∈(^h) and ξ^h ∈(Σ^h) by
_ϵ,h^*(ρ^h, ξ^h) = 1/2∑_(K,L)∈Σ^hΨ^*_ϵ (ξ^h_KL) √(u̅^h_K u̅^h_L) κ^ϵ,h,ρ_K|Lπ^ϵ,h,ρ_K, u̅_K^h = ρ_K^h/π^ϵ,h,ρ_K ,
where Ψ_ϵ^*(s) = 4 ϵ^2 (cosh(s/2 ϵ) - 1).
The idea is then to choose a jump kernel κ^ϵ,h,ρ : Σ^h → [0, ∞) in such a way that it satisfies the local detailed balance condition
κ^ϵ,h,ρ_K|Lπ^ϵ,h,ρ_K = κ^ϵ,h,ρ_L|Kπ^ϵ,h,ρ_L for all (K,L)∈Σ^h and all ρ^h ∈(^h)
and allows representing the flux in the gradient form (<ref>).
One possibility is to define the jump kernel as
κ^ϵ,h,ρ_K|L1/|K|τ_K|L^h/exp⟨[|]-𝖰_K^h,ρ/ϵ2 q_K|L^h / ϵ/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ), (K,L)∈Σ^h,
where we recall that τ_K|L^h= |(K|L)| / |x_L - x_K| is the transmission coefficient and
q_K|L^h V^h_L - V^h_K + ∑_M∈^hρ^h_M (W^h_ML - W^h_MK) = 𝖰_L^h,ρ-𝖰_K^h,ρ, (K,L)∈Σ^h.
Notice that the pair (κ^ϵ,h,ρ, π^ϵ,h,ρ) satisfies the local detailed balance condition (<ref>), since τ_K|L^h = τ_L|K and q_K|L^h = - q_L|K.
The edge conductivity is then given by
ϑ^ϵ,h,ρ_K|Lτ_K|L^h/Z^ϵ,h,ρ2 q_K|L^h / ϵ/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ).
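As a consistency check, the local detailed balance condition above can be verified directly on a small example; the Python sketch below uses hypothetical cell volumes and potential values (not taken from the paper) and evaluates both sides of the condition for each face.

import numpy as np

def kappa(Q_K, Q_L, tau, vol, eps):
    # Jump kernel of the `cosh' structure (for Q_K != Q_L):
    # kappa_{K|L} = (tau/|K|) * exp(Q_K/eps) * (2 (Q_L - Q_K)/eps) / (exp(Q_L/eps) - exp(Q_K/eps)).
    q = Q_L - Q_K
    return (tau / vol) * np.exp(Q_K / eps) * (2.0 * q / eps) / (np.exp(Q_L / eps) - np.exp(Q_K / eps))

eps, h = 0.4, 0.2
vol = np.array([h, h, h, h])                   # cell volumes |K| of a 1D chain of four cells
Q = np.array([0.3, 0.1, 0.5, 0.2])             # hypothetical tilt values Q_K = V_K + (W*rho)_K
tau = 1.0 / h                                  # transmission coefficient of each interior face
pi = vol * np.exp(-Q / eps)
pi /= pi.sum()                                 # stationary measure pi_K ~ |K| exp(-Q_K/eps)

for K in range(3):                             # interior faces (K, K+1)
    L = K + 1
    lhs = kappa(Q[K], Q[L], tau, vol[K], eps) * pi[K]
    rhs = kappa(Q[L], Q[K], tau, vol[L], eps) * pi[L]
    assert np.isclose(lhs, rhs)                # kappa_{K|L} pi_K = kappa_{L|K} pi_L
    print(lhs, rhs)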
The kernel defined in (<ref>) satisfies the bound
sup_h>0sup_K∈^h h^2 ∑_L∈^h_Kκ^ϵ,h,ρ_K|L≤ c_κ < ∞, where ^h_K{L∈^h: (K,L)∈Σ^h},
provided {(^h, Σ^h)}_h>0 satisfy (<ref>). Indeed, for any (K,L)∈Σ^h, it holds that
κ^ϵ,h,ρ_K|L = |(K|L)|/|K||x_K-x_L|2 q_K|L^h/ϵ/exp(q_K|L^h/ϵ) - 1 ≤C_d-1 h^d-1/C_d ζ^d+1 h^d+1(1 - q_K|L^h/2 + o(h) ) = O(h^-2).
It is not difficult to see that the non-degeneracy assumption (<ref>) implies that <cit.>
sup_h>0sup_K∈^h#^h_K < ∞,
and thus also the asserted bound (<ref>).
To apply the strategy from <cit.> directly, it is left to show that the choice of κ^ϵ,h,ρ in (<ref>) indeed gives rise to the Scharfetter–Gummel flux (<ref>).
For any ρ^h∈(^h), K∈^h, and (K,L)∈Σ^h, we have the identity (<ref>),
where ^h,ρ is the Scharfetter–Gummel flux given in (<ref>) and _ϵ,h^* is the `cosh' dual dissipation potential with edge conductivity ϑ^ϵ,h,ρ defined in (<ref>).
In particular, the Scharfetter–Gummel scheme (<ref>) possesses the `cosh' gradient flow structure with (<ref>) as the driving energy.
We begin by rewriting the Scharfetter–Gummel flux in (<ref>) using the density u̅^h = ρ^h / π^ϵ,h,ρ with the reference measure π^h,ρ depending on 𝖰^h,ρ:
_K|L^h,ρ = ϵτ_K|L^h/Z^h,ρ( ( q_K|L^h / ϵ) u̅^h_K e^-𝖰^h,ρ_K/ ϵ - (- q_K|L^h / ϵ ) u̅^h_L e^-𝖰^h,ρ_L/ ϵ).
The expression (<ref>) can be simplified, since
(q_K|L^h / ϵ)exp(-𝖰^h,ρ_K / ϵ)
= q_K|L^h exp(-𝖰^h,ρ_K / ϵ)/ϵ( exp(q_K|L^h / ϵ) - 1 )
= q_K|L^h/ϵ⟨[|]exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ)
and, similarly,
(-q_K|L^h / ϵ) exp(-𝖰^h,ρ_L / ϵ) = q_K|L^h/ϵ⟨[|]exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ) ,
therefore
_K|L^h,ρ = τ_K|L^h/Z^h,ρ q_K|L^h/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ)⟨*|u̅^h_K - u̅^h_L = ϵ/2⟨*|u̅^h_K - u̅^h_L ϑ_K|L^ϵ,h,ρ .
On the other hand, we note that for every (K,L)∈Σ^h and ξ^h∈(Σ^h):
D_2^*_ϵ,h(ρ^h,ξ^h)(K,L) = ϵsinh⟨*|ξ^h_K|L/2ϵ√(u̅^h_K u̅^h_L) ϑ^ϵ,h,ρ_K|L.
Recall from (<ref>) and (<ref>) that '_ϵ,h(ρ^h)(K) = ϵlog(u̅^h_K). Inserting ξ^h = - '_ϵ,h(ρ^h), we obtain
D_2 _ϵ,h^* (ρ^h, - '_ϵ,h(ρ^h)) (K, L) = ϵsinh⟨[|]1/2logu̅^h_K/u̅^h_L√(u̅^h_K u̅^h_L) ϑ^ϵ,h,ρ_KL = _K|L^h,ρ,
i.e. identity (<ref>) holds as asserted.
Since the classical Scharfetter–Gummel scheme has the `cosh' gradient-flow formulation, one can ask if it is possible to use the framework of <cit.> to prove the convergence. The necessary assumptions on the invariant measure π^ϵ,h,ρ and the jump intensities κ^ϵ,h,ρ hold true based on the notion of local detailed balance as defined in (<ref>). However, the zero-local-average assumption
∑_L∈^h_Kϑ^ϵ,h,ρ_K|L (x_K - x_L) = 0 for all K∈^h with K∩∂Ω = ∅ does not hold.
In addition, the nonlinear dependency of ϑ^ϵ,h,ρ on ρ seems to make (<ref>) very hard to satisfy, even only asymptotically, and working around this may require strong assumptions on the tessellations.
As a last remark, we emphasize that the edge conductivity ϑ^ϵ,h,ρ defined in (<ref>) depends non-uniformly on the diffusion parameter ϵ>0, which makes it difficult to pass to the limit ϵ→ 0.
The disadvantages of the `cosh' gradient structure mentioned in this section can be seen as due to tilt-dependence as defined in <cit.>. To clarify this further, we decompose the free energy into entropy and potential energies by writing
_ϵ,h(ρ^h) = ϵ_h(ρ^h) + _h^V(ρ^h) + _h^W(ρ^h),
where V^h:_h → and W^h:_h×_h→ symmetric are given and we set
_h(ρ_h) ∑_K∈^hϕ(u^h_K )|K| , where u^h_K ρ^h_K/|K| ;
_h^V(ρ^h) ∑_K∈^h V^h_K ρ^h_K and _h^W(ρ^h) 1/2∑_K, L∈^h×^h W^h_KLρ^h_K ρ^h_L .
Then, we can provide a gradient structure for the Scharfetter–Gummel scheme for all possible potential energies V^h and interaction energies W^h altogether by introducing the set of tilts
_h *_h^V + _h^W | V^h : ^h → , W^h: ^h×^h → symmetric .
We can then recast Lemma <ref> as a derivation of a gradient structure with tilting <cit.> of the type (^h,Σ^h,,_h,_ϵ,h,_h). By recalling that for _h^V + _h^W ∈_h, we find 𝖰^h,ρ = (_h^V)'(ρ^h) + (_h^W)'(ρ^h) as defined in (<ref>) and obtain from (<ref>) the dissipation potential
_ϵ,h(ρ^h,j^h;_h^V + _h^W) 1/2∑_(K,L)∈Σ^hΨ_ϵ ⟨*|j_KL^h/√(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ√(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ, u̅_K^h = ρ_K^h/π^ϵ,h,ρ.
In particular, it depends on the potential energies V^h,W^h through ϑ^ϵ,h,ρ defined in (<ref>) and hence is tilt-dependent. Its undesirable properties explained in Remark <ref> are a direct consequence of the dependency of the gradient structure on the potentials and in particular on the diffusivity ϵ>0.
§.§.§ Tilt-independent gradient structure
In this section, we introduce the tilt-independent gradient structure, which we will study in this manuscript and is one of the main contributions of this article. The gist of this structure is that the dual dissipation potential does not depend on potentials V^h and W^h and more importantly also does not degenerate for small diffusivity ϵ≪ 1.
Based on the cell formula (<ref>), the Scharfetter–Gummel flux in (<ref>) was recast as a kinetic relation for a general force ξ^h∈(Σ^h) in <cit.>, for which we can derive a suitable dual dissipation potential _ϵ,h^*. For doing so, we notice that along a solution of the scheme, we have the force
ξ^h_K|L = - '_ϵ,h(ρ^h)(K,L) = - ⟨[|]ϵlogu^h_L/u^h_K + q_K|L^h , (K,L)∈Σ^h,
and therefore, we find the relation
q_K|L^h = ϵlogu^h_K/u^h_L - ξ^h_K|L = ϵ( log⟨[|]u^h_K e^-ξ^h_K|L / 2ϵ - log⟨[|]u^h_L e^ξ^h_K|L / 2ϵ).
By substituting this relation into (<ref>), we arrive, after some simplifications, at the identity
^h,ρ_K|L
= ϵsinh⟨*|ξ^h_K|L/2ϵΛ_H⟨*|u_K^h e^-ξ^h_K|L/2ϵ,u_L^h e^ξ^h_K|L/2ϵ |K|
!= D_2 _ϵ,h^* (ρ^h, ξ^h) (K,L) ,
where the last equality is a requirement for the new dual dissipation potential and Λ_H denotes the harmonic-logarithmic mean defined in (<ref>).
From the kinetic relation (<ref>) relating the force ξ^h with the flux, one obtains the dissipation potential _h^* as given in (<ref>) with the function α_ϵ^* in (<ref>), by simply integrating over the force. Although α_ϵ^* is only defined as an integral, it has many beneficial properties that are essential for the analysis, which we collect in Lemma <ref> in Appendix <ref>.
Altogether, we obtained yet another gradient structure for the Scharfetter–Gummel scheme.
Since the derivation of the kinetic relation (<ref>) might seem ad hoc, we provide a different derivation of the dissipation potential _ϵ,h^* from the `cosh' dissipation potential _ϵ,h^* defined in (<ref>). To do so, we perform a `de-tilting' technique as explained in <cit.>.
In this way, we can show that we arrived at a tilt-independent gradient structure for the Scharfetter–Gummel scheme.
The Scharfetter–Gummel with flux-force relation (<ref>) is induced by a gradient structure with tilting (^h,Σ^h,, _h,_ϵ,h,_h) with tilt set _h given in (<ref>). Moreover, the dissipation potential _ϵ,h is tilt-independent and given by
_ϵ,h(ρ^h, j^h) = 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ⟨*| u^h_K ,u^h_L, j^h_K|L/τ_K|L^h , u_K^hρ_K^h/|K|,
where α_ϵ is the Legendre dual of α_ϵ^* given in (<ref>) with respect to the third variable.
We follow the construction explained in <cit.>. To do so, we need to make the tilt-dependence of the dual dissipation potential _h^* explicit, for which we use the primal dissipation potential defined in (<ref>) and rewrite (<ref>) as
^*_ϵ,h(ρ^h,ξ^h; _h^V+_h^W) =
1/2∑_(K,L)∈Σ^hΨ^*_ϵ(ξ^h_K|L) √(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ, u̅_K^h = ρ_K^h/π_K^ϵ,h,ρ.
Note, that the tilt-dependence comes through ϑ^ϵ,h,ρ in terms of 𝖰^h,ρ. By inspecting <cit.>, we have to verify the identity
D_2 _ϵ,h^*(ρ^h,ξ^h)(K,L) != D_2_ϵ,h^*⟨*|ρ^h,ξ^h; -ξ^h-ϵ_h(ρ)(K,L) .
To do so, we fix (K,L)∈Σ^h and identify q_K|L^h = 𝖰^h,ρ in ϑ^ϵ,h,ρ_KL to obtain
√(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ = τ_K|L^h √(u_K^h u_L^h)𝖰^h,ρ_KL/exp⟨[|]𝖰^h,ρ_KL/(2ϵ)-exp⟨[|]-𝖰^h,ρ_KL/(2ϵ).
By substituting 𝖰^h,ρ_KL = q_K|L^h = -ξ_K|L^h- ϵlogρ^h(K,L), which amounts to using the identity (<ref>), we observe that
D_2_ϵ,h^*⟨*|ρ^h,ξ^h; - ξ^h -_h(ρ)(K,L)
= ϵτ_K|L^h sinh⟨[|]ξ^h_K|L/2ϵ√(u_K^h u_L^h)log⟨[|]u^h_K e^-ξ^h_K|L / 2ϵ - log⟨[|]u^h_L e^-ξ^h_K|L / 2ϵ/e^-ξ^h_K|L/2ϵ -log√(u^h)(K,L) -e^ξ_K|L/2ϵ+log√(u^h)(K,L)
= α_ϵ⟨*|u^h_K,u^h_L, ξ^h_K|L/2 = D_2_ϵ,h^*(ρ^h,ξ^h)(K,L),
which verifies the claimed identity (<ref>) and the remaining statements from Lemma <ref> follow as argued in <cit.>.
§ VARIATIONAL CONVERGENCE FOR THE TILT-INDEPENDENT STRUCTURE
The strategy of proving the discrete-to-continuum EDP convergence comprises two main steps:
* Prove compactness for the family of the GGF solutions (ρ^h, j^h) of (<ref>) defined in Definition <ref>. This allows us to extract a subsequence converging to a limiting pair (ρ, j).
* Prove liminf inequalities for all the functionals in the energy-dissipation functional _h and recover a limiting energy-dissipation functional :
(ρ, j)≤lim inf_h→ 0_h(ρ^h, j^h).
In Section <ref>, we prove the compactness results required by (1). To establish the liminf inequality for _h from (2), the main effort relates to the Fisher information. Thus, Section <ref> is dedicated to the -convergence of the Fisher information. We conclude with the proof of Theorem <ref> in Section <ref>.
§.§ Compactness
We consider a family {(ρ^h,j^h)}_h>0 of (_h, _h, _h^*)-generalized gradient flow solutions to (<ref>), where the corresponding functionals are defined in (<ref>), (<ref>), and (<ref>) respectively. We also assume the initial data {ρ^h_in}_h>0 to be well-prepared. We set J^h∫_·_t^h t.
The family { J^h }_h>0 is weakly-* compact in ([0,T]×Ω; ^d) and the family { t ↦ | ^h_t | (Ω) }_h>0 is equi-integrable.
In particular, there exists a Borel family (j_t)_t∈[0,T]⊂(Ω;^d) such that
J^h=∫_· _t^h t ⇀^* ∫_· j_t t weakly-* in ([0,T]×Ω; ^d)
for a (not relabelled) subsequence.
The proof is similar to the proof of the related compactness statement for the `cosh' gradient structure <cit.>. For completeness, we present the full proof here.
For almost every t∈(0,T), the reconstruction of the flux is defined as
_t^h = ∑_(K,L) ∈Σ^h j^h_K|L(t) σ_K|L^h,
with σ_K|L^h∈(Ω; ^d) such that |σ_K|L^h|(Ω) ≤ 2dh. The existence of the required σ_K|L^h is proven in <cit.>. We begin by noticing that for almost every t∈(0,T) and any β∈,
_ϵ,h(ρ_t^h, j_t^h) = sup_ξ^h∈(Σ^h){∑_(K,L)∈Σ^hξ_K|L^h j_K|L^h(t) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), ξ^h_K|L/2}
≥β |_t^h|(Ω) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), β sign(j_K|L^h)|σ_K|L^h|(Ω)/2,
where we simply take ξ_K|L^h=β sign(j_K|L^h)|σ_K|L^h|(Ω). Due to Lemma <ref><ref>, we obtain
α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), β sign(j_K|L^h)|σ_K|L^h|(Ω)/2≤1/4√(u_K^h(t) u_L^h(t)) Ψ_ϵ^*⟨*|β|σ_K|L^h|(Ω),
and consequently,
_ϵ,h(ρ_t^h, j_t^h) ≥β |_t^h|(Ω) - c_κ/2h^2Ψ_ϵ^*(2β dh),
with the constant c_κ>0 as defined in (<ref>).
Using the fact that Ψ_ϵ^*(s r) ≤ r^2 Ψ_ϵ^*(s) for s,r∈ with |r|≤ 1 (Ψ_ϵ^* being a convex function with superlinear growth), and minimizing the previous inequality over β∈, we obtain
_ϵ,h(ρ_t^h, j_t^h) ≥c_κ/2sup_β∈{β |_t^h|(Ω)/d c_κ - Ψ_ϵ^*(β )} = c_κ/4Ψ̃_ϵ⟨*||_t^h|(Ω)/d c_κ,
where Ψ̃_ϵ is the Legendre dual of Ψ̃_ϵ^* which, again, is a convex function having superlinear growth. Since (j_t^h)_t∈[0,T] has uniform-in-h finite action, we then obtain
sup_h>0∫_0^T Ψ̃_ϵ⟨*||_t^h|(Ω)/d c_κ t ≤2/c_κsup_h>0∫_0^T_ϵ,h(ρ_t^h, j_t^h) t ≤2/c_κsup_h>0_ϵ,h(ρ^h_in)<∞,
therewith deducing the equi-integrability of the family { t ↦ | ^h_t | (Ω) }_h>0.
One also easily deduces from the previous inequality that
sup_h> 0|J^h|([0,T]×Ω) ≤ 2 d ⟨*|sup_h> 0∫_0^T _ϵ,h(ρ_t^h, j_t^h) t + c_κ T/2Ψ_ϵ^*(1) <∞,
which implies the existence of some J∈((0, T)×Ω) and some subsequence for which J^h ⇀^* J weakly-* in ((0, T)×Ω). Finally, due to the equi-integrability of { t ↦ | ^h_t | (Ω) }_h>0, we deduce that J has the representation J = ∫_· j_t t for a Borel family (j_t)⊂(Ω; ^d).
Let ρ^h ∈(^h) with ^0_h(ρ^h) < ∞, where
_ϵ,h^0 (ρ^h) 2∑_(K,L)∈Σ^hβ_ϵ ( u^h_K, u^h_L ) τ_K|L^h, u_K^h = ρ_K^h/|K|.
Then the reconstructed density û^h satisfies
| D û^h | (Ω) ≤ C √(^0_ϵ,h(ρ^h)),
for some constant C>0 independent of h>0.
Since û^h is a piece-wise constant function on the cells ^h, one can show that
Dû^h = ∑_(K, L)∈Σ^h u^h_K n_KL^d-1|_(K|L) = 1/2∑_(K, L)∈Σ^h (u^h_K - u^h_L) n_KL^d-1|_(K|L).
Therefore, using the Cauchy-Schwarz inequality yields
|Dû^h| (Ω) ≤1/2∑_(K, L)∈Σ^h |u^h_K - u^h_L| |(K|L)|
≤1/2∑_(K, L)∈Σ^h |u^h_K - u^h_L| h τ_K|L^h
≤( ∑_(K,L)∈Σ^h|u^h_L - u^h_K|^2/u^h_L + u^h_Kτ_K|L^h )^1/2⟨*|∑_(K,L)∈Σ^h (u^h_K + u^h_L) h^2 τ_K|L^h ^1/2≤ C √(_ϵ,h^0(ρ^h)),
for some constant C>0 independent of h>0 and Lemma <ref><ref> was used in the last inequality.
With Lemma <ref> and Lemma <ref> at hand, we can prove the strong compactness result.
Let the family of curves {ρ^h}_h>0 be the GGF-solutions of (<ref>) with (_h, _h, _h^*) defined in (<ref>), (<ref>), and (<ref>) respectively. Let sup_h>0_h(ρ^h_in) < ∞. Then there exists u ∈ L^1( (0, T); L^1(Ω)) and a (not relabelled) subsequence such that
û^h_t → u_t in L^1(Ω) for almost every t∈(0,T).
The proof of the proposition can be found in <cit.>.
§.§ Γ-convergence of the Fisher information
The aim of this section is to prove a -convergence result for the discrete Fisher information ρ^h ↦_ϵ,h(ρ^h)^*_ϵ,h(ρ^h,-'_ϵ,h(ρ^h)), where
-'_ϵ,h(ρ^h)(K,L) = 2 ϵlog√(u^h_K / u^h_L) - q_K|L^h.
It will be crucial that we have the decomposition of α_ϵ^* from Lemma <ref><ref>, giving the representation of _h as the sum of three terms
_ϵ,h(ρ^h) = _ϵ,h^0(ρ^h) + _ϵ,h^1 (ρ^h) + _ϵ,h^2 (ρ^h),
where _ϵ,h^0 is given in (<ref>) and _ϵ,h^1 (ρ^h) ϵ/2∑_(K, L)∈Σ^h (u^h_L - u^h_K) q_K|L^h τ_K|L^h,
_ϵ,h^2 (ρ^h) 1/2∑_(K, L)∈Σ^h |q_K|L^h|^2 𝕙_ϵ (u^h_K, u^h_L, q_K|L^h) τ_K|L^h.
This representation resembles the expansion of the continuous counterpart. Indeed, we expect the limit functional to be
_ϵ(ρ) = ^*⟨[|]ρ, -∇ (ϵlog u + 𝖰(ρ) )
= ϵ^2/2∫*∇log⟨*|u e^𝖰(ρ)/ϵ^2 ρ
= 2ϵ^2 ∫*∇√(u)^2 x + ϵ∫∇ u ·∇𝖰(ρ) x + 1/2∫*∇𝖰(ρ) ^2 u x
_ϵ^0(ρ) + _ϵ^1(ρ) + ^2(ρ),
where we use the notation 𝖰(ρ) = V + W*ρ as in the introduction.
The main result of this section is the following theorem.
Assume that a family of tessellations {(^h, Σ^h)}_h>0 satisfies the orthogonality (<ref>).
Up to passing to a subsequence, the family of functionals {_ϵ,h}_h>0 has a -limit _ϵ w.r.t. the L^2-topology taking the form
_ϵ(ρ) =
2ϵ^2 ∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω| ∇𝖰(ρ) |^2 ρ if √(u)∈ H^1(Ω),
+∞ otherwise.
The proof of Theorem <ref> consists of the -convergence result for _ϵ,h^0 and continuous convergence results for _ϵ,h^1 and _ϵ,h^2. Although we use the orthogonality assumption (<ref>) to get the complete result, the convergence of _ϵ,h^0 and _ϵ,h^2 can be established without (<ref>) at the cost of the tensor appearing in the limit. Unfortunately, it is not clear how to identify the limit of _ϵ,h^1 without (<ref>).
We begin with _ϵ,h^0. According to Lemma <ref><ref> the function β satisfies the following bounds
ϵ^2/8 (a - b)^2/((a + b)/2) ≤β_ϵ(a, b) ≤ϵ^2/2 (√(a) - √(b))^2 for a,b>0.
The appearance of such bounds is possible to understand intuitively by noting that in the continuous setting, thanks to the chain rule the following two formulations are equivalent
1/8|∇ u |^2/u = 1/2| ∇√(u)|^2 for √(u)∈ H^1(Ω).
We now recognize the lower bound for β_ϵ as a discretization for the second formulation. We can also expect that (<ref>) has the same -limit as the quadratic functional.
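These bounds are easy to check numerically. The following is a minimal Python sketch (ours, purely illustrative; parameter values are arbitrary) that evaluates β_ε(a,b) = α_ε^*(a, b, -ε log√(b/a)) by quadrature of the defining integral of α_ε^* (see the appendix lemma) and compares it with the two-sided bound above.

import numpy as np

def log_mean(s, t):
    # logarithmic mean Lambda(s, t)
    return s if np.isclose(s, t) else (s - t) / (np.log(s) - np.log(t))

def harm_log_mean(s, t):
    # harmonic-logarithmic mean Lambda_H(s, t) = s t / Lambda(s, t)
    return s * t / log_mean(s, t)

def beta_eps(a, b, eps, n=4000):
    # beta_eps(a, b) = alpha_eps^*(a, b, -eps * log(sqrt(b / a))), by quadrature
    xi = -eps * np.log(np.sqrt(b / a))
    xs = np.linspace(0.0, xi, n)
    vals = np.sinh(xs / eps) * np.array(
        [harm_log_mean(a * np.exp(-x / eps), b * np.exp(x / eps)) for x in xs])
    return eps * np.trapz(vals, xs)

a, b, eps = 4.0, 1.0, 0.7
lower = eps**2 / 4.0 * (a - b)**2 / (a + b)
upper = eps**2 / 2.0 * (np.sqrt(a) - np.sqrt(b))**2
print(lower, beta_eps(a, b, eps), upper)   # lower <= beta_eps <= upper

Both bounds coincide to leading order as a → b, so the estimate is sharp in that regime.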
The proof of -convergence for _ϵ,h^0 follows the localization method. The corresponding theory is covered in <cit.>, and for the application of the localization method in the setting close to ours, see <cit.>. The method is based on considering the localized version of the functional _ϵ,h^0 restricted to an open set A⊂Ω
_ϵ,h (v^h, A) ∑_(K, L)∈Σ^h|_Aβ_ϵ( (v^h_K)^2, (v^h_L)^2 ) τ_K|L^h,
where Σ^h|_A { (K, L) ∈Σ^h: K,L∈^h|_A } and ^h|_A { K ∈^h : K∩ A ≠∅}.
We define for any open set A⊂Ω
_ϵ,sup(v, A) -lim sup_h→ 0_ϵ,h (v, A) = inf{lim sup_h→ 0_ϵ,h (v_h, A) : v_h → v }.
In the next lemma, we summarize the properties of _ϵ,sup, which is necessary to apply the representation theorem from <cit.>. Specifically, we prove that _ϵ,sup is an inner regular, subadditive, and local functional satisfying the lower and upper Sobolev bounds. The proof follows very closely the strategy from <cit.> and leverages the quadratic comparison of the function β_ϵ noted above in (<ref>).
The functional _ϵ,sup defined in (<ref>) has the following properties
* Inner regularity: For any v∈ H^1(Ω, μ) and for any A∈ it holds that
sup_A' ⋐ A^μ_ϵ,sup(v, A') = ^μ_ϵ,sup(v, A);
* Subadditivity: For any v∈ H^1(Ω, μ) and for any A, A', B, B' ∈ such that A' ⋐ A and B' ⋐ B it holds that:
^μ_ϵ,sup(v, A'∪ B') ≤^μ_ϵ,sup(v, A) + ^μ_ϵ,sup(v, B);
* Locality: For any A∈ and any v, ψ∈ H^1(Ω, μ) such that v=ψ μ-a.e. on A there holds
^μ_ϵ,sup(v, A) = ^μ_ϵ,sup(ψ, A).
* Sobolev bounds: For any v∈ H^1(Ω) and an open set A⊂Ω
c∫_A | ∇ v |^2 x ≤_ϵ,sup(v, A) ≤ C∫_A | ∇ v |^2 x,
for some c, C>0 independent of v and A.
In the following, we drop the subscript ϵ.
Upper bound. By the upper bound shown in Lemma <ref>(f), it holds that
_sup(v, A) ≤ϵ^2/2∑_(K,L)∈Σ^h|_A( v^h_L - v^h_K )^2 τ_K|L^h.
Then the required upper bound follows from <cit.>.
Properties of _sup as a set functional. The proof of inner regularity, subadditivity, and locality for _sup follows very closely the corresponding proofs in <cit.>.
Lower bound. Let {v_h}_h>0∈ L^2(Ω) be a sequence with v^h → v in L^2(Ω) such that
_sup(v, A) = lim sup_h→ 0_h(v_h, A).
We fix an arbitrary r>0 and denote A_r { x∈ A : dist(x, ∂ A) > r }. Let η∈ℝ^d be such that |η| < r; then, by the argument as in <cit.>,
∫_A_r | v_h(x+η) - v_h(x) |^2 x ≤ C |η|^2 ∑_(K, L)∈Σ^h|_A_r| v_h(L) - v_h(K) |^2 τ_K|L^h.
Using the lower bound for β_ϵ from Lemma <ref><ref>
(v_h,A_r) =∑_(K, L)∈Σ^h|_A_rβ_ϵ((v_h(K))^2, (v_h(L))^2) τ_K|L^h ≥ϵ^2/4∑_(K, L)∈Σ^h|_A_r | v_h(L) - v_h(K) |^2 τ_K|L^h
and passing to the limit superior as h→0 then yields
_sup(v, A_r) ≥ c v(·+η) - v ^2_L^2(A_r)/|η|^2≥ c∫_A_r| ∇ v |^2 x for v∈ H^1(Ω).
Due to the inner regularity property, we conclude _sup(v, A) ≥ c∫_A | ∇ v |^2 x for v∈ H^1(Ω).
We aim to find an integral representation for _sup in the form
_ϵ,sup(v, A) = ∫_A f_ϵ(x, v, ∇ v) x, v∈ H^1(A).
We will prove that the functions ϕ^h_x,w,ξ (K) = w + ⟨ξ, x_K - x ⟩ with some fixed x∈Ω, w∈, ξ∈^d and x_K = _K x x are almost minimizers for _ϵ,h.
The family of functions {ϕ^h_x,w,ξ}_h>0 with x∈Ω, w∈, ξ∈^d are almost minimizers for _ϵ,h, i.e.
lim_h→ 0( _ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) - M_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) ) = 0,
for a cube Q_r(x) with the edge length r>0 and the center in x∈Ω and where
M_ϵ,h (v^h, A) inf{_ϵ,h (w^h, A) : w^h on ^h|_A with w^h = φ^h on ^h|_A^c}.
Let ψ^h be the minimizer for M_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)). The convexity of _ϵ,h yields
0 ≤_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) - _ϵ,h (ψ^h, Q_r(x))
≤ D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x)) [ϕ^h_x,w,ξ - ψ^h].
We now calculate the variation of _ϵ,h(·, A) at some v^h∈^^h, fixed open set A⊂Ω in the directions w^h∈^^h such that w^h_K = 0 for K∈^h|_A^c. Here, we use Lemma <ref><ref> that states the existence of the directional derivatives for _+ ×_+ ∋ (a, b) ↦β_ϵ (a^2, b^2):
D_ϵ,h (v^h, A)[w^h]
= 2∑_(K,L)∈Σ^h|_A[ ∂_1 β_ϵ((v^h_K)^2, (v^h_L)^2) v^h_K w^h_K + ∂_2 β_ϵ((v^h_K)^2, (v^h_L)^2) v^h_L w^h_L ] τ_K|L^h
= 4 ∑_(K,L)∈Σ^h|_A w^h_K v^h_K ∂_1 β_ϵ((v^h_K)^2, (v^h_L)^2) τ_K|L^h.
where we used the fact that
∂_1 β_ϵ (a, b) = ϵ^2/4∫_b^a b/z Λ(z, b) z = ∂_2 β_ϵ (b, a).
We denote for the moment
γ(a, b) a ∂_1 β_ϵ (a^2, b^2)
= ϵ^2/4∫_b^2^a^2a b^2/z Λ(z, b^2) z
and perform Taylor expansion in the first variable
γ(a, b) = γ(b, b) + ∂_1 γ(a, b)|_a=b (b - a) + ∂_1^2 γ(a, b)|_a=b (b - a)^2 + o( (a-b)^2).
Direct calculations provide
∂_1 γ(a, b) = ϵ^2/4(∫_b^2^a^2b^2/z Λ(z, b^2) z - a b^2/a^2 Λ(a^2, b^2) 2a )
= ϵ^2/4(∫_b^2^a^2b^2/z Λ(z, b^2) z - 2 b^2/Λ(a^2, b^2)),
and thus, ∂_1 γ(a, b)|_a=b = - ϵ^2 / 2. Calculating the second derivative, we obtain
∂_1^2 γ(a, b) = ϵ^2/4( b^2/a^2 Λ(a^2, b^2) 2a + 2 b^2/Λ^2(a^2, b^2)∂_1 Λ(a^2, b^2) 2a )
= ϵ^2/42 b^2/a Λ(a^2, b^2)( 1 - 2 a^2 - Λ(a^2, b^2)/a^2 - b^2) a → b⟶ 0.
Therefore,
γ(a, b) = -ϵ^2/2 (b - a) + o( (a-b)^2 ).
Inserting this expansion into the variation of _ϵ,h yields
D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x))[w^h]
= 4 ∑_(K,L)∈Σ^h|_Q_r(x) w^h_K ( -ϵ^2/2⟨ξ, x_L - x_K ⟩ + o( h^2 ) ) τ_K|L^h.
Since for any admissible tessellation, ∑_L∈^h_K (x_L - x_K) τ_K|L^h = 0 for K∈^h|_A \^h|_A^c, we obtain
| D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x))[w^h] | ≤ o(1)_h→ 0 C ∑_K∈^h|_Q_r(x) |w^h_K | |K|,
which proves the assertion.
We now split the functional _ϵ,h into the quadratic part and the error term, i.e.
_ϵ,h(v^h) = ϵ^2/2∑_(K,L)∈Σ^h( v^h_L - v^h_K )^2 τ_K|L^h - ∑_(K,L)∈Σ^h e_ϵ (v^h_K, v^h_L) τ_K|L^h,
where we denote e_ϵ(a,b)=ϵ^2/2( a - b )^2 - β_ϵ(a^2, b^2). The first observation to make is that the error term vanishes in the -limit.
Let x∈Ω, w∈, ξ∈^d be fixed. For the discrete functions ϕ^h_x,w,ξ (K) = w + ⟨ξ, x_K - x ⟩ for all K∈^h, the following convergence holds
lim_h→ 0∑_(K,L)∈Σ^h|_Q_r(x) e ( ϕ^h_x,w,ξ(K), ϕ^h_x,w,ξ(L) ) τ_K|L^h = 0.
We recall that e_ϵ(a, b) = ϵ^2/2 (a - b)^2 - β_ϵ (a^2, b^2). Lemma <ref><ref> yield the following bound
e_ϵ(a, b) ≤ϵ^2/2 (a - b)^2 - ϵ^2/4(a^2 - b^2)^2/a^2 + b^2
= ϵ^2/4 (a - b)^2 2(a^2 + b^2) - (a + b)^2/a^2 + b^2
= ϵ^2/4(a - b)^4/a^2 + b^2.
Without loss of generality, we assume that w=0. If ϕ^h_x,ξ(K) = ϕ^h_x,ξ(L) = 0, then we clearly have that e( ϕ^h_x,ξ(K), ϕ^h_x,ξ(L))=0 and we do not need to take these terms into account. Thus, we only need to consider the edges Σ^h|_Q_r(x) for which ϕ^h_x,ξ(K) ≥ 0, ϕ^h_x,ξ(L) > 0 or ϕ^h_x,ξ(K) > 0, ϕ^h_x,ξ(L) ≥ 0.
Let δ > 0 be arbitrary and define
Σ^h_δ{ (K, L)∈Σ^h|_Q_r(x): min⟨*| | ϕ^h_x,ξ(K) |, | ϕ^h_x,ξ(L) | > δ |ξ| }.
Using the non-degeneracy of the tessellation, we get
∑_(K,L)∈Σ^h_δ e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h
≤ C ϵ^2/4∑_(K,L)∈Σ^h_δ|ξ|^4 h^4/|ξ|^2 δ^2 h^d-2≤ C ϵ^2 |ξ|^2 h^2/δ^2 |Ω|.
The remainder of the sum can be bounded with the inequality e_ϵ(a, b) ≤ϵ^2/2 (a - b)^2 to obtain
∑_(K,L)∈Σ^h \Σ^h_δ e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h
≤ϵ^2/2∑_(K,L)∈Σ^h \Σ^h_δ| ⟨ξ, x_L - x_K ⟩|^2 τ_K|L^h
≤ C ϵ^2/2 |ξ|^2 h^d | Σ^h \Σ^h_δ|.
If (K,L)∈Σ^h \Σ^h_δ, then either |⟨ξ, x_K - x ⟩| ≤ |ξ| δ or |⟨ξ, x_L - x ⟩| ≤ |ξ| δ, and therefore,
| Σ^h \Σ^h_δ |
≤ C_| { K ∈^h|_Q_r(x) : |⟨ξ, x_K - x ⟩ | ≤ |ξ| δ}| C_ |^h_δ|.
The inequality |⟨ξ, x_K - x ⟩| ≤ |ξ| δ means that the point x_K lies within distance δ from the hyperplane passing through x with normal vector ξ. Employing the non-degeneracy assumption again, we get
| Σ^h \Σ^h_δ | ≤ C_ |^h_δ| ≤ C_C_d-1δ 2√(d) r^d-1/C_d (ζ h)^d = C δ r^d-1/h^d.
Hence, the sum over all (K, L)∈Σ^h has the following bound
∑_(K,L)∈Σ^h e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h
≤ C ϵ^2 |ξ|^2 ( h^2/δ^2 + δ r^d-1).
For d ≥ 2 we choose δ(h) = √(h) for all h > 0 to obtain the asserted limit.
Inserting the functions ϕ^h_x,w,ξ into the quadratic part of _ϵ,h yields
_ϵ,h(ϕ^h_x,w,ξ)
= ϵ^2/2∑_K∈^h⟨ξ, ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K) ξ⟩ |K| = ϵ^2/2∫_^d⟨ξ, ^h(x) ξ⟩ x
with the tensor
^h(x) ∑_K∈^h_K(x) ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K).
The properties of ^h are summarized in the following proposition.
The diffusion tensor (<ref>) has the following properties:
* ^h(x) is symmetric and positive-definite for any x∈Ω;
* {^h}_h>0 is bounded in L^∞ (Ω; ^d× d):
for all the components ^h_ij it holds that sup_h>0^h_ij_L^∞(Ω) < ∞;
* {^h}_h>0 has a weakly-* limit in the σ(L^∞, L^1) topology, i.e. there exist a subsequence and a tensor ∈ L^∞ (Ω; ^d× d) such that
lim_h→ 0∫_Ω^h_ij f x = ∫_Ω_ij f x for all f∈ L^1(Ω).
Proposition <ref> guarantees that there exists a limiting tensor , but, for an arbitrary tessellation, is not necessarily the identity. In the next proposition, we show that (<ref>) is a sufficient condition to ensure that a family of tessellations converges to the identity matrix.
Let a family of tessellations { (^h, Σ^h )}_h>0 satisfy the orthogonality assumption (<ref>), then the family of tensors {^h}_h>0 defined in (<ref>) is such that
^h_ij⇀^* 2 δ_ij, weakly-* in σ(L^∞, L^1)
up to a subsequence. Thus, =2.
Consider a function ϕ^i(x) = x^i for x∈Ω, i=1,…,d. The projection of ϕ^i on ^h is given by ϕ^i,h_K = x^i_K for K∈^h and corresponding piece-wise constant reconstruction is
ϕ̂^i,h (x) = ∑_K∈^h x^i_K _K(x).
It is not difficult to show that the family {ϕ̂^i,h}_h>0 is bounded uniformly in BV(Ω). Firstly,
ϕ̂^i,h_L^1(Ω) = ∑_K∈^h |x^i_K| |K|
≤sup_x∈Ω |x^i| |Ω|.
Secondly, as in the proof of Lemma <ref>, we have the uniform bound on translations
∫_Ωψ(x)( ϕ̂^i,h(x - η) - ϕ̂^i,h(x) ) x ≤∑_(K,L)∈Σ^h|ϕ^i,h_L - ϕ^i,h_K| |(K|L)| |η|
≤ C |η| ∑_K∈^h |K| = C |η| |Ω|,
for an arbitrary ψ∈ C^1_c(Ω). Therefore, we can conclude that
|D ϕ̂^i,h|(Ω) ≤ C |Ω| for all h>0,
for some constant C>0 independent of h>0.
This BV bound implies that (up to a subsequence) there exists ϕ^i∈ BV(Ω) such that ϕ̂^i,h→ϕ^i in L^1(Ω) and Dϕ̂^i,h⇀^* Dϕ^i weakly-* in (Ω; ^d). On the other hand, we know that ϕ̂^i,h→ x^i in L^1(Ω). Therefore,
∫_Ωφ (D_jϕ̂^i,h)( x) = -∫_Ω∂_jφ ϕ̂^i,h x ⟶ -∫_Ω∂_jφ x^i x = ∫_Ωφ δ_ij x
for all φ∈ C_c^1(Ω), which consequently yields D_j ϕ^i= δ_ij.
On the other hand, using the piecewise constant structure of ϕ̂^i,h, we can write its distributional derivative explicitly as
D ϕ̂^i,h = 1/2∑_(K, L)∈Σ^h (x^i_L - x^i_K) ν_KL^d-1|_(K|L),
where ν_KL denotes the outer normal of the face (K|L).
Due to the orthogonality assumption, we have that ν_KL = (x_L - x_K)/|x_L - x_K|, and hence
D ϕ̂^i,h = 1/2∑_(K, L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L - x_K) ^d-1|_(K|L)/|(K|L)|.
Notice that D ϕ̂^i,h is related to the tensor ^h in the following way: For any φ∈ C_c^1(Ω),
∫_Ωφ(x) D_jϕ̂^i,h( x)
= 1/2∑_(K,L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L^j - x_K^j) _(K|L)φ(y) ^d-1( y)
= 1/2∑_(K,L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L^j - x_K^j) φ(x_K) + o(1)
= 1/2∑_K ∫_K ∑_L∈^h_Kτ_K|L^h/|K| (x^i_L - x^i_K) (x_L^j - x_K^j) φ(x) x + o(1)
= 1/2∫_Ω^h_ij(x) φ(x) x + o(1).
Therefore, passing to the limit then yields _ij = 2 δ_ij. In particular, = 2.
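For a uniform Cartesian grid, the proposition can also be verified by a direct computation: an interior cell K has volume |K| = h^d, its 2d neighbours sit at x_K ± h e_i, and τ_K|L^h = |(K|L)|/|x_L - x_K| = h^(d-1)/h, so the tensor defined in (<ref>) equals 2𝕀 exactly, for every h. A small Python sketch of this computation (ours, illustrative only):

import numpy as np

def cartesian_tensor(h, d):
    # tensor from (<ref>) at an interior cell K of a uniform Cartesian grid:
    # sum over the 2d neighbours L of (tau_{K|L} / |K|) (x_L - x_K) x (x_L - x_K)
    tau = h**(d - 1) / h          # |(K|L)| / |x_L - x_K|
    vol_K = h**d                  # |K|
    T = np.zeros((d, d))
    for i in range(d):
        for sign in (1.0, -1.0):
            e = np.zeros(d)
            e[i] = sign * h       # x_L - x_K
            T += (tau / vol_K) * np.outer(e, e)
    return T

print(cartesian_tensor(h=0.1, d=3))   # 2 * identity, independently of h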
In the remainder of this section, we will assume that the family of tessellations satisfy (<ref>).
We are now in the position to summarize the convergence statement for _ϵ,h^0.
Up to a subsequence, the family of functionals {_ϵ,h^0}_h>0 has a -limit _ϵ with respect to the L^2-topology taking the form
_ϵ^0(ρ) =
2ϵ^2∫_Ω| ∇√(u)|^2 x if √(ρ/ x)√(u)∈ H^1(Ω),
+∞ otherwise.
To complete the proof of Theorem <ref>, we present the continuous convergence results for _ϵ,h^1 and _ϵ,h^2. As preparation, we establish the relation between q^h and the continuous potentials V and W.
Let W satisfy (<ref>) and the family {ρ^h∈(^h)}_h>0 be such that
ρ̂^h/^d→ρ/^d in L^1(Ω), with sup_h>0∫_Ωϕ⟨*|ρ̂^h/^d^d < ∞,
where ϕ(s) = s log s - s + 1 is the entropy density.
Then the following relation holds:
q_K|L^h = ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) + o(h), for any x_KL∈ K∪ L,
where 𝖰(ρ) = V + W∗ρ. Moreover, q_K|L^h has the following two integral approximations
q_K|L^h = _K ∇𝖰 (ρ̂^h) (x) x · (x_L - x_K) + o(h)
and
q_K|L^h = _(K|L)∇𝖰 (ρ̂^h) (x) ^d-1( x) · (x_L - x_K) + o(h).
Since ∇ V is uniformly continuous on Ω, we obtain that
V(x_L) - V(x_K) = ∇ V(x_KL) · (x_L - x_K) + o(h),
where x_KL is some point in K∪ L.
The part of q_K|L^h related to the interaction potential is
∑_M∈^hρ^h_M ( W(x_L - x_M) - W(x_K - x_M) )
= ∑_M∈^h
M≠ K,Lρ^h_M ( W(x_L - x_M) - W(x_K - x_M) )
+ (W(x_L - x_K) - W(0)) ρ^h_K + (W(0) - W(x_K - x_L)) ρ^h_L.
The later terms are bounded as
|W(x_L - x_K) - W(0)| ρ^h_K + |W(0) - W(x_K - x_L)| ρ^h_L ≤ 2 h Lip(W) sup_x∈Ωρ̂^h(B_h(x)).
We intend to show that sup_x∈Ωρ̂^h (B_h(x)) → 0. Using the Legendre-duality, we obtain
∫_Ωϕ(û^h(z)) z ≥βρ̂^h(B_h(x)) - ϕ^*(β) ^d(B_h(x)) for any β>0,
where ϕ(s) = s log s - s + 1 is the entropy density. In particular, we obtain
sup_x∈Ωρ̂^h(B_h(x)) ≤1/β{sup_h>0∫_Ωϕ(û^h(z)) z + ϕ^*(β) C_d (3h)^d} for any β>0.
Therefore, the limsup as h→ 0 yields
0≤lim sup_h→ 0sup_x∈Ωρ̂^h(B_h(x)) ≤1/βsup_h>0∫_Ωϕ(û^h(z)) z.
Since β>0 was arbitrary, we can send β→∞ to obtain the required limit, and thus
(W(x_L - x_K) - W(0)) ρ^h_K + (W(0) - W(x_K - x_L)) ρ^h_L = o(h).
For M≠ K,L, we choose an arbitrary x_KL∈ K ∪ L to obtain
W(x_L - x_M) - W(x_K - x_M) = ∫_0^1 ∇ W ((1-λ) x_K + λ x_K - x_M) λ· (x_L - x_K)
= ∇ W (x_KL - x_M) · (x_L - x_K) + o(h).
We now return to the whole expression for q_K|L^h and write
q_K|L^h = ∇ V(x_KL) · (x_L - x_K) + ∑_M∈^h, M≠ K,Lρ^h_M _M ∇ W (x_KL - x) x · (x_L - x_K) + o(h)
= ∇ V(x_KL) · (x_L - x_K) + ∫_Ω\K∪ L∇ W (x_KL - x) ρ̂^h ( x) · (x_L - x_K) + o(h)
= ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) - ∫_K∪ L∇ W (x_KL - x) ρ̂^h ( x) · (x_L - x_K) + o(h).
In a similar way as above, we obtain
| ∫_K∪ L∇ W (x_KL - x) ρ̂^h ( x) | ≤Lip (W) sup_x∈Ωρ̂^h(B_3h (x)) ⟶ 0 as h→ 0,
therefore,
q_K|L^h = ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) + o(h).
To show the integral representations (<ref>) and (<ref>), we note that ∇𝖰 (ρ̂^h) converges uniformly to ∇𝖰 (ρ). Indeed,
| ∇𝖰 (ρ̂^h)(x) - ∇𝖰 (ρ)(x) |
≤| ∫_Ω∇ W (x - y) (ρ̂^h - ρ) ( y) | ≤Lip (W) û - u _L^1(Ω).
The uniform convergence implies that the family {∇𝖰 (ρ̂^h) }_h>0 is uniformly equicontinuous. Hence,
| ∇𝖰(ρ̂^h)(x_KL) - _K ∇𝖰(ρ̂^h)(x) x |
≤_K | ∇𝖰(ρ̂^h)(x_KL) - ∇𝖰(ρ̂^h)(x) | x = o(1)
and (<ref>) follows. The same argument works for (<ref>).
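The approximation order in the lemma can also be observed numerically. Below is a one-dimensional sketch (ours; the smooth potentials V, W and all names are chosen only for illustration) comparing the discrete difference q_K|L = V(x_L) - V(x_K) + ∑_M ρ_M ( W(x_L - x_M) - W(x_K - x_M) ) used in the proof with ∇𝖰(ρ̂^h)(x_K)·(x_L - x_K); the gap divided by h tends to zero, consistent with the o(h) remainder.

import numpy as np

def V(x):  return 0.5 * x**2                      # confinement potential (hypothetical)
def dV(x): return x
def W(x):  return np.cos(2.0 * np.pi * x)         # interaction potential (hypothetical)
def dW(x): return -2.0 * np.pi * np.sin(2.0 * np.pi * x)

def gap(n):
    h = 1.0 / n
    xc = (np.arange(n) + 0.5) * h                 # cell centres x_K on [0, 1]
    rho = np.exp(-10.0 * (xc - 0.3)**2)
    rho /= rho.sum()                              # discrete probability (rho_K)
    K, L = n // 2, n // 2 + 1                     # one interior edge (K|L)
    q = V(xc[L]) - V(xc[K]) + np.sum(rho * (W(xc[L] - xc) - W(xc[K] - xc)))
    gradQ_K = dV(xc[K]) + np.sum(rho * dW(xc[K] - xc))    # d/dx (V + W * rho)(x_K)
    return abs(q - gradQ_K * (xc[L] - xc[K])) / h

for n in (50, 100, 200, 400):
    print(n, gap(n))   # decays like h, i.e. the remainder is o(h)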
Let the family {ρ^h
∈(^h) }_h>0 be such that sup_h>0_ϵ,h^0(ρ^h) < ∞. Moreover, suppose that there exists u∈ W^1,1(Ω) such that
ρ̂^h/^d→ uρ/^d in L^1(Ω), and Dû^h ⇀^* ∇ u weakly-* in (Ω; ^d).
Then
lim_h→ 0_ϵ,h^1(ρ^h) = ϵ∫_Ω∇ u ·∇𝖰(ρ) x.
First, we show that _ϵ,h^1 is uniformly bounded. Using the Cauchy-Schwartz inequality yields
_ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) q_K|L^h τ_K|L^h
≤ c_pot√(_ϵ,h^0)( ∑_(K,L)∈Σ^h (u^h_L + u^h_K) h^2 τ_K|L^h )^1/2
where we used the estimate (<ref>). Since ∑_L∈^h_K h^2 τ_K|L^h ≤ C_τ |K|, we then obtain the uniform bound.
Similarly, one can show that
sup_h>0∑_(K,L)∈Σ^h |u^h_L - u^h_K| |(K|L)| < ∞.
We aim to rewrite _ϵ,h^1 in an integral form, which will be convenient for passing to the limit h→ 0. We begin by observing that τ_K|L^h can be rewritten as
τ_K|L^h = |(K|L)|/|x_L - x_K| = 1/|x_L - x_K|^d-1 ((K|L)).
Inserting this expression for τ_K|L^h into _ϵ,h^1 yields
_ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) q_K|L^h/|x_L - x_K|∫_(K|L)^d-1 ( x).
The representation (<ref>) for q_K|L^h derived in Lemma <ref> yields
q_K|L^h/|x_L - x_K|∫_(K|L)^d-1
= ∫_(K|L) ∇𝖰 (ρ̂^h) (x) ^d-1( x) ·ν_KL + |(K|L)| o(1)|_h→ 0,
where ν_KL=(x_L - x_K)/|x_L - x_K| is the outer normal of the face (K|L). Inserting the obtained expression into _ϵ,h^1, we have
_ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) ∫_(K|L)∇𝖰 (ρ̂^h) ^d-1·ν_KL
+ o(1)|_h→ 0∑_(K,L)∈Σ^h (u^h_L - u^h_K) |(K|L)|,
where the last sum is bounded uniformly in h>0 by (<ref>).
Altogether, we arrive at
_ϵ,h^1(ρ^h) = ϵ/2∫_Ω∇𝖰 (ρ̂^h)(x) ·∑_(K,L)∈Σ^h (u^h_L - u^h_K) ν_KL ^d-1|_(K|L) ( x) + o(1)|_h→ 0.
In this expression, one may already recognize the distributional derivative of the density û^h. Indeed, from the definition of û^h, we get
Dû^h = ∑_K∈^h u^h_K D_K = ∑_K∈^h u^h_K n_K ^d-1|_∂ K,
where n_K is the inner normal for the cell K∈^h. It holds that
n_K ^d-1|_∂ K = ∑_L∈^h_K n_KL^d-1|_(K|L) for K∈^h,
where n_KL is an inner normal to the face (K|L). Using symmetry, we find
Dû^h = ∑_(K, L)∈Σ^h u^h_K n_KL^d-1|_(K|L) = 1/2∑_(K, L)∈Σ^h (u^h_K - u^h_L) n_KL^d-1|_(K|L).
If (^h, Σ^h) possesses the orthogonality property, i.e.
n_KL = x_K - x_L/|x_K - x_L| = - ν_KL,
we can write
_ϵ,h^1(ρ^h) = ϵ∫_Ω∇𝖰 (ρ̂^h)(x) · Dû^h ( x) + o(1)|_h→ 0.
Moreover, since ∇𝖰 (ρ̂^h) converges to ∇𝖰 (ρ) uniformly as h→ 0, we further obtain
_ϵ,h^1(ρ^h) = ϵ∫_Ω∇𝖰 (ρ)(x) · Dû^h ( x) + o(1)|_h→ 0.
Passing h→ 0 and using the convergence Dû^h ⇀^* ∇ u in (Ω; ^d) then yields the assertion.
Let the family {ρ^h∈(^h) }_h>0 be such that
ρ̂^h/^d→ uρ/^d in L^1(Ω), with u∈(Ω; ^d).
Then
lim_h→ 0_ϵ,h^2(ρ^h) = 1/2∫_Ω| ∇𝖰(ρ) |^2 ρ.
Using the symmetry, we rewrite _ϵ,h^2(ρ^h) as
_h^2(ρ^h) = ∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 u^h_K ∫_0^1 𝔥(-λ q_K|L^h/ϵ) (1-λ)λ.
The function 𝔥 has the following Taylor expansion for s≪ 1
𝔥 (s) = 1/2 + s/6 + o(s^2).
Taking into account that |q_K|L^h| ≤ c_pot h (cf. estimate (<ref>)), we have that
∫_0^1 𝔥(-λ q_K|L^h/ϵ) (1-λ)λ
= 1/4 + O(h/ϵ)|_h→ 0 .
Substituting the last expression into _ϵ,h^2 yields
_ϵ,h^2(ρ^h) = 1/4∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 u^h_K + o(1)_h→ 0.
Now, notice that
| ( ∇𝖰⟨ρ̂^h|(x_K) · (x_L - x_K) )^2 - _K ( ∇𝖰⟨ρ̂^h|(x) · (x_L - x_K) )^2 x |
≤ C h^2 sup_x∈ K| ∇𝖰⟨ρ̂^h |(x_K) - ∇𝖰⟨ρ̂^h |(x) | = o ⟨ h^2 |.
Using the representation (cf. (<ref>))
q_K|L^h = _K ∇𝖰⟨ρ̂^h |(x) x · (x_L - x_K) + o(h),
we can then rewrite _ϵ,h^2 as
_ϵ,h^2(ρ^h) = 1/4∑_(K, L)∈Σ^h u^h_K τ_K|L^h _K ( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2 x + o(1)_h→ 0
= 1/4∫_Ωû^h(x) ∑_(K, L)∈Σ^hτ_K|L^h/|K|_K(x) ( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2 x + o(1)_h→ 0
= 1/4∫_Ωû^h(x) ⟨∇𝖰⟨ρ̂^h |(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩ x + o(1)_h→ 0,
where we recall the tensor
^h(x) = ∑_K∈^h_K (x) ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K).
The product ⟨∇𝖰⟨ρ̂^h |(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩ has an L^∞ bound uniformly in h>0, since for any x∈Ω, there is some K for which x∈ K and
| ⟨∇𝖰⟨ρ̂^h|(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩|
≤∑_L∈_K^hτ_K|L^h/|K|( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2
≤ c_pot^2 ∑_L∈_K^h|(K|L)| |x_K-x_L|/|K|≤ c_pot^2C_d-1/C_dζ^d+1sup_h>0sup_K∈^h#_K^h <∞.
It is left to show that, for any f∈ L^1(Ω), we have the convergence
lim_h→ 0∫_Ω f ⟨∇𝖰 ( ρ̂^h ), ^h ∇𝖰 ( ρ̂^h ) ⟩ x = ∫_Ω f | ∇𝖰 ( ρ ) |^2 x.
We consider the limit component-wise
lim_h→ 0∫_Ω f ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) ^h_ij x
= lim_h→ 0∫_Ω f ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ ) ^h_ij x
+ lim_h→ 0∫_Ω f [ ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) ] ^h_ij x,
where f ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ ) ∈ L^1(Ω) and, since ^h_ij⇀^* 2 δ_ij in σ(L^∞, L^1) by Proposition <ref>, the first term converges to the expected limit. For the error term, we notice that
| ∫_Ω f [ ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) ] ^h_ij x|
≤∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) _supf_L^1^h_ij_L^∞→ 0 as h→ 0,
due to the uniform convergence of ∇𝖰 ( ρ̂^h ) to ∇𝖰 ( ρ ).
§.§ EDP convergence
A pair (ρ^h, j^h)∈𝒞ℰ_h(0,T) is said to converge to a pair (ρ, j)∈𝒞ℰ(0,T) if the pair of reconstructions (ρ̂^h, ^h)∈𝒞ℰ(0,T) defined as in (<ref>) converges in the following sense:
* ρ̂^h_t/^d →ρ_t/^d in L^1(Ω) for almost every t∈[0, T],
* ∫_·_t^h t ⇀^* ∫_· j_t t in ((0, T)×Ω).
We begin by summarizing the liminf inequalities for the tilt-independent gradient structure.
Let (ρ^h, j^h)∈𝒞ℰ_h(0,T) converge to (ρ, j)∈𝒞ℰ(0,T) in the sense of Definition <ref>. Then the following liminf inequalities hold for
* the dissipation potential:
∫_0^T 1/2∫_Ω| j_t/ρ|^2 ρ t≤lim inf_h→ 0∫_0^T _ϵ,h(ρ^h_t, j^h_t) t ;
* the Fisher information:
∫_0^T _ϵ(ρ_t) t≤lim inf_h→ 0∫_0^T _ϵ,h(
ρ^h_t) t ;
* the energy functional:
_ϵ(ρ_t)≤lim inf_h→ 0_ϵ,h(ρ^h_t) for all t∈[0,T].
(i) We need to show that the following limsup inequality holds for any φ∈_b^2(Ω):
lim sup_h→ 0^*_ϵ,h(ρ^h, φ^h) ≤1/2∫_Ω |∇φ|^2 ρ,
where {φ^h}_h>0 is defined by φ^h(K) φ(x_K) for K∈^h. Then the desired liminf inequality follows by the duality argument from <cit.>.
From Lemma <ref><ref>, it follows that
_ϵ,h^*(ρ^h, φ^h) = 1/2∑_(K,L)∈Σ^h | φ^h_KL |^2 Λ_H(u^h_K, u^h_L) τ_K|L^h + 1/ϵ∑_(K,L)∈Σ^h O (| φ^h_KL |^3 ) τ_K|L^h.
We note that O (| φ^h_KL |^3 ) = O(h^3) and, therefore,
1/ϵ∑_(K,L)∈Σ^h O (| φ^h_KL|^3 ) τ_K|L^h
= 1/ϵO(h).
Using the inequality Λ_H(a, b) ≤ (a + b) /2, we arrive at
_ϵ,h^*(ρ^h, φ^h) ≤1/2∑_(K,L)∈Σ^h| φ^h_KL|^2 τ_K|L^h/|K|ρ^h_K + O (h ).
With this bound at hand, it is enough to make minor modifications of the proof of <cit.> for the tilt-independent dissipation potential with κ^h_KL = τ_K|L^h / |K| to obtain (<ref>).
(ii) The asserted liminf inequality follows from Theorem <ref>
and Fatou's lemma.
(iii) As the following calculations hold for any t∈ [0,T], we drop the
subscript t. The relation between the continuous and discrete potentials yields the representation of _ϵ,h in the integral form
_ϵ,h(ρ^h) = _ε(ρ̂^h) + O(h).
Since _ϵ is lower semicontinuous w.r.t. the narrow convergence, we then easily conclude that
_ϵ(ρ^h)≤lim inf_h→ 0_ϵ,h(ρ^h),
which completes the proof.
Consider a family {(ρ^h, j^h)}_h>0 of GGF-solutions to the Scharfetter–Gummel scheme (<ref>), for a fixed ϵ>0, according to Definition <ref> and the tilt-independent structure introduced in Section <ref>. Further, let {(ρ̂^h, ^h)}_h>0 be the family of reconstructed pairs as defined in (<ref>). Then, the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ(0,T) and the convergence specified in Theorem <ref>(1) follows from the compactness arguments of Section <ref>.
The liminf inequality from assertion (2) is proven in Theorem <ref>, which immediately implies that _ϵ^[s,t](ρ, j) ≤lim inf_h→ 0_ϵ,h^[s,t](ρ^h, j^h)= 0 for every [s,t]⊂[0,T]. On the other hand, the chain rule <cit.> yields _ϵ^[s,t](ρ, j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is the (, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref>.
§ VANISHING DIFFUSION LIMIT
This section deals with the vanishing diffusion limit for both the discrete and continuous cases, i.e. Theorems <ref> and <ref>. Although the result for the continuous case seems to be obvious, we did not find a reference containing a proof of the statement. For this reason, and for the sake of completeness, we include a proof of the statement in Section <ref>. We begin with the discrete case.
§.§ Discrete Case
We fix a tessellation (^h, Σ^h) with some h>0 and consider the vanishing diffusion limit ϵ→ 0. To simplify notation, we drop the superscript h. As mentioned in the introduction, we expect that the Scharfetter–Gummel flux (<ref>) converges to the upwind flux
lim_ϵ→ 0_K|L^ρ = _K|L^ρ,upτ_K|L^h ( q_K|L^h,+u_K - q_K|L^h,- u_L ), (K,L)∈Σ^h.
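This limit can be made plausible numerically. The sketch below (ours) assumes the classical Bernoulli-function form of the Scharfetter–Gummel flux, j^SG_K|L = τ_K|L ε ( B(-q_K|L/ε) u_K - B(q_K|L/ε) u_L ) with B(s) = s/(e^s - 1); the flux in (<ref>) may differ in normalisation or sign convention, so this is only an illustration, not the paper's exact formula.

import numpy as np

def bernoulli(s):
    # B(s) = s / (e^s - 1), with B(0) = 1
    return 1.0 if abs(s) < 1e-12 else s / np.expm1(s)

def flux_sg(u_K, u_L, q, tau, eps):
    # assumed Scharfetter-Gummel flux (classical Bernoulli form, see lead-in)
    return tau * eps * (bernoulli(-q / eps) * u_K - bernoulli(q / eps) * u_L)

def flux_upwind(u_K, u_L, q, tau):
    # upwind flux from the display above: tau * (q^+ u_K - q^- u_L)
    return tau * (max(q, 0.0) * u_K - max(-q, 0.0) * u_L)

u_K, u_L, q, tau = 0.7, 0.2, 0.3, 1.0
for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, flux_sg(u_K, u_L, q, tau, eps), flux_upwind(u_K, u_L, q, tau))
# the Scharfetter-Gummel values approach the upwind value as eps -> 0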
The result of this section concerns the convergence of the Scharfetter–Gummel scheme (<ref>) to the upwind scheme (<ref>) in the sense of the EDP convergence. Recall that if a pair (ρ^ϵ,h, j^ϵ,h)∈𝒞ℰ_h(0, T) is a GGF-solutions of (<ref>), then (ρ^ϵ,h, j^ϵ,h) is the minimizer for the energy-dissipation functional
_ϵ,h^[s,t](ρ^ϵ,h, j^ϵ,h) = ∫_s^t {_ϵ,h (ρ^ϵ,h_r, j^h,ϵ_r) + _ϵ,h (ρ^ϵ,h_r) } r + _ϵ,h (ρ^ϵ,h_t) - _ϵ,h (ρ^ϵ,h_s)
with _ϵ,h, _ϵ,h, and _ϵ,h defined in (<ref>), (<ref>), and (<ref>) respectively. The objective of this section is to get a compactness statement for {(ρ^ϵ,h, j^ϵ,h)}_ϵ>0 and to find the counterparts to _ϵ,h, _ϵ,h, and _ϵ,h for ϵ = 0. Then we complete the proof of Theorem <ref>.
Note that since (^h, Σ^h) is fixed and non-degenerate, we have the following useful bounds
sup_K∈^h∑_L∈_K^hτ_K|L^h/|K| c_ < ∞.
We begin with the compactness result. Consider a measure J^ϵ∈([0,T]×Σ^h) defined on product measurable sets A× B⊂ [0, T]×Σ^h as
J^ϵ (A× B) ∫_A j_t^ϵ(B) t=∫_A∑_(K,L)∈ B j^ϵ_K|L(t) t.
Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0, T) satisfy
c_0sup_ϵ>0∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t<∞.
Then the family { J^ϵ}_ϵ>0 is bounded in total variation. Moreover,
|J^ϵ|(A×Σ^h) ≤√(c_0c_^1(A)) for any measurable set A⊂[0,T].
Following the initial arguments of the proof of Lemma <ref>, we obtain for any β∈,
_ϵ,h(ρ_t^ϵ, j_t^ϵ) ≥β∑_(K,L)∈Σ^h |j_K|L^ϵ|(t) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u_K(t), u_L(t), βsign(j_K|L^ϵ)/2.
If either a=0 or b=0, then α^*(a,b,x)=0 for any x∈. If a=b, then
α_ϵ^*(a,a,ξ) = a ∫_0^ξ x x = aξ^2/2 = α_0^*(a,a,ξ) for all ξ∈,
We will now reduce the other cases to this case. Indeed, using the 1-homogeneity and concavity of Λ_H, we have for any ξ∈ that
∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u_K, u_L, ξ = ∑_(K,L)∈Σ^hα_ϵ^* ⟨*|τ_K|L^h u_K, τ_K|L^h u_L, ξ
≤α_ϵ^* ⟨*|∑_(K,L)∈Σ^hτ_K|L^h u_K, ∑_(K,L)∈Σ^hτ_K|L^h u_L, ξ
= α_ϵ^*⟨*|1,1,ξ∑_(K,L)∈Σ^hτ_K|L^h u_K ≤ c_ξ^2/2.
Consequently, and after integration over any measurable set A⊂[0,T], we obtain the estimate
∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t ≥β |J^ϵ|(A×Σ^h) - c_/2β^2^1(A).
Taking the supremum over β∈, we arrive at the asserted estimate.
Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0, T) satisfy
c_0sup_ϵ>0∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t<∞.
Then there exist a limit pair (ρ, j) ∈𝒞ℰ_h(0, T) and a (not relabelled) subsequence such that
ρ^ϵ_t ⇀ρ_t in (^h) for all t∈ [0,T],
J^ϵ⇀^* J=∫_· j_t t weakly-* in ([0,T]×Σ^h).
The convergence for J^ϵ follows the same lines as in the proof of Lemma <ref>.
We now prove the convergence for {ρ^ϵ}_ϵ>0. Since (ρ^ϵ, j^ϵ)∈𝒞ℰ_h(0, T), then
| ∑_K∈^hφ_K (ρ^ϵ_K(t) - ρ^ϵ_K(s) ) | = | ∫_s^t ∑_(K,L)∈Σ^hφ (K,L) j^ϵ_KL(r) r |
≤ 2φ_∞ |J^ϵ|([s,t]×Σ^h) for any [s, t] ⊂ [0, T].
Taking supremum over all φ∈(^h) with φ_∞≤ 1, we make use of Lemma <ref> to obtain
ρ_t^ϵ - ρ_s^ϵ_TV≤ C √(|t - s|).
By the Ascoli–Arzelà theorem, there exists a (not relabelled) subsequence of {ρ^ϵ}_ϵ>0 and a limit curve ρ∈([0,T];(^h)), such that the asserted convergence holds.
Since ^h and Σ^h are finite discrete spaces, the weak and strong topologies coincide. In particular, the narrow convergence stated in Lemma <ref> implies the pointwise convergence. We will use this property in the proofs of the following results.
In the next lemma, we establish the convergence of the Fisher information.
Let the family of measures {ρ^ϵ}_ϵ> 0 be such that ρ^ϵ⇀ρ in (^h) as ϵ→ 0, then
lim_ϵ→ 0_ϵ,h (ρ^ϵ) = _h, up (ρ) = 2∑_(K, L)∈Σ^hτ_K|L^hα_0^* ⟨*|u_K, u_L, q_K|L^h/2,
where
α_0^*(a, b, q) = 1/2( a|q^+|^2 + b|q^-|^2 ).
The limit Fisher information contains only the limit of _ϵ,h^2, since lim_ϵ→ 0(_ϵ,h^0 + _ϵ,h^1 )= 0. Recall that
_ϵ,h^2 (ρ^ϵ) = 1/2∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 𝕙_ϵ⟨*| u^ϵ_K, u^ϵ_L, q_K|L^h,
with 𝕙_ϵ being
𝕙_ϵ (a, b, q) = ∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ, 𝔥(s) = 1/4e^s-1-s/sinh^2(s/2).
It is uniformly bounded by the following argument. Since 0 ≤𝔥(s) ≤ 1, s∈, we have that
_h^2 (ρ^ϵ) ≤1/4∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 (u^ϵ_K + u^ϵ_L) ≤1/2 c_pot^2 c_ .
Moreover, we notice that
lim_ϵ→ 0𝔥(s/ϵ) = _(0,∞)(s) + 1/2_{0} (s),
and, hence,
lim_ϵ→ 0∫_0^1 𝔥(λ q/ϵ) (1-λ)λ = 1/2( _(0,∞)(q) + 1/2_{0} (q) ) 𝔥_0(q).
Now we define
u^ϵ_KL∫_0^1 [u^ϵ_K 𝔥(λ q_K|L^h/ϵ) + u^ϵ_L 𝔥(-λ q_K|L^h/ϵ)](1-λ)λ.
Since u^ϵ→ u pointwise on ^h, we get
lim_ϵ→ 0u^ϵ_KL = u_K 𝔥_0(q_K|L^h) + u_L 𝔥_0(-q_K|L^h),
which concludes the proof.
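A short numerical sketch (ours; values arbitrary) of the function 𝔥 and of the pointwise limit used above: it confirms 𝔥(s) ≈ 1/2 near s = 0, and that 𝔥(s/ε) tends to 1 for s > 0 and to 0 for s < 0 as ε → 0.

import numpy as np

def h_fun(s):
    # h(s) = (1/4) (e^s - 1 - s) / sinh(s / 2)^2
    return 0.25 * (np.expm1(s) - s) / np.sinh(s / 2.0)**2

print(h_fun(1e-6))                      # ~ 0.5, matching h(s) = 1/2 + s/6 + o(s^2)
for eps in (1.0, 0.1, 0.01):
    print(eps, h_fun(0.3 / eps), h_fun(-0.3 / eps))
# h(s / eps) -> 1 for s > 0 and -> 0 for s < 0 as eps -> 0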
Finally, we prove the convergence of the dissipation potential.
Let the family of measure-flux pairs { (ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0,T) satisfying
* ρ^ϵ_t ⇀ρ_t in (^h) for all t∈ [0, T],
* ∫_· j_t^ϵ t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Σ^h).
Then,
∫_s^t _up,h(ρ_r,j_r) r ≤lim inf_ϵ→ 0∫_s^t _ϵ,h (ρ^ϵ_r, j^ϵ_r) r for any [s,t]⊂[0,T],
where
_up,h(ρ,j) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K | j^+_K|L/τ_K|L^hu_K|^2 + u_L | j^-_K|L/τ_K|L^hu_L|^2 , u_K=ρ_K/|K|.
We begin by proving the convergence
lim_ϵ→ 0_ϵ,h^* (ρ^ϵ, ξ) = _up,h^*(ρ,ξ) = 1/4∑_(K,L)∈Σ^hτ_K|L^h ⟨*|u_K |ξ_K|L^+|^2 + u_L |ξ_K|L^-|^2
for any ξ∈(Σ^h).
Since ρ^ϵ converges pointwise to ρ (cf. Remark <ref>) and estimate (<ref>) provides
∑_(K,L)∈Σ^hτ_K|L^hα_ϵ^* ⟨*|u^ϵ_K, u^ϵ_L, ξ_KL≤ 2∑_(K,L)∈Σ^hτ_K|L^hα_ϵ^* ⟨*|u^ϵ_K, u^ϵ_L, ξ_∞≤ξ_∞^2 c_ ,
we obtain the asserted convergence by means of Lemma <ref><ref> and the dominated convergence.
We now use the Legendre duality to infer the asserted liminf inequality for the dissipation potential. From the convergence result established in the first part of the proof, it follows that
∫_s^t ∑_(K,L)∈Σ^hχ_r ξ_KL j_KL(r) r - ∫_0^T 2∑_(K,L)∈Σ^hτ_K|L^hα_0^* ⟨*| u_K(r), u_L(r), χ_r ξ_KL/2 r
≤lim_ϵ→ 0∫_s^t ∑_(K,L)∈Σ^hχ_r ξ_KL j^ϵ_KL(r) r - lim sup_ϵ→ 0∫_s^t _ϵ,h^* (ρ^ϵ_r, χ_r ξ) r
≤lim inf_ϵ→ 0∫_s^t {∑_(K,L)∈Σ^hχ_r ξ_KL j^ϵ_KL(r) - _ϵ,h^* (ρ^ϵ_r, χ_r ξ) } r
≤lim inf_ϵ→ 0∫_s^t _ϵ,h (ρ^ϵ_r, j^ϵ_r ) r for any χ∈([0,T]), ξ∈(Σ^h).
Now let η∈([0,T]×Σ^h). We introduce the measures Θ_ρ^±, Θ∈([0,T]×Σ^h) in the way that for any measurable A ⊂ [0,T] and B ⊂Σ^h it holds that
Θ(A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h t,
Θ_ρ^+ (A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h u_K(t) t, Θ_ρ^- (A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h u_L(t) t.
Then, we rewrite
∫_s^t ∑_(K,L)∈Σ^hη_KL(r) J_KL(r) r - ∫_s^t 2∑_(K,L)∈Σ^hτ_K|L^hα_0^* ⟨*| u_K(r), u_L(r), η_KL(r)/2 r
= ∬_[s,t]×Σ^hη J - ∬_[s,t]×Σ^h 2 α_0^* ( Θ^+_ρ/Θ, Θ^-_ρ/Θ,η/2) Θ I_0^[s,t] (η).
It is left to determine sup_η∈([s,t]×Σ^h) I_0^[s,t] (η). We note that
∬_[s,t]×Σ^hη J
= ∬_[s,t]×Σ^hη^+ ( [ J/Θ_ρ^+]^+ - [ J/Θ_ρ^+]^-) Θ_ρ^+ + η^- ( [ J/Θ_ρ^-]^- - [ J/Θ_ρ^-]^+ ) Θ_ρ^-.
The two negative terms can only decrease the total value, therefore the supremum over ([0,T]×Σ^h) is equivalent to taking supremum over η∈([0,T]×Σ^h) satisfying
η^±≡ 0 on (J^0)^∓.
Because of the structure of α_0^* with one part depending on η^+ and the other part depending on η^-, the expression under the supremum splits into two independent parts with the supremum over η^+ and the supremum over η^-. The first part is
sup_η∈([s,t]×Σ^h){∬_[s,t]×Σ^hη^+ [ J/Θ_ρ^+]^+ Θ_ρ^+ - η^+/2^2_L^2([s,t]×Σ^h, Θ_ρ^+)} = [ J/Θ_ρ^+]^+ ^2_L^2([s,t]×Σ^h, Θ_ρ^+).
and the second part is
sup_η∈([s,t]×Σ^h){∬_[s,t]×Σ^hη^- [ J/Θ_ρ^-]^- Θ_ρ^- - η^-/2^2_L^2([s,t]×Σ^h, Θ_ρ^-)} = [ J/Θ_ρ^-]^- ^2_L^2([s,t]×Σ^h, Θ_ρ^-).
In both parts, we infer that if the supremum is finite, then it equals the L^2-norm of the corresponding flux densities.
Combining the two, we obtain
sup_η∈([s,t]×Σ^h) I_0^[s,t](η)
= [ J/Θ_ρ^+]^+ ^2_L^2([s,t]×Σ^h, Θ_ρ^+) + [ J/Θ_ρ^-]^- ^2_L^2([s,t]×Σ^h, Θ_ρ^-)
= ∫_s^t ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K(r) | j^+_K|L(r)/τ_K|L^h u_K(r)|^2 + u_L(r) | j^-_K|L(r)/τ_K|L^h u_L(r)|^2 r
= ∫_s^t _up,h (ρ_r, j_r) r,
therewith concluding the proof.
To summarize, the energy-dissipation functional corresponding to the upwind scheme comprises the driving energy
(^h)∋ρ↦_up,h (ρ) = ∑_K∈^h V^h_K ρ_K + 1/2∑_K,L∈^h×^h W^h_KLρ_K ρ_L,
the dissipation potential _up,h: (^h) ×(Σ^h) →_+ ∪{+∞}
(^h) ×(Σ^h)∋ (ρ,j)↦_up,h(ρ, j) = ∑_(K,L)∈Σ^h⟨*||j^+_K|L|^2/τ_K|L^hu_K + |j^-_K|L|^2/τ_K|L^hu_L ,
and the Fisher information
(^h)∋ρ↦_up,h (ρ) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K| q^+_K|L/2|^2 + u_L| q^-_K|L/2|^2 .
For completeness, we point out that the dual dissipation potential in this case is
(^h) ×(Σ^h)∋ (ρ,ξ)↦_up,h^* (ρ, ξ) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K|ξ_K|L^+/2|^2 + u_L| ξ_K|L^-/2|^2 .
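Edge by edge, the dissipation potential and its dual above form a Legendre pair, so the Fenchel–Young inequality holds. The following Python sketch (ours; τ, u_K, u_L and the random samples are arbitrary) evaluates both functionals on a single edge and checks this inequality.

import numpy as np

def diss(j, u_K, u_L, tau):
    # dissipation potential on a single edge (K, L)
    return max(j, 0.0)**2 / (tau * u_K) + max(-j, 0.0)**2 / (tau * u_L)

def diss_dual(xi, u_K, u_L, tau):
    # dual dissipation potential on a single edge (K, L)
    return tau * (u_K * (max(xi, 0.0) / 2.0)**2 + u_L * (max(-xi, 0.0) / 2.0)**2)

rng = np.random.default_rng(0)
u_K, u_L, tau = 0.8, 0.3, 2.0
for _ in range(5):
    j, xi = rng.normal(), rng.normal()
    assert diss(j, u_K, u_L, tau) + diss_dual(xi, u_K, u_L, tau) >= xi * j - 1e-12
print("Fenchel-Young inequality verified on random samples")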
Consider a family {(ρ^ϵ,h, j^ϵ,h)}_ϵ>0 of GGF-solutions to (<ref>) according to Definition <ref> and the tilt-independent structure introduced in Section <ref>. Lemma <ref> and Lemma <ref> provide the existence of a subsequential limit pair (ρ^up,h, j^up,h) ∈𝒞ℰ_h (0, T) and the convergence specified in Theorem <ref>(1).
The liminf inequality for the energy-dissipation functionals from assertion (2) is proven in Lemma <ref> and Lemma <ref>. With a simple chain rule, we easily deduce _up,h^[s,t](ρ^up,h, j^up,h)≥ 0 for every [s,t]⊂[0,T], and hence, the limit pair (ρ^up,h, j^up,h) is the GGF solution of the upwind scheme (<ref>).
§.§ Continuous case
Recall that for each ϵ>0, a gradient flow solution (ρ^ϵ,j^ϵ) of (<ref>) satisfies
^[s,t]_ϵ (ρ^ϵ, j^ϵ)=∫_s^t {(ρ_r^ϵ,j_r^ϵ) + _ϵ(ρ_r^ϵ)} r + _ϵ(ρ_t^ϵ) - _ϵ(ρ_s^ϵ) = 0 for all [s,t]⊂[0,T],
with Fisher information
_ϵ(ρ) = 2ϵ^2∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω|∇𝖰(ρ)|^2 ρ, u = ρ/^d.
In particular, √(u^ϵ)∈ H^1(Ω) for every ϵ>0.
As in the previous results, we will pass to the liminf in each of the terms in the energy-dissipation functional _ϵ. Due to the joint lower semicontinuity of the dissipation potential w.r.t. weak-* convergence and the fact that _agg≤_ϵ, the only difficulty here is in proving the liminf inequality for the Fisher information _ϵ, as it is unclear that the first two terms vanish in the limit.
However, since the chain rule ∇ v^2 = 2 v ∇ v∈ L^1(Ω) holds for v∈ H^1(Ω), _ϵ takes the alternative form
_ϵ(ρ) = 1/2∫_Ω|2ϵ∇√(u) + √(u) ∇𝖰(ρ)|^2 x, u = ρ/^d, √(u)∈ H^1(Ω).
Moreover, by defining the ^d-valued measure
g_t^ϵ := √(u_t^ϵ)(2ϵ∇√(u_t^ϵ) + √(u_t^ϵ)∇𝖰(ρ_t^ϵ))^d = (ϵ∇ u_t^ϵ + u_t^ϵ∇𝖰(ρ_t^ϵ) )^d∈(Ω;^d),
for every t∈[0,T], we can further express _ϵ(ρ^ϵ) as
_ϵ(ρ_t^ϵ) = 1/2∫_Ω| g_t^ϵ/ρ_t^ϵ|^2 ρ_t^ϵ = (ρ_t^ϵ,g_t^ϵ).
Therefore, if ρ_t^ϵ⇀^* ρ_t weakly-* in (Ω) and g_t^ϵ⇀^* g_t weakly-* in (Ω;^d) for every t∈[0,T], then the weak-* lower semicontinuity of yields
(ρ_t,g_t) ≤lim inf_ϵ→ 0(ρ_t^ϵ,g_t^ϵ) = lim inf_ϵ→ 0_ϵ(ρ_t^ϵ).
Hence, it suffices to show that g_t^ϵ⇀^* ρ_t ∇𝖰(ρ_t) weakly-* in (Ω;^d) for every t∈[0,T].
Let {ρ^ϵ}_ϵ>0⊂([0,T];(Ω)), ρ∈([0,T];(Ω)) be such that ρ_t^ϵ⇀^* ρ_t weakly-* in (Ω) for every t∈[0,T] and the interaction potential W satisfy (<ref>). Then for every t∈[0,T], the sequence {g_t^ϵ}_ϵ>0⊂(Ω;^d) defined in (<ref>) satisfies
g_t^ϵ⇀^* g_t:=ρ_t ∇𝖰(ρ_t) weakly-* in (Ω;^d).
In particular, we have
∫_s^t _agg(ρ_r) r≤lim inf_ϵ→ 0∫_s^t _ϵ(ρ_r^ϵ) r for every [s,t]⊂[0,T].
Let φ∈_c^1(Ω;^d) be arbitrary and t∈[0,T]. Then
⟨φ,g_t^ϵ⟩ = ∫_Ωφ·(ϵ∇ u_t^ϵ + u_t^ϵ∇𝖰(ρ_t^ϵ) ) x = - ϵ∫_Ωdivφ ρ_t^ϵ + ∫_Ωφ·∇𝖰(ρ_t^ϵ)ρ_t^ϵ,
and therefore
|⟨φ,g_t^ϵ - g_t⟩ | ≤ϵdivφ_sup + φ_sup∇𝖰(ρ_t^ϵ)-∇𝖰(ρ_t)_sup + |⟨φ·∇𝖰(ρ_t),ρ_t^ϵ-ρ_t⟩|
From the assumptions placed on the potentials V and W, one easily deduces the uniform convergence ∇𝖰(ρ_t^ϵ)-∇𝖰(ρ_t)_sup→ 0 as ϵ→ 0. Clearly, the other terms also converge to zero.
Using the weak-* lower semicontinuity of , we then obtain
(ρ_t,g_t) ≤lim inf_ϵ→ 0_ϵ(ρ_t^ϵ) for every t∈[0,T].
Since _ϵ(ρ_t^ϵ)≥ 0 for t∈[0,T], an application of Fatou's lemma then yields the result.
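The algebraic identity behind the compact form of _ϵ — expanding 1/2∫|2ε∇√u + √u ∇𝖰(ρ)|² x into the three terms of the Fisher information above — can be checked numerically. A one-dimensional sketch (ours; the density, the potential and ε are arbitrary, and 𝖰(ρ) is taken as a pure confinement potential for simplicity):

import numpy as np

x = np.linspace(0.0, 1.0, 20001)
u = np.exp(-20.0 * (x - 0.5)**2)
u /= np.trapz(u, x)                      # a smooth probability density on (0, 1)
Q = np.cos(2.0 * np.pi * x)              # Q(rho) = V, a pure confinement potential
eps = 0.3

du = np.gradient(u, x)
dQ = np.gradient(Q, x)
dsq = np.gradient(np.sqrt(u), x)

compact = 0.5 * np.trapz((2.0 * eps * dsq + np.sqrt(u) * dQ)**2, x)
expanded = (2.0 * eps**2 * np.trapz(dsq**2, x)
            + eps * np.trapz(du * dQ, x)
            + 0.5 * np.trapz(dQ**2 * u, x))
print(compact, expanded)   # the two values agree up to discretisation error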
Following the same strategy as in the previous sections, we obtain the following compactness result.
Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ (0, T) satisfying
c_0sup_ϵ>0∫_0^T _ϵ(ρ_t^ϵ, j_t^ϵ) t<∞.
Then there exist a limit pair (ρ, j) ∈𝒞ℰ(0, T) and a (not relabelled) subsequence such that
ρ^ϵ_t ⇀^* ρ_t weakly-* in (Ω) for all t∈ [0,T],
∫_· j_t^ϵ t J^ϵ⇀^* J=∫_· j_t t weakly-* in ([0,T]×Ω;^d).
An application of Jensen's inequality immediately yields
sup_ϵ>0|j_·^ϵ|(Ω)_L^2((0,T))^2 ≤ 2sup_ϵ>0∫_0^T(ρ_t^ϵ,j_t^ϵ) t = 2 c_0.
In particular, the sequence {t↦ |j_t^ϵ|(Ω)}_ϵ>0 is equi-integrable, and the weak-* compactness of {J^ϵ}_ϵ>0 can be proven as in Lemma <ref>.
We now prove the asserted weak-* convergence for the sequence {ρ^ϵ}_ϵ>0⊂([0,T];(^d)). Since (ρ^ϵ,j^ϵ) satisfies the continuity equation (<ref>), for any φ∈_c^1(^d) with ∇φ_L^∞≤ 1:
|⟨φ,ρ_t^ϵ - ρ_s^ϵ⟩| = |∫_s^t ⟨∇φ, j_r^ϵ⟩ r| ≤∫_s^t |j_r^ϵ|(Ω) r ≤√(|t-s|)|j_·^ϵ|(Ω)_L^2((0,T)).
Taking the supremum over Lipschitz functions φ satisfying ∇φ_L^∞≤ 1 then gives
W_1(ρ_t^ϵ,ρ_s^ϵ) ≤ c_0√(|t-s|) for all ϵ>0 and [s,t]⊂[0,T],
where W_1 is the 1-Wasserstein distance. The Ascoli–Arzelà theorem then provides the existence of a limit curve ρ∈([0,T];(Ω)) and a subsequence such that the convergence holds.
We now conclude with the proof of Theorem <ref>.
Consider a family {(ρ^ϵ, j^ϵ)}_ϵ>0 of gradient flow solutions to (<ref>) according to Definition <ref>. Lemma <ref> provides the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ (0, T) and the convergence specified in Theorem <ref>(1).
To show the liminf inequality for the energy-dissipation functionals from assertion (2), we begin by noticing that
∫_s^t (ρ_r^ϵ,j_r^ϵ) r = 1/2∬_[s,t]×Ω| J^ϵ/ R^ϵ|^2 R^ϵ, R^ϵ = ∫_·ρ_t^ϵ t,
where the right-hand side is jointly weakly-* lower semicontinuous as a functional on ([s,t]×Ω)×([s,t]×Ω;^d). Since (R^ϵ,J^ϵ)⇀^* (R,J) weakly-* in ([s,t]×Ω)×([s,t]×Ω;^d) with R = ∫_·ρ_t t and J=∫_· j_t t, we then conclude that
lim inf_ϵ→ 0∫_s^t (ρ_r^ϵ,j_r^ϵ) r ≥1/2∬_[s,t]×Ω| J/ R|^2 R = ∫_s^t (ρ_r,j_r) r.
Together with Lemma <ref> and the fact that _agg≤_ϵ, we easily deduce the asserted liminf inequality _agg^[s,t](ρ,j) ≤lim inf_ϵ→ 0_ϵ^[s,t](ρ^ϵ, j^ϵ)=0 for every [s,t]⊂[0,T].
Finally, the chain rule <cit.> yields _agg^[s,t](ρ,j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is an (_agg, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref>
§ FROM THE UPWIND SCHEME TO THE AGGREGATION EQUATION
In this section, we complete the commutative diagram in Figure <ref> by studying the variational convergence of the upwind scheme (<ref>) to the aggregation equation (<ref>). We mentioned earlier that we could not consider general tessellations in this section, thus, we restrict to Cartesian grids. Moreover, we assume (<ref>) for the interaction potential W. On the other hand, we can handle any initial data ρ_in^h∈(^h) satisfying ρ̂^h_in⇀^* ρ_in weakly-* in (Ω) without any additional assumptions.
We work with (_up,h, _up,h, _up,h^*)-generalized gradient flow solutions of the upwind scheme (<ref>), where _up,h, _up,h, and _up,h^* are defined in (<ref>), (<ref>), and (<ref>), respectively. The strategy should be familiar to the reader by now. We begin with the necessary compactness result in Lemma <ref>. The convergence of the dual dissipation potential _up,h^* and, consequently, the Fisher information _up,h given in (<ref>) is established in Theorem <ref>. We conclude this section with the proof of Theorem <ref>.
We begin this section with a compactness result.
The family {J^h}_h>0 is weakly-* compact in ((0, T)×Ω; ^d) and the family {t↦ |^h_t|}_h>0 is equi-integrable. In particular, there exists a (not relabelled) subsequence of { (ρ̂^h, ^h) }_h>0 and a pair (ρ, j)∈𝒞ℰ(0,T) such that
ρ̂^h_t →ρ_t weakly-* in (Ω) for all t ∈ [0, T],
∫_·^h_t t J^h ⇀^* J=∫_· j_t t weakly-* in ((0, T)×Ω; ^d).
The weak-* compactness of {J^h}_h>0 and equi-integrability of the family {t↦ |^h_t|}_h>0 can be proven as in Lemma <ref>. Indeed, using the dissipation potential _up,h (cf. (<ref>)) instead, we obtain
sup_h>0|_·^h|(Ω)_L^2((0,T))^2≤ 2c_κ d^2 sup_h>0∫_0^T _up,h(ρ_t^h,j_t^h) t =: c_0 <∞.
For the pointwise weak-* convergence of {ρ̂_t^h}_h>0, we simply mimic the proof of Lemma <ref>.
Let { (^h, Σ^h)}_h>0 be a family of Cartesian tessellations with edge-length h>0. Let the family {ρ^h∈(^h) }_h>0 satisfy ρ̂^h ⇀^* ρ weakly-* in (Ω). If the family of discrete functions {φ^h∈(^h)}_h>0 is such that for some φ∈ C_b^1(Ω):
φ^h(K,L) = ∇φ(x_K) · (x_L - x_K) + o(h),
then
lim_h→ 0_up,h^*(ρ^h, φ^h) = 1/2∫_Ω |∇φ(x)|^2 ρ( x).
Consequently, if the interaction potential W satisfies assumption (<ref>), then
lim_h→ 0_up,h(ρ^h) = 1/2∫_Ω| ∇𝖰 (ρ) |^2 ρ,
with 𝖰(ρ)=∇ V + ∇ W ∗ρ.
Using symmetry, we rewrite the functional as
_up,h^*(ρ^h, φ^h) = 1/4∑_(K,L)∈Σ^h( u^h_K |(φ^h(K,L))^+ |^2 + u^h_L | (φ^h(K,L))^- |^2 )τ_K|L^h
= 1/2∑_(K,L)∈Σ^h| (φ^h(K,L))^+ |^2 u^h_K τ_K|L^h.
Since the mapping ∋ q ↦ q^+ is Lipschitz, we have that
(φ^h(K,L))^+ = (∇φ(x_K) · (x_L - x_K))^+ + o(h).
Inserting this expression into the functional yields
_up,h^*(ρ^h, φ^h) = 1/2∑_(K,L)∈Σ^hτ_K|L^h u^h_K| ( ∇φ (x_K) · (x_L - x_K) )^+ |^2 + o(h^2) ∑_(K,L)∈Σ^hτ_K|L^h/|K|ρ^h_K
= 1/2∑_K∈^h⟨∇φ (x_K), ∑_L∈^h_Kτ_K|L^h (x_L - x_K) ⊗ (x_L - x_K) ^φ_K(L) ∇φ (x_K) ⟩ u^h_K + o(1),
where we set
^φ_K _{ M∈^h: ∇φ (x_K) · (x_M - x_K) > 0 } + 1/2_{ M∈^h:∇φ (x_K) · (x_M - x_K) = 0 }
The indicator ^φ_K means that for any cell K∈^h the sum goes only over the faces (K|L) for which ∇φ (x_K) · (x_L - x_K) > 0. For the Cartesian grid, all the neighboring cells ^h_K can be grouped in pairs M, L ∈^h_K such that x_L - x_K = - (x_M - x_K) and x_L - x_K = ± h e_i for some basis vector e_i, i ∈{ 1, …, d }. We illustrate this idea in Figure <ref> below.
This means that for any ∇φ(x_K) that is not parallel to any basis vector {e_i}_i=1^d, the indicator ^φ_K "chooses" all the basis vectors with either plus or minus sign. Hence, the tensor takes the form
∑_L∈^h_Kτ_K|L^h (x_L - x_K) ⊗ (x_L - x_K) _K^φ(L) = h^d ∑_i=1^d e_i ⊗ e_i = |K| .
If ∇φ(x_K) is parallel to some e_i for some i∈{1,…,d}, then ^φ_K includes both he_i and -he_i with the coefficient 1/2, which does not change the form of the tensor.
The expression above then simplifies to
_up,h^*(ρ^h, φ^h) = 1/2∑_K∈^h| ∇φ (x_K) |^2 |K| u^h_K + o(1).
Since ∇φ is uniformly continuous on Ω, it holds that
| ∇φ (x_K) |^2 = _K | ∇φ (x) |^2 x + o(1).
Therefore, the functional admits an integral form
_up,h^*(ρ^h, φ^h) = 1/2∫_Ω |∇φ(x)|^2 ρ̂^h( x) + o(1) ⟶1/2∫_Ω |∇φ(x)|^2 ρ( x) as h→ 0.
As for the convergence of the Fisher information, we notice that the assumptions on V and W give
|∇𝖰 (ρ̂^h) (x_K)|^2 = _K |∇𝖰 (ρ̂^h) (x)|^2 x + o(1),
and therefore,
_up,h(ρ^h) = 1/2∫_Ω| ∇𝖰 (ρ̂^h) (x) |^2 ρ̂^h ( x) + o(1).
The assertion then follows from the weak-* convergence ρ̂^h ⇀^* ρ in (Ω) and the uniform convergence ∇𝖰 (ρ̂^h) →∇𝖰(ρ) in (Ω).
Consider a family {(ρ^h, j^h)}_h>0 of GGF-solutions to the upwind scheme (<ref>) according to Definition <ref> and the generalized gradient structure obtained as the EDP limit in Section <ref>. Let {(ρ̂^h, ^h)}_h>0 be defined as in (<ref>). Then, the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ(0,T) and the convergence specified in Theorem <ref>(1) follow from Lemma <ref>.
The convergence of the Fisher information is proven in Theorem <ref>. The liminf inequality for the dissipation potential follows from the limit of the dual dissipation potential shown in Theorem <ref> and a duality argument from <cit.>. In this way, the assertion (2) is proven and it immediately follows that _agg^[s,t](ρ, j) ≤lim inf_h→ 0_up,h^[s,t](ρ^h, j^h)= 0 for every [s,t]⊂[0,T]. On the other hand, the chain rule <cit.> yields _agg^[s,t](ρ,j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is an (_agg, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref>.
§ PROPERTIES OF THE TILTED DUAL DISSIPATION POTENTIAL
The following lemma contains some properties and an integral representation of the harmonic-logarithm mean Λ_H introduced in (<ref>).
A function M:_+ ×_+ →_+ is a mean if it is
* positively one-homogeneous: M(λ s,λ t) = λ M(s,t) for all s,t∈_+ and λ >0;
* bounded by min{s,t}≤ M(s,t)≤max{s,t} for all s,t∈_+;
* jointly concave.
The logarithmic mean Λ: _+ ×_+ →_+,
Λ(s,t) = ∫_0^1 s^τ t^1-ττ =
s-t/log s - log t , s ≠ t ;
s , s=t .
is a mean between the geometric and arithmetic mean
√(st )≤Λ(s,t) ≤s+t/2 ,
with derivatives bounded
∂_1 Λ(s,t) = ∂_2Λ(t,s) and ∂_1 Λ(s,t) = Λ(s,t)(s-Λ(s,t))/s(s-t) .
The harmonic-logarithmic mean Λ_H : _+ ×_+ →_+ defined by
Λ_H(s,t) = 1/Λ⟨*|1/s, 1/t = st/Λ(s, t)
is a mean between the harmonic and geometric mean
2/1/s+ 1/t≤Λ_H(s,t) ≤√(st)
with the integral representations
Λ_H(s,t) = ∫_0^1 τ/τ/s+ (1-τ)/t = ∫_0^∞s tτ/(τ +s) (τ+t)
and derivatives
∂_1 Λ_H(s,t)=∂_2 Λ_H(t,s) = t( Λ(s, t) - t )/(s-t)Λ(s, t).
See, for instance <cit.> for many properties of the logarithmic mean, from which the analogous ones of the harmonic-logarithmic mean follow.
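A quick numerical check (ours; values arbitrary) of the ordering harmonic ≤ Λ_H ≤ geometric ≤ Λ ≤ arithmetic and of the first integral representation of Λ_H:

import numpy as np

def log_mean(s, t):
    return s if s == t else (s - t) / (np.log(s) - np.log(t))

def harm_log_mean(s, t):
    return s * t / log_mean(s, t)

s, t = 3.0, 0.5
harmonic   = 2.0 / (1.0 / s + 1.0 / t)
geometric  = np.sqrt(s * t)
arithmetic = 0.5 * (s + t)
print(harmonic, harm_log_mean(s, t), geometric, log_mean(s, t), arithmetic)  # increasing

taus = np.linspace(0.0, 1.0, 20001)
print(np.trapz(1.0 / (taus / s + (1.0 - taus) / t), taus), harm_log_mean(s, t))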
The tilt-independent dual dissipation potential _ϵ,h^* in (<ref>) is given in terms of the function α^*_ϵ defined in (<ref>), which we recall here for convenience
α_ϵ^*(a, b, ξ) = ϵ∫_0^ξsinh( x/ϵ) Λ_H (a e^-x/ϵ, b e^x/ϵ) x= ϵ^2 α_1 (a, b, ξ/ϵ).
Below we prove useful properties of α_ϵ^*.
The function α_ϵ^*:_+×_+×→_+ in (<ref>) has the following useful properties:
* α_ϵ^* (a, b, ξ) is convex in ξ for fixed a,b>0, with min{a,b}≤∂_ξ^2 α_ϵ^* (a, b, ξ) ≤max{a,b};
* α_ϵ^* (a, b, ξ) is positively one-homogeneous and jointly concave in (a,b) for fixed ξ;
* α_ϵ^* satisfies the following bound:
α_ϵ^* (a, b, ξ) ≤ϵ^2 √(ab)( cosh( | ξ/ϵ| ) - 1 )= 1/4√(ab) Ψ_ϵ^*(2ξ).
Moreover, the expansion for ξ≪ 1 is given by
α_ϵ^*(a,b,ξ) = Λ_H(a,b) ξ^2/2 + O⟨*|ξ^3/ϵ;
* It holds that
α_ϵ^*(a,b,ξ) →1/2( a (ξ^+)^2 + b (ξ^-)^2 ) α_0^*(a,b,ξ) as ϵ→ 0 ,
where ξ^± are the positive and negative parts of ξ, respectively. Moreover,
| α_ϵ^*(a,b,ξ) - α_0^* (a,b,ξ) | = O(C_a, b, ξ ϵ),
where the constant C_a, b, ξ < ∞ depends on a, b, ξ.
* The function β_ϵ: _+×_+→_+ defined for the argument ξ = - ϵlog√(b/a) in α_ϵ^* has the representation
β_ϵ(a, b) α_ϵ^* (a, b, -ϵlog√(b/a))
= ϵ^2/4∫_a^b ab/z[ 1/Λ(z,a) - 1/Λ(z,b)] z;
* The function β_ϵ:_+×_+→_+ defined in (e) is jointly convex, continuous with
β_ϵ (a, 0) ϵ^2/4π^2/6 a and, symmetrically, β_ϵ (0, b) ϵ^2/4π^2/6 b,
and satisfies the following bounds:
ϵ^2/4 (√(a)-√(b))^2 ≤ϵ^2/4⟨*|a-b^2/a+b≤β_ϵ(a, b) ≤ϵ^2/2 (√(a)-√(b))^2;
Moreover, the function _+ ×_+ ∋ (a, b) ↦β_ϵ (a^2, b^2) is differentiable.
* The function α_ϵ^*(a,b,-ϵlog√(b/a) + q / 2 ) has the expansion
α_ϵ^*(a,b,-ϵlog√(b/a) + q/2 ) = β_ϵ(a, b) + ϵ/4(a-b) q + q^2/4𝕙_ϵ (a, b, q)
with
𝕙_ϵ (a, b, q) ∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ, 𝔥(s) = 1/4e^s-1-s/sinh^2(s/2).
(a) From the representation of α^*_1 in terms of the harmonic-logarithmic mean, it follows that
∂_ξα^*_1(a, b, ξ) = sinh(ξ) Λ_H (ae^-ξ, b e^ξ) = sinh(ξ) ab/Λ (ae^-ξ, b e^ξ).
It also holds
∂_ξ^2 α^*_1(a,b,ξ) = a b/⟨*|a e^-ξ - b e^ξ^2⟨*| a (e^-2ξ-1)+ b (e^2ξ -1) + (a-b)⟨*|loge^-ξ/b - loge^ξ/a ,
which can be rewritten with the help of the function
g(x) = x log x - x +1/(x-1)^2
as
∂_ξ^2 α^*_1(a,b,ξ) = a g⟨*|a/be^-2ξ + b g⟨*|b/a e^2ξ .
The convexity follows now by observing that
∀ x∈ [0,1] : 0≤ g(x) ≤ 1 and g(x) + g(x^-1) = 1
and hence the bound
min{a,b}≤∂_ξ^2 α^*_1(a,b,ξ) ≤max{ a ,b } ,
implying the convexity in ξ for fixed a,b>0.
(b) The positively one-homogeneity and joint concavity follows from the properties of Λ_H.
(c) Let ξ>0. Using the inequality between the harmonic-logarithmic and geometric mean, we obtain
α_1 (a, b, ξ) = ∫_0^ξsinh(x) Λ_H (a e^-x, b e^x) x
≤∫_0^ξsinh(x) √(ab) x = √(ab)( cosh(ξ) - 1 ).
If ξ < 0, then
α_1 (a, b, ξ) = ∫_0^|ξ|sinh(x) Λ_H(ae^x, be^-x) x ≤√(ab)( cosh(|ξ|) - 1 ).
Combining the two cases and considering α_ϵ, we get
α_ϵ (a, b, ξ) ≤ϵ^2 √(ab)( cosh( | ξ/ϵ| ) - 1 ).
As for the asymptotic expansion, we obtain, by definition of α^*_1,
α^*_1(a, b, ξ) = ∂_ξ^2 α^*_1(a, b, ξ) |_ξ=0ξ^2/2 + O( |ξ|^3 )
= Λ_H (a, b) ξ^2/2 + O( |ξ|^3 ).
Then it follows directly that
α^*_ϵ(a, b, ξ) = ϵ^2 α^*_1 (a, b, ξ/ϵ) = Λ_H (a, b) ξ^2/2 + O( |ξ|^3/ϵ).
(d) We rewrite α^*_ϵ as
α^*_ϵ(a, b, ξ) =ϵ^2 ∫_0^ξ/ϵsinh(x) Λ_H (ae^-x, be^x) x
= ϵ^2/2∫_0^ξ/ϵ( Λ_H (a, be^2x) - Λ_H (ae^-2x, b) ) x
= ϵ/2∫_0^ξ( Λ_H (a, be^2x/ϵ) - Λ_H (ae^-2x/ϵ, b) ) x.
For x > 0, it holds that
ϵ/2Λ_H (a, be^2x/ϵ)
= ϵ/2ab e^2x/ϵ/a - be^2x/ϵ( loga/b - 2x/ϵ)
= ab/a e^-2x/ϵ- b( ϵ/2loga/b - x)
ax,
and
- ϵ/2Λ_H (ae^-2x/ϵ, b)
= - ab/a - b e^2x/ϵ( ϵ/2loga/b - x) 0.
For x < 0, similarly, we obtain
ϵ/2( Λ_H (a, be^2x/ϵ) - Λ_H (ae^-2x/ϵ, b) ) bx.
Combining the two cases yields
lim_ϵ→ 0α^*_ϵ(a, b, ξ)
= _ξ > 0∫_0^ξ ax x + _ξ < 0∫_0^ξ bx x = 1/2( a (ξ^+)^2 + b (ξ^-)^2 ).
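The convergence in (d) is easy to visualise numerically by evaluating α_ε^* through quadrature of its defining integral and comparing with α_0^*. The sketch below is ours and the parameters are arbitrary.

import numpy as np

def log_mean(s, t):
    return s if np.isclose(s, t) else (s - t) / (np.log(s) - np.log(t))

def harm_log_mean(s, t):
    return s * t / log_mean(s, t)

def alpha_eps(a, b, xi, eps, n=20000):
    xs = np.linspace(0.0, xi, n)
    vals = np.array([np.sinh(x / eps) * harm_log_mean(a * np.exp(-x / eps),
                                                      b * np.exp(x / eps)) for x in xs])
    return eps * np.trapz(vals, xs)

def alpha_0(a, b, xi):
    return 0.5 * (a * max(xi, 0.0)**2 + b * max(-xi, 0.0)**2)

a, b, xi = 2.0, 0.5, 0.4
for eps in (1.0, 0.5, 0.1, 0.05):
    print(eps, alpha_eps(a, b, xi, eps), alpha_0(a, b, xi))
# the gap decays linearly in eps, as stated in (d)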
(e) Direct calculation shows
β_ϵ(a, b) = α^*_ϵ(a, b, -ϵlog√(b/a))
= ϵ^2 α^*_1 (a, b, log√(a/b))
= ϵ^2 ∫_0^log√(a/b)sinh(x) Λ_H(ae^-x, be^x) x
= ϵ^2/4∫_1^a/b(√(y) - 1/√(y)) ab/Λ( a/√(y), b√(y))1/y y
= ϵ^2/4∫_1^a/bab/y[ 1/Λ(a/y, b ) - 1/Λ(a, b y )] y
= ϵ^2/4∫_a^b ab/z[ 1/Λ(z, a) - 1/Λ(z, b)] z.
(f) The joint convexity of β_ϵ follows from (a) and (b). It is clear that β_ϵ is continuously differentiable in _+ ×_+ since it is defined as an integral of a bounded continuous function. However, on the boundary {0}× [0, +∞) ∪ [0, +∞) ×{0} some partial derivatives become -∞. In the case of (a, b) ↦β_ϵ(a^2, b^2), the directional derivatives are continuous and bounded:
0 ≥∂_1 β_1 (a^2, 1) = - 2a ∫_a^2^1 log z/z (z - 1) z ≥ - 2a ∫_a^2^1 1/z √(z) z = 4a 1/√(z)|_a^2^1
= 4a ( 1 - 1/a) = 4 (a - 1) > -∞.
As for the bounds, we begin with the
Upper bound. Using the inequality that the harmonic-logarithmic mean is less or equal to the geometric mean yields
β_ϵ (a, b)
≤ϵ^2 √(ab)∫_0^-log√(b/a)sinh(x) x
= ϵ^2 √(ab)( cosh( -log√(b/a)) - 1 )
= ϵ^2/2( √(a) - √(b))^2.
Tight lower bound. Since β_1 is positively one-homogeneous it is enough to prove that
β_1(a, 1) ≥γ(a) 1/4(a-1)^2/a+1 ∀ a≥ 0.
For a = 0 the inequality holds, since β_1(0, 1) = 1/4π^2/6≥1/4 = γ(0). It is left to consider a > 0.
We notice that β_1(1, 1) = 0 = γ(1). Now we aim to compare the derivatives ∂_a β_1(a,1) and ∂_a γ(a) for a∈ (0,1) and a∈(1,∞). The derivative of γ is
∂_a γ(a) = 1/4(a-1)(a+3)/(a+1)^2 = ∫_1^a 2/(z+1)^3 z
We use the representation of β_1 from (e) and apply the change of variables y = z/a in the first part of the integral
∂_a β_1(a, 1)
= 1/4∂_a [ ∫_1^1/a1/y Λ(y,1) y - a ∫_a^1 1/z Λ(z,1) z ]
= a/Λ(1/a, 1)( -1/a^2) - ∫_a^1 1/z Λ(z,1) z + 1/Λ(a, 1)
= ∫_1^a 1/z Λ(z,1) z
= ∫_1^a log z/z (z - 1) z .
Therefore,
∂_a (β_1(a,1) - γ(a) ) = ∫_1^a [ log z/z (z - 1) - 2/(z+1)^3] z.
We are left to show that the integrand is positive, and then the bound follows. For z>1, the integrand is positive whenever
log z ≥8z(z-1)/(z+1)^3,
which can be shown again by comparing the derivatives
1/z - 8(-z^2 + 4z -1)/(z+1)^4 = (z-1)^2 (z^2 + 14z + 1)/z (z+1)^4 > 0 ∀ z >1.
Rough lower bound. This lower bound follows from the elementary identity
(a-b)^2/a+b = ( √(a) - √(b))^2 ( 1 + 2√(ab)/a +b) ≥( √(a) - √(b))^2.
(g) We apply the second-order Taylor expansion for a function f:
f(y) = f(x) + f'(x)(y-x) + (y-x)^2∫_0^1 f”((1-λ)x + λ y)(1-λ)λ
to expand the function α^*_ϵ
α^*_ϵ(a,b,-ϵlog√(b/a) + q/2) = α^*_ϵ(a,b,-ϵlog√(b/a)) + q/2 ∂_ξ(α^*_ϵ) (a,b,-ϵlog√(b/a))
+ q^2/4∫_0^1 (∂_ξ^2α^*_ϵ) (a,b,-ϵlog√(b/a) + λq/2)(1-λ)λ.
After some manipulation, we find that
(∂_ξα^*_ϵ)(a,b,-ϵlog√(b/a)) = ϵ/2(a-b),
(∂_ξ^2α^*_ϵ) (a,b,-ϵlog√(b/a) + q/2) = a 𝔥(q/ϵ) + b 𝔥(-q/ϵ),
with
𝔥(s) = 1/4e^s-1-s/sinh^2(s/2).
Hence,
α^*_ϵ(a,b,-ϵlog√(b/a) + q/2) = β_ϵ (a,b) + ϵ/4(a-b) q
+ q^2/4∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ,
therewith concluding the proof.
|
http://arxiv.org/abs/2306.07188v2
|
20230612154458
|
Fair Learning to Rank with Distribution-free Risk Control
|
[
"Ruocheng Guo",
"Jean-François Ton",
"Yang Liu"
] |
cs.LG
|
[
"cs.LG",
"cs.CY",
"cs.IR"
] |
ByteDance Research
UK
ruocheng.guo,[email protected]
ByteDance Research
USA
[email protected]
Learning to Rank (LTR) methods are vital in online economies, affecting users and item providers. Fairness in LTR models is crucial to allocate exposure proportionally to item relevance. Deterministic ranking models can lead to unfair exposure distributions when items with the same relevance receive slightly different scores. Stochastic LTR models, incorporating the Plackett-Luce (PL) model, address this fairness issue but have limitations in computational cost and performance guarantees. To overcome these limitations, we propose FairLTR-RC, a novel post-hoc, model-agnostic method. FairLTR-RC leverages a pretrained scoring function to create a stochastic LTR model, eliminating the need for expensive training. Furthermore, FairLTR-RC provides finite-sample guarantees on a user-specified utility using the distribution-free risk control framework.
To overcome these limitations, we propose FairLTR-RC, a novel post-hoc model-agnostic method. FairLTR-RC leverages a pretrained scoring function to create a stochastic LTR model, eliminating the need for expensive training. Furthermore, FairLTR-RC provides finite-sample guarantees on a user-specified utility using distribution-free risk control framework. By additionally incorporating the Thresholded PL (TPL) model, we are able to achieve an effective trade-off between utility and fairness.
Experimental results on several benchmark datasets demonstrate that FairLTR-RC significantly improves fairness in widely-used deterministic LTR models while guaranteeing a specified level of utility.
Fair Learning to Rank with Distribution-free Risk Control
Yang Liu
July 31, 2023
=========================================================
§ INTRODUCTION
Learning to rank (LTR) relies on machine learning to optimize rankings of items in applications such as search and recommendation <cit.>.
LTR models play a vital role in online multi-sided economies involving users, item providers, and the platform (e.g., an e-commerce website), where they govern the exposure of items and thereby influence the economic outcomes of entities such as sellers, job candidates, and content creators <cit.>.
An LTR model is typically composed of two components. The first component is a scoring function: given a query and a set of candidate items to be recommended, it predicts ranking scores for these items based on their predicted relevance to the user's query. The second component is a ranking model, which generates a ranked list of items using the scores from the first component.
Traditional LTR models generally employ deterministic ranking models, such as sorting items in descending order of their ranking scores.
Given the growing impact of LTR on online platforms, the demand for fair allocation of exposure among items <cit.> has significantly increased. In the current literature, fair allocation dictates that the exposure of an item in ranked lists should be proportional to its relevance to the query. However, deterministic ranking models often result in an unfair distribution of exposure.
For instance, with a pretrained scoring function that is not perfectly accurate, two items with identical relevance can receive slightly different ranking scores. With deterministic ranking models, this results in a severely unequal allocation of exposure, as the item with the higher ranking score is always placed at the higher position <cit.>.
In response to this inherent issue of deterministic LTR models w.r.t. exposure-based fairness, there has been a shift towards stochastic LTR models. One representative approach incorporates the Plackett-Luce (PL) ranking model <cit.>. The PL ranking model predicts a distribution over ranking lists based on the ranking scores. This enables us to sample multiple ranking lists from it, thereby significantly improving exposure fairness, especially in cases where multiple items have slightly different scores but the same relevance <cit.>.
However, challenges arise when integrating scoring functions from deterministic models into the PL model, as they are not designed to optimize the expected performance under predicted ranking distributions. In addition, training scoring functions with the PL model is computationally intensive, requiring computing gradients from numerous sampled ranking lists.
Finally, the lack of guarantees on ranking performance when we replace deterministic LTR models with the existing stochastic ones presents a significant obstacle to their widespread adoption in real-world applications.
To address these challenges, we present Fair Learning to Rank with Distribution-free Risk Control (FairLTR-RC), a post-hoc, model-agnostic approach for exposure-based fairness in LTR that incorporates the framework of conformal prediction <cit.> into the LTR setting.
Our proposed method incorporates a novel partially stochastic ranking model – the Thresholded PL (TPL) ranking model – which offers a tunable trade-off between fairness and utility in a post-hoc manner. TPL can work with pretrained scoring functions from deterministic LTR models, which circumvents the expensive training needed by existing stochastic LTR models.
In addition, FairLTR-RC provides theoretically supported finite-sample guarantees on utility, assuring a specified performance level even in limited-data settings by ensuring that the utility of the resulting LTR model does not fall below a predetermined threshold.
The contributions of this paper are as follows:
* First, we propose FairLTR-RC, a post-hoc, model-agnostic method that efficiently transforms pretrained scoring functions from deterministic LTR models into stochastic ones, thus avoiding expensive training procedures.
* Second, our method extends distribution-free risk control to LTR. FairLTR-RC achieves a specified level of utility with high probability, despite its stochastic nature.
* Third, extensive experimental results on popular LTR datasets show that FairLTR-RC enhances fairness of various scoring functions (CatBoost <cit.>, LightGBM <cit.>, and Neural Networks) pretrained with deterministic ranking models, while maintaining the specified level of utility.
§ PRELIMINARIES
In this section, we begin by outlining the notation used throughout the paper. Next, we provide a formal definition of a LTR model, which consists of a scoring function and a ranking model. Following that, we introduce definitions for utility and exposure-based fairness measures within the context of Learning to Rank (LTR). Lastly, we conclude this section by presenting our problem statement.
Notations.
For a query q, there exists n_q candidate documents 𝒟^q = {d_1^q,...,d_n_q^q} to be ranked. Each document d_i^q is described by a tuple (𝐱_i^q,ρ(d_i^q)), where the feature vector 𝐱_i^q ∈𝒳 describes the item and its relationship to the query q. For example, features used in e-commerce search can include the price of the item and the average price of items clicked from the query.
Here, ρ(d_i^q) denotes the relevance of document d_i^q annotated by human experts.
We assume that the relevance is given for each item corresponding to the queries in the training, validation and calibration set, but unknown for the test set.
For simplicity of notation, we will omit the subscript i and the superscript q when they are not necessary.
A top-K ranking 𝐲=[y_1,...,y_K] ∈𝒴 is a sorted list of K items, where y_k=d means that item d is ranked at the k-th position in 𝐲 and 𝒴 is the space of permutations.
Let 𝐲_1:k be the sublist including first k≤ K elements of 𝐲.
Scoring Function and Ranking Model.
Here, we formally define the scoring function and the ranking model of a LTR model.
First, given query q and its item set 𝒟^q, a scoring function f:𝒳→ℝ maps the feature vector of each item d to its ranking score s(d).
In this work, we assume that the scoring function f is fixed and ranking scores s(d) are given for d ∈𝒟^q.
Second, a ranking model π : ℝ^n_q×𝒴→ [0,1] maps the scores of all items in 𝒟^q and a ranking to the probability of sampling that ranking.
Thus, a ranking model π({s(d_1),...,s(d_n_q)},·) predicts a distribution over rankings for each query q. For simplicity of notation, we denote the predicted distribution of rankings for query q as π^q(·).
A deterministic LTR model then has π^q(𝐲)=1 for a certain ranking 𝐲 and π^q(𝐲')=0 for all 𝐲'≠𝐲, while a stochastic model can have π^q(𝐲) > 0 for multiple different rankings 𝐲.
The PL ranking model is adopted to improve exposure-based fairness <cit.>.
PL models predict a distribution of rankings for fairer allocation of exposure among items.
Given query q, the set of items 𝒟^q with ranking scores s(d), d∈𝒟^q, and the items already sampled for positions 1,...,k-1, denoted by 𝐲_1:k-1, the PL
ranking model <cit.> samples a ranking 𝐲 from π_PL(𝐲) = ∏_k=1^K p_PL(y_k|𝐲_1:k-1), where the item d for position k is drawn with probability
p_PL(d|𝐲_1:k-1) = 1(d∉𝐲_1:k-1)exp(s(d)/τ)/∑_d'∈𝒟^q∖𝐲_1:k-1exp(s(d')/τ),
where τ is the temperature.
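To make the sampling procedure concrete, the following is a minimal NumPy sketch (the function and variable names are ours, not from the paper) of drawing one top-K ranking from the PL model by sampling positions sequentially:

```python
import numpy as np

def sample_pl_ranking(scores, K, tau=1.0, rng=None):
    """Draw one top-K ranking from the Plackett-Luce model with scores s(d)."""
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(range(len(scores)))
    ranking = []
    for _ in range(min(K, len(scores))):
        logits = np.array([scores[d] for d in remaining]) / tau
        probs = np.exp(logits - logits.max())   # softmax over the remaining items
        probs /= probs.sum()
        pick = int(rng.choice(len(remaining), p=probs))
        ranking.append(remaining.pop(pick))
    return ranking
```

Repeating this procedure yields multiple ranking lists per query, which is exactly what makes training with the PL model costly.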
However, training such stochastic LTR models is expensive: it requires sampling at least 100 ranking lists for each query <cit.> and computing gradients of the model parameters based on these samples.
Utility and Fairness Metrics.
Given the definitions above, here, we define the utility and exposure-based fairness for a LTR model.
In LTR, the utility function considers the position of each item by weighting each position k with a weight θ_k.
The utility of a ranking model π on query q can be defined as <cit.>:
U^q(π) = ∑_𝐲∈𝒴π^q(𝐲) ∑_k=1^K θ_k ·ρ(y_k),
which leads to the overall utility U(π) = 𝔼_q[U^q(π)].
If we choose θ_k=1[k≤ K]/log_2(1+k), then U^q(π) is DCG@K.
Let iDCG@K be the maximal DCG@K for a given query q; then U^q(π) is NDCG@K if θ_k = 1[k≤ K]/(log_2(1+k)·iDCG@K), where θ_k measures the normalized exposure of the item ranked at position k.
In this work, we consider a bounded utility function U^q(π)∈ [0,1]. Thus, the utility risk to be controlled is R_util(π) = 1-U(π), e.g., 1-NDCG@K.
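For reference, the following sketch shows how the utility reduces to NDCG@K for a single ranking; for a stochastic model, U^q(π) is the average of this quantity over rankings sampled from π^q (the helper names are ours):

```python
import numpy as np

def dcg_at_k(ranking, relevance, K):
    """DCG@K = sum_k rel(y_k) / log2(1 + k) over positions k = 1..K."""
    return sum(relevance[d] / np.log2(1 + k)
               for k, d in enumerate(ranking[:K], start=1))

def ndcg_at_k(ranking, relevance, K):
    """NDCG@K: DCG@K normalized by the ideal DCG@K of the same query."""
    ideal = sorted(range(len(relevance)), key=lambda d: -relevance[d])
    idcg = dcg_at_k(ideal, relevance, K)
    return dcg_at_k(ranking, relevance, K) / idcg if idcg > 0 else 0.0
```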
Fairness in ranking deals with the allocation of exposure over items.
Exposure measures the probability of users to examine a certain position.
The widely used utility metric NDCG@K is based on the logarithmic reduction of exposure proportional to the position.
To measure item exposure fairness, we first define exposure of item d under the ranking model π as
ℰ^q(d;π) = ∑_𝐲π^q(𝐲) ∑_k=1^K θ_k ·1[y_k=d],
where 1[y_k=d]θ_k is the exposure of item d in the ranking 𝐲.
Intuitively, it measures the mean exposure of item d in the rankings sampled from the predicted
distribution π^q(𝐲).
Let ℰ(d) denote ℰ^q(d;π) when q and π can be dropped.
Based on this, we can define a disparity measure for exposure-based fairness in ranking.
Exposure-based Fairness in Ranking <cit.>.
In this work, we focus on fair allocation of exposures to items.
Singh and Joachims <cit.> first propose a fairness notion: the exposure of an item ℰ(d) should be proportional to its relevance ρ(d).
They compute the average difference of the exposure-relevance ratio ℰ(d)/ρ(d) - ℰ(d')/ρ(d') between each pair of items for each query.
Oosterhuis <cit.> proposes a variant of this disparity metric, which handles the cases of items with zero relevance that are nevertheless ranked in the top-K.
Given the ranking model π, the disparity measure for exposure-based fairness is defined as <cit.>
R_fair^q(π) = 2∑_d∈𝒟^q∑_d'∈𝒟^q_∖ d ℓ(ℰ^q(d;π)ρ(d'),ℰ^q(d';π)ρ(d))/(|𝒟^q|(|𝒟^q|-1)),
where 𝒟^q_∖ d denotes 𝒟^q ∖{d} and ℓ(a,b) is (a-b)^2.
Let R_fair(π)
=𝔼_q[R_fair^q(π)] be the expectation over queries.
Intuitively, R_fair^q(π) measures how far the exposure of items under the ranking model π is from the ideal case where exposure is proportional to relevance, i.e., ℰ(d)/ρ(d) = ℰ(d')/ρ(d') for all pairs of items d,d'∈𝒟^q.
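The exposure and disparity defined above can be estimated by Monte-Carlo sampling from the ranking model. The sketch below (our own naming, with θ_k = 1/log_2(1+k)) follows the two equations above literally:

```python
import numpy as np

def expected_exposure(sampled_rankings, n_items, K):
    """Monte-Carlo estimate of E^q(d; pi): mean position weight received by each item."""
    exposure = np.zeros(n_items)
    for ranking in sampled_rankings:
        for k, d in enumerate(ranking[:K], start=1):
            exposure[d] += 1.0 / np.log2(1 + k)   # theta_k
    return exposure / len(sampled_rankings)

def squared_disparity(exposure, relevance):
    """R_fair^q with l(a, b) = (a - b)^2, summed over ordered pairs d != d'."""
    n = len(exposure)
    total = sum((exposure[d] * relevance[e] - exposure[e] * relevance[d]) ** 2
                for d in range(n) for e in range(n) if e != d)
    return 2.0 * total / (n * (n - 1))
```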
Problem Statement.
This work considers the following real-world setting.
For a given pretrained scoring function f, we aim to improve the exposure-based fairness of the LTR model via a post-hoc method while maintaining a satisfactory level of utility (e.g., NDCG@K no less than a certain level) with high probability.
Given query q, the candidate items 𝒟^q, and a fixed scoring function f, the goal is to optimize the ranking model π to minimize the disparity with a simultaneous guarantee on the utility, i.e.,
min_π R_fair(π) s.t. P(R_util(π) ≤α) ≥ 1-δ,
where α∈ (0,1) is the desired risk level (equivalently, 1-α is the desired utility level), and 1-δ∈ (0,1) is the desired coverage rate.
§ METHODOLOGY
In this section, the background of distribution-free risk control is introduced, followed by the proposed framework.
§.§ Background: Distribution-free Risk Control
Distribution-free risk control is a post-hoc model-agnostic method based on split conformal prediction.
It uses a calibration set 𝒬_cal to determine the value of a set of model parameters such that a specified risk is controlled.
Let 𝒯(λ) be a set-valued function with a scalar parameter λ (e.g., a threshold on item scores) that predicts a set of items. Given a bounded risk function R(𝒯(λ)) ∈ [0,B]
that measures the expected loss of 𝒯(λ) over queries, we define a Risk Controlling Prediction Set <cit.>. For simplicity, let R(λ) denote R(𝒯(λ)).
Distribution-free risk control <cit.> uses an independent and identically distributed (i.i.d.) data split as the calibration set 𝒬_cal to select λ for the set-valued functions 𝒯(λ) s.t. the risk function R is guaranteed on the test set 𝒬_test.
In our setting, set-valued functions predict a set of items for each position in the ranking. The connection between set-valued functions and ranking models is shown in Section <ref>.
Risk Controlling Prediction Set <cit.>.
Given a desired risk level α∈ [0,B] and tolerance rate δ∈ (0,1), a set-valued function 𝒯(λ) is a (α,δ) risk-controlling prediction set iff
P(R(λ)≤α) ≥ 1-δ
Intuitively, in our setting, this means the probability of observing the risk function R ≤α is at least 1-δ across repeated runs with different random data splits when the set-valued function 𝒯(λ) is applied.
Then, the following assumptions are employed.
* Nesting Properties.
λ < λ' ⇒𝒯(λ) ⊂𝒯(λ'), 𝒮⊂𝒮' ⇒ R(𝒮) ≥ R(𝒮')
* Existence of an upper confidence bound (UCB) R̂^+(λ) for the risk function. It satisfies
P(R(λ) ≤R̂^+(λ)) ≥ 1 - δ.
Under the aforementioned assumptions, Bates et al. <cit.> propose to select the threshold λ̂ on the calibration set 𝒬_cal s.t. 𝒯(λ̂) is an (α,δ) risk-controlling prediction set. Intuitively, they select the λ s.t. any λ' ≥λ leads to a UCB smaller than the desired level α, as
λ̂ = inf{λ∈Λ : R̂^+(λ') < α, ∀λ' ≥λ},
Then, <cit.> extends risk control to cases where the nesting properties are violated, which also allows multi-dimensional thresholds.
The crux of <cit.> is hypothesis testing via the duality of p-values and concentration inequalities, which selects a threshold by rejecting its corresponding null hypothesis R(λ) > α with a p-value smaller than δ, where λ is the vector representing a multi-dimensional threshold. In Section <ref>, we present concrete instantiations of both UCB- and p-value-based risk control for ranking.
It can be infeasible to control risk at every level of α for every data distribution <cit.>.
For example, guaranteeing NDCG@K≥ 0.9 may be unattainable given a subpar fixed scoring function f, where risk control methods should abstain from returning a threshold.
§.§ Thresholded PL Ranking Model
Here, we propose the Thresholded PL (TPL) ranking model. Applying TPL on top of pretrained scoring functions from deterministic LTR models achieves an effective utility-fairness trade-off.
The TPL model is built upon set-valued functions, which enables distribution-free risk control for LTR.
With parameters of the set-valued functions obtained from risk control algorithms, the TPL model provides a guarantee on a specified risk function.
Suppose we have access to a risk control score s̃(d) for each document d, which approximates the relevance of the item. We let s̃(d) be a function of the ranking score s(d) provided by the fixed scoring function f; concretely, we choose the probability of sampling item d at the first position in the PL model, p(d|∅,0).
More detailed discussion on the choice of risk control score can be found in Appendix <ref>.
For each position k, the TPL ranking model uses a set-valued function 𝒯(λ_k) to select items whose predicted scores are high enough for that position, where λ_k is the threshold parameter for position k:
𝒯(λ_k) = {d ∈𝒟^q∖𝐲_1:k-1 | s̃(d) ≥λ_k},
where 𝐲_1:k-1=∅ if k=1.
For each position k, TPL creates a distribution of the items selected based on the set-valued function 𝒯(λ_k) defined in Eq. (<ref>) and then combines them to predict a distribution of rankings as:
p(d|𝐲_1:k-1,λ_k) = 1(d∈𝒯(λ_k))exp(s(d)/τ)/∑_d'∈𝒯(λ_k)exp(s(d')/τ),
π(𝐲) = ∏_k=1^K p(y_k|𝐲_1:k-1,λ_k),
When λ_k takes extreme values, the TPL model reduces to the PL and the deterministic ranking model. Specifically, when λ_k=0 and when λ_k ≥max({s̃(d)}_d∈𝒟^q∖𝐲_1:k-1) for k=1,...,K, TPL is equivalent to the PL and the deterministic ranking model, respectively.
We verify this empirically in Section <ref> (see Fig. <ref>).
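A minimal sketch of sampling from the TPL model is given below; it assumes the risk-control scores s̃(d) and one threshold per position, and it falls back to the highest-scored item when the prediction set is empty, mirroring the reduction to the deterministic ranker (the names and this fallback are our own choices):

```python
import numpy as np

def sample_tpl_ranking(scores, rc_scores, thresholds, K, tau=1.0, rng=None):
    """Draw one top-K ranking from the Thresholded Plackett-Luce model."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.broadcast_to(np.asarray(thresholds, dtype=float), (K,))
    remaining = list(range(len(scores)))
    ranking = []
    for k in range(min(K, len(scores))):
        allowed = [d for d in remaining if rc_scores[d] >= lam[k]]
        if not allowed:                                   # empty prediction set:
            allowed = [max(remaining, key=lambda d: scores[d])]  # behave deterministically
        logits = np.array([scores[d] for d in allowed]) / tau
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        pick = allowed[int(rng.choice(len(allowed), p=probs))]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking
```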
In conformal prediction, it is often desired to have a small prediction set size |𝒯(λ_k)|, which is not the case here.
To achieve the goal described in the problem statement (Eq. (<ref>)), our method first finds a set of λ_k that is large enough to maintain a guaranteed level of utility and then adopts the minimal λ_k in the set to optimize exposure-based fairness.
More specifically, we aim to minimize R_fair under the constraint that the utility is guaranteed, which is equivalent to maximizing |𝒯(λ_k)| with items whose scores are at least λ_k.
We discuss selecting λ_k through distribution-free risk control in Section <ref>.
With the set-valued function 𝒯(λ_k), TPL can adapt to the distribution of scores to achieve this goal, in contrast to the PL model, which samples any item from 𝒟^q∖𝐲_1:k-1, and the deterministic model, which only takes the item with the highest score.
When there are multiple items with high and similar scores, it is desired to include all of them in the prediction set. When there are two items with scores much higher than others, the prediction set should only include them.
§.§ Risk Control with Thresholded PL Model
Here, we describe the distribution-free risk control algorithm that selects thresholds for the TPL model to provide provable guarantees on the bounded risk function R_util.
Given user-specified desired utility level 1-α for a bounded list-wise utility function U (e.g., NDCG@K),
our method utilizes the calibration set 𝒬_cal to find thresholds that provide guaranteed utility.
Note that existing post-hoc methods for exposure-based fairness <cit.> are not able to provide such guarantees as they blindly optimize a weighted combination of utility and fairness objectives.
Selecting Thresholds via Risk Control.
We leverage distribution-free risk control <cit.> to learn thresholds λ=[λ_1,...,λ_K] for top-K positions s.t. the risk of the ranking model is under control, i.e., P(R_util(π) ≤α) ≥ 1-δ, where R_util(π) = R_util(π(λ)).
When the threshold is a vector, the nesting properties may not hold.
The risk control algorithm works as follows. First, we specify a search space Λ for the thresholds. Each value λ∈Λ corresponds to a null hypothesis
R_util(λ) > α.
Then, for each value of λ, we test the null hypothesis on the calibration set 𝒬_cal, which is assumed to be exchangeable with the test data 𝒬_test <cit.>.
Specifically, we aim to obtain the rejected values of λ through the hypothesis testing, which is associated with a specified UCB R̂^+ for the risk R_util as
Λ̂ = {λ∈Λ | R̂^+(λ) < α}
Finally, we choose λ̂ = argmin_λ∈Λ̂ R_fair(λ) to optimize fairness.
However, the computation can be prohibitive if a brute-force grid search is performed to test all possible values of λ from a predefined grid with M values for each λ_k. This requires computing R̂^+(λ) M^K times, each of which includes computing the risk R_util on the calibration set.
In this work, we overcome this issue by limiting the search space of λ with a heuristic, where we simply let each position use the same threshold λ, which empirically performs well.
Risk Control for LTR.
Here, we provide concrete instantiations of the UCB <cit.> and p-value-based risk control <cit.>, which are crucial for determining the risk-controlling thresholds Λ̂.
First, using the duality of concentration inequalities and p-values in hypothesis testing, we adopt the widely used Hoeffding-Bentkus (HB) inequality <cit.>, which combines the two inequalities by taking the minimum of their p-values. The p-value associated with the HB inequality is a function of the mean risk R̂_util(λ) on the calibration set and the number of queries in the calibration set |𝒬_cal|:
p^HB(λ) = min (exp(-|𝒬_cal| h_1(R̂_util(λ)∧α,α)),
exp(1)× P(Bin(|𝒬_cal|,α)≤⌈ |𝒬_cal|R̂_util(λ)⌉)),
where h_1(a,b) = a log (a/b)+(1-a)log((1-a)/(1-b)), Bin(n,α) is the Binomial distribution, and ⌈ a ⌉ takes the ceiling of the scalar a.
Given the Hoeffding-Bentkus p-values computed by Eq. (<ref>), we can obtain the set of selected thresholds Λ̂={λ∈Λ| p^HB(λ) < δ} and then take the minimal λ∈Λ̂, as it heuristically minimizes the disparity measure R_fair.
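As an illustration, this selection procedure can be sketched as follows, assuming the single shared threshold heuristic described earlier (the helper names are ours):

```python
import numpy as np
from scipy.stats import binom

def hb_p_value(mean_risk, n_cal, alpha):
    """Hoeffding-Bentkus p-value for the null hypothesis R_util(lambda) > alpha."""
    r = min(max(mean_risk, 1e-12), alpha)                 # clip for numerical stability
    h1 = r * np.log(r / alpha) + (1 - r) * np.log((1 - r) / (1 - alpha))
    p_hoeffding = np.exp(-n_cal * h1)
    p_bentkus = np.e * binom.cdf(np.ceil(n_cal * mean_risk), n_cal, alpha)
    return min(p_hoeffding, p_bentkus)

def select_threshold_hb(candidate_lambdas, empirical_risk, n_cal, alpha, delta):
    """Reject the null for every lambda with p-value < delta and take the smallest
    accepted lambda (larger prediction sets, hence better fairness); None = abstain."""
    accepted = [lam for lam in candidate_lambdas
                if hb_p_value(empirical_risk(lam), n_cal, alpha) < delta]
    return min(accepted) if accepted else None
```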
Second, besides the UCBs introduced in <cit.>, we adopt a theory-backed UCB based on the DKWM inequality <cit.> for risk functions taking discrete values (e.g., R_util=1-NDCG@K).
R̂^+(λ) = R̂_util(λ) + C·√(ln (2/δ)/(2· |𝒬_cal|)),
where C is a constant that depends on the set of loss values – we specify the details and provide the proof in Appendix <ref>.
With such a UCB, we can search for the minimal λ∈Λ that satisfies R̂^+(λ) < α (as in Eq. (<ref>)) on the calibration set and obtain the guarantee by Theorem 1 of <cit.>. R_util(λ) may not satisfy the nesting assumption (Eq. (<ref>)), but we can find R̃_util(λ)= max_λ'≤λ R_util(λ') ≥ R_util(λ) that satisfies the assumption <cit.>. Then we can apply the UCBs on R̃_util.
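A corresponding sketch for the DKWM-based selection, with the constant written out as the ∑_k|l_k - l_{k+1}| term derived in the appendix (the monotonization step is omitted for brevity; the names are ours):

```python
import numpy as np

def dkwm_ucb(mean_risk, n_cal, delta, loss_levels):
    """UCB on R_util from the DKWM inequality; loss_levels are the discrete loss values l_1..l_K."""
    c = np.abs(np.diff(np.asarray(loss_levels, dtype=float))).sum()
    return mean_risk + c * np.sqrt(np.log(2.0 / delta) / (2.0 * n_cal))

def select_threshold_dkwm(candidate_lambdas, empirical_risk, n_cal, alpha, delta, loss_levels):
    """Smallest lambda whose UCB stays below alpha; None means abstain."""
    accepted = [lam for lam in candidate_lambdas
                if dkwm_ucb(empirical_risk(lam), n_cal, delta, loss_levels) < alpha]
    return min(accepted) if accepted else None
```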
§ EXPERIMENTS
In this section, we perform experiments on popular LTR benchmark datasets with various pretrained scoring functions to answer the following research questions:
* RQ1: Can FairLTR-RC achieve an effective trade-off between utility and exposure-based fairness?
* RQ2: Can FairLTR-RC achieve a high marginal coverage rate on the risk function and improve exposure-based fairness at the same time?
§.§ Experimental Setup
Datasets.
We consider two popular publicly available datasets for LTR: Yahoo!Webscope (Yahoo) <cit.> and MSLR-WEB30k (MSLR) <cit.>.
Table <ref> shows the statistics describing the widely used datasets for evaluating LTR models. We observe that Yahoo has more features and MSLR has more queries and much more items per query.
These datasets consist of queries, their associated documents, and relevance labels in 0-4 indicating the expert-judged relevance between an item and a query. Each feature vector represents a query-document pair.
Similar to <cit.>, to compute the coverage rate, we repeat the experiment 50 times by randomly splitting the original test set into calibration (25%) and test sets (75%).
The scoring functions are pretrained on the training set and model selection is done by maximizing NDCG@5 on the validation set.
Then, for risk control, the threshold λ is selected based on UCB or p-value computed on the calibration set.
We compare the proposed TPL ranking model with the deterministic and PL ranking models, reporting the mean and standard deviation of test performance over these 50 runs.
Finally, we follow <cit.> to ignore all queries with no relevant documents for fair evaluation.
It avoids arbitrarily assigning NDCG@K=1 or 0 to such queries with any ranking, to prevent unfair comparisons.
Thus, the NDCG@K values reported in this work are lower than those in the literature.
Scoring Functions.
We use CatBoost <cit.>, LightGBM (LGB) <cit.>, and a Neural Network (NN) as the pretrained scoring functions.
The NN is a three-layer MLP with sigmoid activation trained with LambdaLoss <cit.>.
On top of the pretrained scoring functions, we apply the ranking models (deterministic, PL and TPL). More details about the experimental setup can be found in Appendix <ref>.
To make the coverage results more comprehensive, we also apply our method to state-of-the-art stochastic LTR models that train the scoring function on top of the PL model, including PL-Rank-3 <cit.>, StochasticRank <cit.>, and Policy Gradient <cit.>; the results are in Appendix <ref>.
Evaluation Metrics.
We consider the widely used NDCG@5 as the utility metric.
For exposure-based fairness, we follow <cit.> and adopt Eq. (<ref>) with ℓ(a,b)=(a-b)^2 to measure the mean squared disparity R_sq-fair.
It measures how the assigned exposure of an item is different from being proportional to its relevance.
For the guarantee, we repeat the experiment T=50 times and report the marginal coverage on the test sets with thresholds λ̂ selected by the risk control algorithms as
(1/T)∑_t=1^T 1(R_util(λ̂)≤α).
We choose α based on the performance of the original deterministic model.
§.§ Experimental Results
Trade-off Results.
We first verify that the proposed TPL ranking model can achieve an effective trade-off between utility and fairness.
As shown in Fig. <ref>, when the threshold λ increases, the utility (risk) of the TPL ranking model increases (decreases) while the disparity measure increases.
In addition, we verify the claim that TPL model can reduce to the PL and deterministic model.
When λ=0, the TPL model reduces to the PL ranking model (Sto). When λ≥max(s_d), TPL reduces to the deterministic ranking model (Det).
Coverage and Fairness Improvement.
Let U^* and R_sq-fair^* be the NDCG@5 and mean squared disparity of the pretrained deterministic LTR model, respectively. Results in Table <ref> show that, with risk control based on Hoeffding-Bentkus, in at least 48 out of 50 runs (≤ 2 abstentions), our method achieves 100% coverage (NDCG@5≥ 1-α). At the same time, our method improves fairness significantly, with at least a 13.29% drop in R_sq-fair.
In practice, when the risk control method abstains from selecting any thresholds, we can set the threshold λ=1 to make TPL reduce to the deterministic ranking model.
Fig. <ref> in Appendix <ref> shows the distribution of NDCG@5 with thresholds selected by the Hoeffding-Bentkus and DKWM inequalities.
§ RELATED WORK
Stochastic Ranking and Exposure-based Fairness.
Stochastic LTR models were initially adopted to address the challenge of optimizing ranking metrics, which are flat or discontinuous <cit.>, where it is shown that training scoring functions with the PL ranking model improves their ranking performance.
In addition, the PL ranking model can also enable exploration, as it explicitly estimates the uncertainty of the scoring function through the probability distribution over rankings <cit.>.
This can be helpful when there exist samples (e.g., users, queries, and items) with few interactions.
Recently, stochastic LTR models are adopted to improve exposure-based fairness.
Singh et al. <cit.> proposed the notion of exposure-based fairness and a policy gradient algorithm to train LTR models to optimize a combination of utility and fairness.
<cit.> evaluated two types of stochastic ranking models including the PL model w.r.t. exposure-based fairness based on various click models.
<cit.> improves the efficiency of training scoring functions with the PL model for relevance and fairness.
Different from these works, our method transforms a pretrained scoring function from a deterministic LTR model into a stochastic one.
Distribution-free Risk Control for Ranking.
Distribution-free risk control is based on split conformal prediction <cit.>.
Bates et al. <cit.> propose a method to predict an interval of a pair-wise score of items with guaranteed confidence. It abstains from predicting unconfident pairs. However, it does not directly provide guarantees on list-wise ranking performance.
Angelopoulos et al. <cit.> apply Learn then Test <cit.> to the recall stage of recommendation systems, which predicts sets with items with scores higher than a threshold, with a guarantee on the expected ratio of false positives.
Wang et al. <cit.> propose a method based on <cit.> to select a threshold with a marginal guarantee on the number of candidates from each group while minimizing the prediction set size for each query, which is further extended to the scenario with noisy and biased implicit feedback (e.g., clicks) <cit.>.
Different from the existing work, this work focuses on providing guarantee on widely used list-wise ranking metrics (e.g., NDCG@K).
§ CONCLUSION
In this work, we propose FairLTR-RC, a post-hoc, model-agnostic method for Fair Learning to Rank (LTR). It creates a stochastic LTR model with improved exposure-based fairness from a scoring function pretrained with a deterministic LTR model.
With distribution-free risk control, our method provides a guarantee on a user-specified utility function.
The integration of the Thresholded Plackett-Luce (TPL) model balances utility and fairness.
FairLTR-RC avoids expensive training and provides guarantees on a specified metric based on distribution-free risk control. Results on benchmark datasets verify the effectiveness of our proposed method, improving fairness of state-of-the-art deterministic models while ensuring a predefined level of utility.
Despite its promising results, this work is not without limitations.
FairLTR-RC may abstain from selecting thresholds with subpar pretrained scoring functions, small calibration sets, conservative bounds in risk control methods, and when α is too small.
§ PROOF
We will use the DKWM inequality to derive a general UCB with discrete risk R:
Given a natural number n, let Z_1, Z_2,..., Z_n be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let F_n denote the associated empirical distribution defined by
F_n(z) = 1/n∑_i 1(Z_i ≤ z), z ∈ℝ
Then ∀δ∈ (0,1), with probability at least 1-δ,
sup_z ∈ℝ |F(z) - F_n(z)| ≤√(ln(2/δ)/(2n)).
In order to apply the aforementioned lemma, we have to show that the risk function R^q(λ) = (1/m)∑_j=1^m l(𝐲^j, 𝐲^*) can be equivalently written in terms of indicator random variables w.r.t. a certain threshold λ.
More specifically, we take 1-NDCG@K as the risk function R and l as the 0-1 loss function.
Each threshold λ corresponds to a certain probability that the best item is shown at the top position, which is denoted as z(λ). Denote by Z_j a uniform random variable defined on [0,1]. Then
l(𝐲^j, 𝐲^*) = 1(Z_j ≤ z(λ)),
and therefore
R^q(λ) =1/m∑_j=1^m 1(Z_j ≤ z(λ)).
The lemma therefore applies. Since the lemma covers all z ∈ℝ, it will certainly cover z(λ).
Then we show how to generalize the argument to the case with multiple but discrete loss levels. Suppose the loss values are l_1, l_2, ... , l_K and the chance of incurring a loss l_k is captured by an interval [z_{k-1}(λ), z_k(λ)) with z_K(λ) = 1:
[0, z_1(λ)), [z_1(λ),z_2(λ)),..., [z_{K-1}(λ), 1).
Then
l(𝐲^j, 𝐲^*) = ∑_k=1^K-1 (l_k - l_{k+1})· 1(Z_j ≤ z_k(λ)) + l_K,
since, when Z_j falls into [z_{k-1}(λ), z_k(λ)), the indicators are one exactly for the terms with index at least k, and the telescoping sum evaluates to l_k.
Then, the empirical CDF can be written as
F_m(z_k(λ)) := (1/m)∑_j=1^m 1(Z_j ≤ z_k(λ)).
With this we have
R^q(λ) = ∑_k=1^K-1 (l_k - l_{k+1})·(1/m)∑_j=1^m 1(Z_j ≤ z_k(λ)) + l_K
= ∑_k=1^K-1 (l_k - l_{k+1})· F_m(z_k(λ)) + l_K.
Then we can derive the following inequality:
| R^q(λ) - E[R^q(λ)]|
= |(∑_k=1^K-1 (l_k - l_{k+1})· F_m(z_k(λ)) + l_K) - (∑_k=1^K-1 (l_k - l_{k+1})· F(z_k(λ)) + l_K)|
= |∑_k=1^K-1 (l_k - l_{k+1})·(F_m(z_k(λ)) - F(z_k(λ)))|
≤∑_k=1^K-1 |l_k - l_{k+1}|· |F_m(z_k(λ)) - F(z_k(λ))|.
Since, with probability at least 1-δ, for all z we have
|F_m(z) - F(z)| ≤√(ln(2/δ)/(2m)),
it follows that
| R^q(λ) - E[R^q(λ)]| ≤∑_k=1^K-1 |l_k - l_{k+1}| ·√(ln(2/δ)/(2m)), ∀λ.
P-value for the DKWM Concentration Inequality.
Here, we provide a discussion on the p-value corresponding to the DKWM inequality based UCB.
The DKWM inequality does not naturally come with a p-value, as it bounds the difference between the empirical CDF F_m(z) and the true one F(z).
The p-value upper bounds the probability of observing a deviation at least as large as the observed deviation.
In our case, while we can compute F_m(z) and √(ln(2/δ)/(2m)), the true CDF F(z) is unknown. The DKWM inequality tells us that, for any z,
p(E[R(λ)]>R̂(λ)+∑_k=1^K-1|l_k-l_{k+1}|√(ln(2/δ)/(2m))) ≤δ / 2,
where R̂ is the mean risk computed on the calibration set 𝒬_cal.
Then, we are interested in the p-value, which refers to the probability that a user-specified risk level α is larger than the true mean of the risk function R(λ):
p(E[R(λ)] < α).
Intuitively, if we know p(E[R(λ)] < α), then we can choose the threshold λ accordingly to ensure that p(E[R(λ)] < α) ≥ 1 - δ to achieve the constraint in our problem statement in Eq. (<ref>).
Without any assumption on the distribution of the true mean E[R(λ)], this implies that, for all α≥R̂(λ)+2∑_k=1^K-1|l_k-l_{k+1}|√(ln(2/δ)/(2m)), we can guarantee
p(E[R(λ)] < α) ≥ 1-δ/2.
This is equivalent to
p(E[R(λ)] < α) ≥ 1-δ,
with α≥R̂(λ)+2∑_k=1^K-1|l_k-l_{k+1}|√(ln(1/δ)/(2m)).
By assuming a distribution for R(λ) (e.g., a normal distribution), we can compute this p-value from the CDF of R(λ), which further enables us to combine it with the existing p-value provided by Hoeffding-Bentkus.
§ EXPERIMENTAL SETUP DETAILS
CatBoost <cit.> and LightGBM <cit.>. We perform a grid search following the benchmark repository of CatBoost for LTR[<https://github.com/catboost/benchmarks/tree/master/ranking>]. In particular, we search the following hyperparameters: learning rate in {0.03, 0.07, 0.15, 0.3}, max_bin in {64,128,254}, and max_depth in {4,6,8,10}. We select the best model based on NDCG@5 on the validation set.
Neural Network Scoring Function. We follow the open-source code[<https://github.com/HarrieO/2022-SIGIR-plackett-luce>] of <cit.> to train the NN base model for our method and reproduce results of the state-of-the-art stochastic LTR models.
Specifically, we use a three-layer MLP with sigmoid activation. The number of hidden units is 32 and the batch size is 256. Each model is trained for 100 epochs, and a checkpoint is saved after each epoch. We select the checkpoint with the highest NDCG@5 on the validation set.
§ COMPLETE EXPERIMENTAL RESULTS
Here, we demonstrate the complete experimental results.
§.§ Complete Trade-off Results
Here, Fig. <ref> shows the complete results of utility fairness trade-off for the two datasets with all three scoring functions pretrained with deterministic LTR models.
§.§ Complete Coverage Results
Here, we first show the coverage rates obtained with the DKWM inequality. We can observe that, compared to Hoeffding-Bentkus, the DKWM inequality leads to a less tight UCB. This explains why DKWM results in higher NDCG@5, a smaller drop in the disparity measure R_sq-fair, and a larger number of abstentions. We leave finding a bound tighter than Hoeffding-Bentkus to future work.
Then, for the two datasets, we show more detailed results on coverage – the distributions of NDCG@5 with all three scoring functions in Fig. <ref>.
We make the following observations.
* With thresholds selected by both Hoeffding-Bentkus and DKWM, our method achieves a 100% coverage rate, providing the guarantee that NDCG@5≥ 1- α on both datasets with all three scoring functions.
* The UCB of DKWM is less tight than that of Hoeffding-Bentkus, leading to a more conservative selection of thresholds, higher NDCG@5, a smaller improvement in fairness, and a greater number of abstentions.
§.§ Coverage Results for Stochastic LTR models
Here, we illustrate the complete coverage results for CatBoost, LightGBM, LambdaLoss, and the state-of-the-art stochastic LTR models including PL-Rank-3 <cit.>, StochasticRank <cit.>, and Policy Gradient <cit.>.
With the results shown in Table <ref>, we make the following observations:
* First, the proposed method achieves an effective utility-fairness trade-off while maintaining the guarantee on utility. With the scoring functions from the pretrained stochastic LTR models, our method improves exposure-based fairness by at least 30.52% on Yahoo and 26.14% on MSLR, with the guarantee that NDCG@5 is at least 90% of that obtained with the same scoring function on top of a deterministic ranking model.
* Second, with PL-Rank-3, our method achieves the most effective trade-off, with 35.05% and 29.61% improvements in exposure-based fairness measured by R_sq-fair. We conjecture that this is because the scoring function pretrained by PL-Rank-3 maintains the best ranking performance.
§ SCORE FOR DISTRIBUTION-FREE RISK CONTROL
Risk control (RC) score is a crucial design choice for distribution-free risk control <cit.>.
In TPL, RC score determines whether an item would be included in the prediction set 𝒯(λ_k) for position k.
A natural choice of RC score is the expected position-wise sampling probability 𝔼_𝐲_1:k-1[p(d|𝐲_1:k-1,0)] in the PL model, which is a calibrated measure of the predicted relevance of item d for position k.
However, it can be computationally expensive to compute p(d|𝐲_1:k-1,0) for all possible (d,𝐲_1:k-1) when |𝒟^q| and k are large.
To decouple the score for position k from the sampled partial ranking 𝐲_1:k-1, we propose to use the probability of sampling d at the first position in the PL model, i.e., p(d|∅,0), as the RC score.
Unlike the original ranking score s_d, p(d|∅,0) ∈ [0,P^max] for d∈𝒟^q is always bounded for all queries q. In addition, the maximum P^max = max_d p(d|∅,0) can easily be computed on the calibration set, making it convenient to obtain the range of the thresholds λ_k when creating the set of candidate thresholds for distribution-free risk control.
Furthermore, it can easily be shown that p(d|∅,0) is monotone with respect to the expected position-wise sampling probability 𝔼_𝐲_1:k-1[p(d|𝐲_1:k-1,0)].
In practice, we use the softmax of the normalized ranking scores from the pretrained scoring function f as the RC scores:
s_d = (s_d^raw - mean(s^raw)) / std(s^raw),
p_d = [softmax(s)]_d,
where mean(s^raw) and std(s^raw) are the mean and standard deviation of the ranking scores computed on the validation set.
Empirically, we find it much easier to design the search range for λ when the scores are normalized.
Recall that ranking scores are unbounded real values; normalizing them makes the range of the RC scores, i.e., p(d|∅,0), much easier to compute.
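The two equations above can be sketched as follows (our own naming; we keep the PL temperature explicit, which the equations above omit):

```python
import numpy as np

def risk_control_scores(raw_scores, val_mean, val_std, tau=1.0):
    """RC scores p(d | emptyset, 0): softmax of ranking scores normalized with the
    validation-set mean and standard deviation."""
    s = (np.asarray(raw_scores, dtype=float) - val_mean) / val_std
    z = np.exp(s / tau - np.max(s / tau))   # numerically stable softmax
    return z / z.sum()
```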
Learning Probabilistic Coordinate Fields for Robust Correspondences

Weiyue Zhao, Hao Lu, Member, IEEE, Xinyi Ye, Zhiguo Cao, Member, IEEE, and Xin Li, Fellow, IEEE

May 2023

Abstract: We introduce Probabilistic Coordinate Fields (PCFs), a novel geometric-invariant coordinate representation for image correspondence problems. In contrast to standard Cartesian coordinates, PCFs encode coordinates in correspondence-specific barycentric coordinate systems (BCS) with affine invariance. To know when and where to trust the encoded coordinates, we implement PCFs in a probabilistic network termed PCF-Net, which parameterizes the distribution of coordinate fields as Gaussian mixture models. By jointly optimizing coordinate fields and their confidence conditioned on dense flows, PCF-Net can work with various feature descriptors when quantifying the reliability of PCFs by confidence maps. An interesting observation of this work is that the learned confidence map converges to geometrically coherent and semantically consistent regions, which facilitates robust coordinate representation. By delivering the confident coordinates to keypoint/feature descriptors, we show that PCF-Net can be used as a plug-in to existing correspondence-dependent approaches. Extensive experiments on both indoor and outdoor datasets suggest that accurate geometric invariant coordinates help to achieve the state of the art in several correspondence problems, such as sparse feature matching, dense image registration, camera pose estimation, and consistency filtering. Further, the interpretable confidence map predicted by PCF-Net can also be leveraged to other novel applications from texture transfer to multi-homography classification.

Index Terms: image correspondences, coordinate representations, barycentric coordinates, probabilistic modeling, affine invariance

This work is supported by the National Natural Science Foundation of China under Grant No. 62106080. Corresponding author: Z. Cao.
W. Zhao, H. Lu, X. Ye and Z. Cao are with the Key Laboratory of Image Processing and Intelligent Control, Ministry of Education; School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China (e-mail: {zhaoweiyue,hlu,xinyiye,zgcao}@hust.edu.cn).
X. Li is with the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown WV 26506-6109 (e-mail: [email protected]).
§ INTRODUCTION
The search for robust correspondences is a fundamental problem in computer vision and can benefit a number of applications, such as image registration <cit.>, 3D reconstruction <cit.>, and image fusion <cit.>. Many image matching approaches <cit.> resort to local descriptors to represent features around the keypoints of neighboring regions of interest <cit.>. Due to their local nature,
a few works <cit.> embed keypoint coordinates into descriptors to improve feature matching. Although some approaches
can overcome changes in illumination and blur distortion, they cannot tackle challenging scenarios such as repeated patterns and low-texture regions. In fact, standard coordinates, such as the Cartesian coordinates shown in Fig. <ref>(a), are sensitive to affine transforms. They offer poor positional information and little geometric consistency between image pairs. Recently, several works established dense matches in an end-to-end manner <cit.>. However, they
suffer from the same problem of improper coordinate encoding, such as image-level multi-frequency sine/cosine embedding. Most recently, it was pointed out in LoFTR <cit.> that inappropriate positional encoding can even significantly degrade performance.
Unlike previous approaches, we propose to encode image coordinates based on barycentric coordinate system (BCS) <cit.>. This system has been widely used in computer graphics due to its geometric invariance under rigid transforms.
By constructing a pair of correspondence-specific coordinate systems between two images, we can acquire barycentric coordinate fields (BCFs) to facilitate some correspondence problems. BCFs are not only affine invariant but also geometrically coherent between image pairs. However, the geometric coherence of BCFs is prone to errors in the presence of occlusion, perspective/non-rigid transforms, or inaccurately constructed BCSs. Therefore, it is crucial to know when and where to trust the encoded coordinate fields <cit.>, regardless of whether they are Cartesian or barycentric.
Inspired by the estimation of uncertainty in optical flow and dense geometry matching <cit.>, we propose to predict the confidence values for BCFs under the supervision of estimated flows. Specifically, we first introduce the notion of Probabilistic Coordinate Field (PCF), an uncertainty-aware geometric-invariant coordinate representation. To generate the PCF, we then develop a network termed PCF-Net that jointly optimizes coordinate encoding and confidence estimation. The key idea behind PCF-Net is to generate correspondence-specific BCSs from dense flows and, more importantly, to use the estimated flows to supervise the encoding of the BCFs such that geometric coherence and semantic consistency can be preserved. To find reliable co-visible regions between paired BCFs, we parameterize the distribution of BCFs as Gaussian mixture models and infer the parameters of probabilistic models using PCF-Net when forming the confidence maps. In this way, PCFs can be interpreted as confidence maps over BCFs, as shown in Fig. <ref>.
We have evaluated our approach on indoor (SUN3D <cit.> and ScanNet <cit.>) and outdoor (YFCC100M <cit.>, PhotoTourism <cit.>, and MegaDepth <cit.>) datasets on top of various state-of-the-art matching approaches (SuperGlue <cit.>, LoFTR <cit.>, and OANet <cit.>). We first pre-train the PCF-Net using a mixture of synthetic and realistic datasets, and then incorporate the fixed PCF-Net into different baselines to replace the original coordinate representation with PCFs. For detector-based approaches (SuperGlue and OANet), we also tested them under different descriptors, including RootSIFT <cit.>, HardNet <cit.>, and SuperPoint <cit.>. Extensive experiments show that PCFs are generic coordinate representations for correspondence problems. PCFs can be used as a plug-in to help advance the state-of-the-art and, therefore, to benefit a number of correspondence-dependent tasks, such as sparse feature matching, dense image registration, camera pose estimation, and consistency filtering. Meanwhile, we have also discovered some exclusive properties of the confidence maps predicted by PCF-Net; for instance, they could be useful for other tasks ranging from texture transfer to multi-homography classification.
Summary of Contributions.
We show that geometric-invariant coordinate representations can be powerful for image correspondence problems. Technically, we introduce PCF-Net to encode geometric-invariant coordinates with confidence estimation. To our knowledge, our work is the first attempt to unify robust feature correspondences with probabilistic coordinate encoding. As shown by our experimental results, the proposed PCF is versatile, supporting a variety of keypoint descriptors and matching algorithms. The PCF-Net can be applied as a plug-in module in existing baselines with affordable computational overhead (on average 116 ms on a modern GPU). Furthermore, the interpretable confidence maps predicted by PCF-Net for coordinate representations can be exploited for other novel applications.
§ RELATED WORK
Feature Matching.
In general, feature matching can be categorized into detector-based and detector-free matching. Detector-based approaches adopt mainly hand-crafted local descriptors <cit.> or learning-based descriptors <cit.>.
The local-only descriptors <cit.> encode local patterns and are typically extracted from the image patches. However, when repeated patterns appear, they may not work well. The local-global descriptors <cit.> simultaneously encode local and global cues. Global cues can be discriminative even with similar local patterns. However, regardless of the descriptor type, the inclusion of global features is limited to gray or color information, ignoring complex textures.
Furthermore, despite detector-free approaches <cit.> that work directly on dense feature maps, the limited resolution of feature maps can still cause information loss. In this work, we use geometrically invariant coordinates to characterize spatial locations between paired images. The generated coordinate fields provide geometrically and semantically consistent information regardless of the type of descriptor and the resolution of the image.
Positional Embedding.
Positional representation has recently received a lot of interest. The absolute position encoding with the multifrequency sine/cosine function was first introduced by <cit.>.
<cit.> also implement similar encoding strategies to address object detection and local feature matching. By incorporating position encoding into local descriptors, features are expected to be position-dependent. Another choice is the learned positional encoding <cit.>. In addition to the absolute position, recent work also considers relative positional encoding <cit.>. However, all of these positional representations are sensitive to geometric transforms. Especially for image matching, there is no explicit geometric coherence between pairwise positional representations. Instead, we propose to encode geometric invariant coordinates based on BCSs, which have been shown to greatly benefit geometric coherence between paired images.
Uncertainty Estimation.
Compared to other techniques, uncertainty estimation is less studied in feature matching. DGC-Net <cit.> proposes a matchability decoder to remove uncertain correspondences. <cit.> attempts to improve the accuracy of correspondence matching by adding probability maps. Similarly, some optical flow <cit.> and geometric dense matching <cit.> approaches also predict the confidence map when estimating a flow map. In this work, we need to address how to find confident regions in the PCFs because barycentric coordinate systems may not be accurate.
However, compared with the smooth probabilistic map generated by flow models, we prefer instead a more distinctive probabilistic map with clear demarcation to classify reliable/unreliable coordinate regions.
§ BACKGROUND, MOTIVATION, AND OVERVIEW
§.§ Background: Barycentric Coordinates Fields
For explicit representation of position, coordinates are widely used within convolutional networks <cit.> or with multifrequency sine/cosine functions <cit.>. However, most coordinate representations suffer from geometric transforms. In the context of correspondence problems, a key insight of our work is that one should use geometric-invariant coordinates, such as BCS.
Within a BCS <cit.>, barycentric coordinates (λ_1, λ_2, λ_3) of a point P are defined by
λ_1=S_{PBA}/S_{ABC}, λ_2=S_{PAC}/S_{ABC}, λ_3=S_{PCB}/S_{ABC},
where λ_1 + λ_2 + λ_3 = 1, and S_{·} represents the triangle area (a detailed review of the BCS can be found in Appendix A).
Given a BCS, we can form the image coordinates to be a barycentric coordinate field (BCF). According to Eq. (<ref>), there is a linear relation between λ_1, λ_2, and λ_3 such that two out of three can represent the rest. Without loss of generality, we choose (λ_1, λ_2) as the coordinate representation.
Formally, given an image pair Z = (I^s, I^t) where I^s, I^t∈ℝ^H × W × 3, we can construct a pair of BCFs (C^s, C^t) from the image coordinates using two sets of BCS, respectively, where C^s is the source BCF and C^t is the target. This process defines a transformation ℱ:ℝ^H × W × 2→ℝ^H × W × 2 by Eq. (<ref>), where H and W are the height and width of the image, respectively.
Note that any single coordinate system alone does not provide information on the geometric coherence between pairs of images.
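A minimal NumPy sketch of computing a BCF over an image grid is given below; it uses the standard signed-area convention (the weight of a vertex is the area of the opposite sub-triangle over the full area), which may differ from the labeling in Eq. (<ref>) only by a permutation of the λ's, and the names are ours:

```python
import numpy as np

def signed_area(p, q, r):
    """Signed area of triangle (p, q, r); inputs are (..., 2) arrays of (x, y) points."""
    return 0.5 * ((q[..., 0] - p[..., 0]) * (r[..., 1] - p[..., 1])
                  - (q[..., 1] - p[..., 1]) * (r[..., 0] - p[..., 0]))

def barycentric_field(h, w, A, B, C):
    """(lambda_1, lambda_2) field of an h x w image under the BCS with vertices A, B, C."""
    A, B, C = (np.asarray(v, dtype=float) for v in (A, B, C))
    ys, xs = np.mgrid[0:h, 0:w]
    P = np.stack([xs, ys], axis=-1).astype(float)      # pixel coordinates (x, y)
    s_abc = signed_area(A, B, C)
    lam1 = signed_area(P, B, C) / s_abc                 # weight of vertex A
    lam2 = signed_area(P, C, A) / s_abc                 # weight of vertex B
    return np.stack([lam1, lam2], axis=-1)              # lambda_3 = 1 - lambda_1 - lambda_2
```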
§.§ Motivation: Confidence Over Coordinate Fields
A key step of our work is to bridge C^s and C^t using correspondences, as shown in Figs. <ref>(a)-(d). In particular, each vertex of the target BCS is associated with a correspondence and has a matched source vertex. Ideally, if geometric transforms between image pairs are constrained to affine transforms, the geometric coherence of the paired BCFs is highly reliable due to the invariance property of the BCS (please refer to the Appendix A for more details about the affine invariant property of BCS). However, occlusions, non-rigid transforms, or large displacements often appear in reality. In these cases, the matched geometric coherence is prone to errors. This can be observed from the coordinate error map (see Fig. <ref>(g)) between C^t (see Fig. <ref>(d)) and C^r (see Fig. <ref>(f)), where C^r is the remapped coordinate field.
From the error map, we can make the following important observation: errors are subject to a consistent distribution locally, but show a trend of smooth transition globally. In general, errors increase approximately linearly with increasing distance from the origin of the BCS (the white point in Fig. <ref>(f)). Since BCFs could be erroneous, it is critical to know when and where to trust the encoded coordinate field. Thus, we present probabilistic coordinate fields (PCFs), where a confidence metric is introduced into the BCFs and modeled in a probabilistic framework. Next, we formally define correspondence-specific BCSs, introduce PCFs by conditional modeling, and show how PCFs can be generated from a network.
§.§ Overview of Our Method
As illustrated in motivation and Fig. <ref>, our objective is to generate a set of geometry-invariant coordinate fields with confidence maps for a given image pair. To achieve this goal, we have developed a new model called PCF-Net which utilizes affine-invariant barycentric coordinates. This architecture takes the RGB image pair as input and produces the barycentric coordinate fields and the corresponding pixel-level confidence values. By combining the coordinate representations with their respective confidence values, we can generate the probabilistic coordinate field.
Specifically, we introduce the barycentric coordinate system and analyze its limitations in this Section. In Section <ref>, we explain how to build barycentric coordinate systems between an image pair and mathematically model uncertainty estimation for coordinate systems. In Section <ref>, we provide technical details of PCF-Net, which implements uncertainty estimation with Gaussian mixture models. In Section <ref>, we explain in detail how to incorporate our approach into various correspondence problems.
§ FROM BARYCENTRIC TO PROBABILISTIC COORDINATE FIELDS
Here we present our general idea of how to build the BCSs from an image pair, how to generate BCFs conditioned on BCSs, and how to model uncertainty into BCFs.
§.§ Barycentric Coordinate Systems From Flows
Given a pair of images, the correspondence problem refers to the problem of determining which parts of one image correspond to which parts of the other image.
Correspondences can be generated from sparse/dense feature matching. However, sparse matching is often limited by local features or sparse keypoints, whereas dense matching usually requires expensive computation and inference. Instead, optical flows <cit.> often provide a dense but coarse correspondence map at low resolution (LR) at low computational cost, which suits the construction of initial BCFs when precise correspondences are not necessary.
In this work, we follow the recent GLU-Net <cit.>, a unified model capable of geometric matching and optical flow estimation, to acquire an initial set of correspondences. In particular, given an image pair Z, GLU-Net generates an LR flow map Y_I^s → I^t∈ℝ^H_L × W_L× 2, where H_L = H/4 and W_L = W/4. To simplify the notation, we use Y in what follows. We then choose a set of correspondences S from the flow map Y, where S is made up of three paired correspondences. Using S, we can construct a pair of correspondence-specific BCSs between Z. According to Eq. (<ref>), the initial BCFs (C^s, C^t) can be derived accordingly.
Correspondence-Specific BCSs.
Here, we explain how to set up the correspondence between specific BCSs (as shown in Fig. <ref>).
Given the flow map Y from the source image to the target and an image coordinate grid V, a source histogram G^s can be obtained by ∀ p ∈⌈ Y ⊕ V ⌉, G^s(p) = G^s(p) + 1, where ⊕ denotes element-wise addition and ⌈·⌉ denotes the rounding operation. Specifically, ⌈ Y ⊕ V ⌉ yields an index map, each value of which represents the coordinates of the point in the source image that corresponds to the current index point. We then initialize an all-zero histogram G^s and accumulate it indexed by the index map. The value of the source histogram G^s indicates the number of times a pixel of the source image is mapped to the target image. The flow map Y can then remap G^s to the target G^t, denoted the flow density map, which characterizes the mapping frequency of the pixels of the source image in the flow distribution.
After applying average pooling on G^t with a kernel size of K (K is a positive odd integer), the peak (marked in yellow in Fig. <ref>) of the averaged density map is chosen as the origin of the target BCS. The other two vertices are randomly chosen around the origin within a radius of (K-1)/2. Once the target BCS is built, the corresponding source BCS can be constructed according to the flow map and the target BCS.
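The construction above can be sketched as follows; this is a simplified version that folds the histogram accumulation and remapping into a single scatter-add in target coordinates, and the function name, clipping, and random vertex sampling are our own choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def build_target_bcs(flow, kernel_size, rng=None):
    """Pick the origin and two extra vertices of the target BCS from an LR flow map Y.

    flow: (H_L, W_L, 2) flow from source to target, in (dx, dy) order.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    density = np.zeros((h, w))
    np.add.at(density, (ty, tx), 1.0)                    # flow density map G^t
    avg = uniform_filter(density, size=kernel_size)      # average pooling with kernel K
    origin = np.array(np.unravel_index(np.argmax(avg), avg.shape))
    r = (kernel_size - 1) // 2                           # the other two vertices lie
    vertices = origin + rng.integers(-r, r + 1, size=(2, 2))  # within radius (K-1)/2
    return origin, np.clip(vertices, 0, [h - 1, w - 1])
```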
Since Y is of low resolution, we upsample C^s and C^t with bilinear interpolation to apply them to images of the original resolution. It is worth noting that upsampling does not affect the geometric coherence of coordinate fields due to the scale invariance of barycentric coordinates. Nevertheless, the BCFs are still not ready for use because the flow map can be inaccurate, such that the generated correspondence set and constructed BCSs are inaccurate. Hence, it is important to know which correspondences are trustable on the flow map.
§.§ Probabilistic Coordinates via Conditional Modeling
To find reliable geometrically coherent regions, our idea is to design a network to predict a confidence map for BCFs conditioned on the flow map. Intuitively, this idea can be implemented by designing the network to predict the conditional probability used to represent the confidence of the coordinate fields.
Given an image pair Z, coupled BCFs (C^s, C^t), and flow map Y, our goal is to generate a confidence map M ∈ℝ^H × W relating C^s to C^t. This can be achieved by defining a conditional probability density 𝒫(X(C^s;Y)|Ψ(θ)), where X(C^s;Y) denotes the remapped coordinate field C^s conditioned on the flow Y, and Ψ(θ) is a network parameterized by θ that predicts the parameters of the probabilistic model. Assuming spatial independence, for any position (i, j), 𝒫(X|Ψ) amounts to a family of distributions ∏_ij𝒫(x_ij|ψ_ij), where x_ij denotes the remapped barycentric coordinate depending on the optical flow, and ψ_ij is the network output specifying the parameters of the probability density at the spatial location (i, j). To simplify the notation, we omit the subscripts i,j in the following expressions.
A common choice to model a conditional probability density is to use a Laplacian/Gaussian distribution <cit.>. Their main difference lies in the definition of the loss function: the ℓ_1 loss |x-μ| vs. the ℓ_2 loss (x-μ)^2, where x is a variable and μ is the mean. Interestingly, many optical flow approaches adopt the Laplacian model due to the robustness of ℓ_1 <cit.> (i.e., the ℓ_1 distance does not significantly penalize large flow errors). However, in our correspondence problems, we expect the opposite behavior, i.e., large coordinate errors should be penalized more because they directly reflect the accuracy of the correspondence. In other words, unlike probabilistic flow models <cit.>, our model must be sensitive to large coordinate errors to acquire a discriminative confidence map. An experimental justification for the preference for ℓ_2 over ℓ_1 can be found in our ablation studies (Section <ref>).
§ PROBABILISTIC COORDINATE FIELDS: BCFS ON CONFIDENCE
The goal of probabilistic modeling of coordinate fields is to use a probabilistic model to identify reliable regions in BCFs. In practice, we first need the model to generate a confidence map, which is the core of Probabilistic Coordinate Fields (PCFs). Then we show how to generate the PCF with a network termed PCF-Net, which extends the previous work PDC-Net <cit.>.
§.§ Probabilistic Modeling by Gaussian Mixture Models
We assume that every spatial location obeys a 2D Gaussian distribution 𝒢(x)∼ N(μ_u, σ_u^2)· N(μ_v, σ_v^2), where each target coordinate x=(u, v) ∈ℝ^2 is modeled by two conditionally independent 1D Gaussians: N(μ_u, σ_u^2) and N(μ_v, σ_v^2) respectively.
Additionally, we assume equal variances on both coordinate axes such that σ_u^2 = σ_v^2 = σ^2. By further defining μ = [μ_u, μ_v]^T, the 2D Gaussian distribution 𝒢 amounts to
𝒢(x|μ, σ^2) = 1/2πσ^2e^-||x-μ||_2^2/2σ^2 .
In theory, we expect the above probabilistic regression model to fit the empirical distribution shown in Fig. <ref>(g) as well as possible. Inspired by <cit.>, we choose Gaussian Mixture Models (GMMs) for flexible modeling. Concretely, we consider a model consisting of K Gaussian components, i.e., 𝒫(x|ψ) = ∑_n=1^K α_n𝒢(x|μ, σ_n^2), where all K components share the same μ but different variances σ_n, and μ is given by the remapped source coordinate field C^r (refer to Fig. <ref>(f)).
Each α_n ≥ 0 and ∑_n=1^K α_n = 1. Empirically, when K increases, a GMM should fit complex distributions better. However, we experimentally find that a complex model can cause overfitting in some uncertain regions, producing small σ_n^2 with large α_n. Recall that our goal is to partition the coordinate field into reliable/unreliable regions. Inspired by PDC-Net <cit.>, we propose to use a constrained two-component GMM to model the distribution, defined by
𝒫(x|ψ) = α_+𝒢(x|μ, σ_+^2) + (1 - α_+)𝒢(x|μ, σ_-^2) ,
where (α_+, σ_+^2, σ_-^2) are all predicted by the network ψ(θ). For robust fitting, we add some constraints to both variances (σ_+^2 and σ_-^2) and α_+.
Unlike <cit.>, we apply different constraint strategies to meet the modeling needs of the coordinate probability model, as discussed below. An experimental comparison with the choices of PDC-Net can be found in Section <ref>.
Constraint on σ_+^2 and σ_-^2.
Intuitively, each variance σ_n in standard GMMs accounts for a certain range of uncertainties, loosely corresponding to different error regions in Fig. <ref>(g). To identify reliable regions more explicitly, we constrain σ_+^2 and σ_-^2 into different ranges such that
0 ≤σ_+^2 ≤δ_+ < δ_++Δδ≤σ_-^2 < δ_- ,
where σ_+^2 accounts for reliable regions and σ_-^2 for erroneous regions. δ_+ and δ_- are empirically set hyperparameters in this work. The margin Δδ is used to avoid a smooth transition and to force the network to make a choice at every spatial location.
We also find that it is useful to constrain σ_+^2 to the range [0, δ_+], rather than using a fixed σ_+^2 as in <cit.>.
Constraint on α_+.
In standard GMMs, each α_n controls the contribution of a component. In the open literature, some approaches <cit.> predict unconstrained α_n's independently and then use a softmax layer for normalization. However, a potential issue is that there is no information interaction or explicit value constraint between different α_n's. This may cause confusion, with similar α_n's being used in different components, which is undesirable for discriminating reliable/unreliable regions. Instead of predicting α_n for each component, here we only predict α_+ and set α_-=1-α_+. In this way, the network is required to make a hard choice between the two components. Note that we also normalize α_+ with a sigmoid function.
Confidence Map M.
Given the predicted parameters (α_+, σ_+^2, σ_-^2) of the probabilistic model in Eq. (<ref>), we can compute the confidence M_ij at location (i,j) within a radius ||x-μ||_2<R as
M_ij = ∫_{x∈ℝ^2:||x-μ||_2<R}𝒫(x|ψ) dx
=∫_{||x-μ||_2<R}[α_+𝒢(x|μ, σ_+^2) + (1 - α_+)𝒢(x|μ, σ_-^2)] dx
=1 - exp(-R^2/(2σ_-^2)) +α_+[exp(-R^2/(2σ_-^2)) - exp(-R^2/(2σ_+^2))].
Finally, once M is obtained, it can be used to interact with the remapped coordinate field C^r to generate the PCF, given by M ⊗ C^r, where ⊗ is the element-wise multiplication. Note that the PCF has a hard form and a soft form, conditioned on whether the map M is binarized.
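The closed form above makes the confidence map cheap to evaluate. A minimal PyTorch sketch of computing M and the resulting PCF is given below; the tensor shapes are assumptions.

import torch

def confidence_map(alpha_pos, sigma_pos, sigma_neg, radius=1.0):
    # Probability mass of the two-component GMM inside a disk of radius R (Eq. above).
    # sigma_pos and sigma_neg hold the variances sigma_+^2 and sigma_-^2.
    e_neg = torch.exp(-radius ** 2 / (2.0 * sigma_neg))
    e_pos = torch.exp(-radius ** 2 / (2.0 * sigma_pos))
    return 1.0 - e_neg + alpha_pos * (e_neg - e_pos)

def probabilistic_coordinate_field(coord_remap, conf, hard=True, thresh=0.5):
    # coord_remap: (B, 2, H, W) remapped coordinate field C^r; conf: (B, 1, H, W) confidence map M.
    mask = (conf > thresh).float() if hard else conf
    return coord_remap * mask     # M ⊗ C^r (hard or soft form)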
§.§ Probabilistic Coordinate Field Network
Here, we show how to generate the PCF with a network called the Probabilistic Coordinate Field Network (PCF-Net). The PCF-Net pipeline is shown in Fig. <ref>. Mostly, it includes a coherence module and a probabilistic model predictor.
Coherence Module.
The coherence module aims to construct geometric and semantic coherence between an image pair (I^s, I^t) before predicting the model parameters. We model geometric and semantic coherence by combining the coordinate features (C^s, C^t) with the image features (f^s, f^t). The coordinate features extracted by a shared coordinate encoder are integrated into the upsampled content features. By warping the integrated source features with the flow map Y, a 4D correlation map U <cit.> can be calculated for parameter estimation.
Probabilistic Model Predictor.
To predict the parameters of the probabilistic model, we also integrate the information of a prior distribution and the flow map. As shown in Fig. <ref>(g), we propose to use a distance map D to approximate the error distribution, given by
D_ij = exp(-γ·1/d_ij) ,
where γ controls the decrease speed and d_ij is the distance between a location (i,j) and the origin of the BCS. In addition, PDC-Net <cit.> points out that dense flow estimation is important for independently moving objects. Similarly to D, the flow feature f^Y is also considered. Finally, we concatenate them and feed them to a predictor that regresses the parameters of the probabilistic model through several convolutional layers (following the implementation in <cit.>).
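As a concrete illustration, the prior distance map D can be generated as follows; the grid construction and the handling of the origin location are assumptions of this sketch.

import torch

def distance_prior(height, width, origin, gamma=0.03, eps=1e-6):
    # D_ij = exp(-gamma / d_ij), where d_ij is the distance from (i, j) to the BCS origin.
    ii, jj = torch.meshgrid(torch.arange(height, dtype=torch.float32),
                            torch.arange(width, dtype=torch.float32), indexing='ij')
    d = torch.sqrt((ii - origin[0]) ** 2 + (jj - origin[1]) ** 2).clamp_min(eps)
    return torch.exp(-gamma / d)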
Loss Function. Following common practice in probabilistic regression tasks <cit.>, we train our model with the negative log-likelihood loss.
For a given image pair, the coupled coordinate fields (C^s, C^t), and the ground-truth flow Y_gt, the loss takes the form
ℒ = -log𝒫(X̂(C^s;Y_gt)|ψ(θ)) ,
where X̂(C^s;Y_gt) denotes the remapped coordinate field C^s conditioned on the flow Y_gt.
Note that this loss function explicitly supervises the reliable regions of PCFs. Similarly, the remapped coordinate field X̂ also implicitly supervises the learning of the flow map.
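For reference, a direct sketch of this negative log-likelihood under the constrained two-component GMM is given below; a numerically stabilized implementation would use a log-sum-exp formulation instead, and the tensor shapes are assumptions.

import math
import torch

def gmm_nll_loss(x_gt, mu, alpha_pos, sigma_pos, sigma_neg, eps=1e-9):
    # x_gt: remapped ground-truth coordinates X_hat(C^s; Y_gt); mu: remapped source field C^r.
    # Both (B, 2, H, W); alpha_pos, sigma_pos (= sigma_+^2), sigma_neg (= sigma_-^2): (B, 1, H, W).
    sq = ((x_gt - mu) ** 2).sum(dim=1, keepdim=True)                  # ||x - mu||_2^2
    g_pos = torch.exp(-sq / (2 * sigma_pos)) / (2 * math.pi * sigma_pos)
    g_neg = torch.exp(-sq / (2 * sigma_neg)) / (2 * math.pi * sigma_neg)
    mix = alpha_pos * g_pos + (1 - alpha_pos) * g_neg                 # P(x | psi)
    return -torch.log(mix + eps).mean()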
§.§ Visualization of PCFs
To better understand our probabilistic model, we visualize the parameters predicted by PCF-Net, as shown in Fig. <ref>. Specifically, we choose indoor and outdoor scenes to demonstrate the generalization of our approach. The inferred parameters are all predicted by PCF-Net for the target image. The confidence map is calculated by Eq. (<ref>). Using confidence maps, probabilistic coordinate fields (PCFs) can be obtained by integrating BCFs and confidence maps together. Technically, we binarize the confidence map (threshold taken 0.5) and then mask the unreliable region corresponding to the BCF. The PCFs are visualized in Fig. <ref>. We produce consistent positional encodings from a pair of images. The valid region encoded by the PCFs guarantees consistency in both the coordinate geometry and the content representation. Moreover, the indoor results show that our approach can identify the nonaffine region.
§.§ Computational Complexity
It is worth noting that our pipeline is relatively efficient because it operates on LR image pairs of size H/4×W/4. We report the computational complexity and the number of PCF-Net parameters in Table <ref>. PCF-Net adopts an efficient network structure for probabilistic parameter prediction (See Appendix B).
§ PRACTICAL USE OF PCFS
Here, we show how to apply PCFs to correspondence problems. We first introduce how to build multiple coordinate systems, which are used to increase the robustness of PCFs. Furthermore, considering that different correspondence tasks use different feature descriptors (e.g., with different feature dimensions), multiple PCFs need different embedding strategies so that they can be compatible with different descriptors. Therefore, we present possible positional encoding strategies for three typical correspondence tasks.
§.§ Multiple Coordinate Systems
As explained in Section <ref>, it is difficult to encode the correct coordinates for the entire target image with a single pair of BCS. A benefit of our approach is that it allows one to obtain multiple coordinate fields from multiple BCSs. Specifically, we construct a new coordinate system outside the predicted reliable regions. However, for each additional coordinate system, the inference time increases by approximately 25 ms. Therefore, it is important to know how much additional benefit an extra BCS pair can provide. To this end, we test the Intersection-over-Union (IoU) metric between the union of the different reliable regions and the ground-truth flow map on two large-scale datasets (MegaDepth and ScanNet), which evaluates the additional contribution of multiple coordinate systems (Fig. <ref>). We find that two different pairs of BCS are sufficient to encode the entire target image. When reselecting BCSs, we mask the region around the initial origin with a radius of (K-1)/2 and repeat the above steps to build the other BCSs. Moreover, multiple BCSs encourage reliable coordinate encoding for fitting non-rigid transforms with large relative pose, as shown in Fig. <ref>(a).
Note that the location of the BCS origin is selected on the basis of the distribution of the estimated flow map (Section <ref>). If the position of a coordinate system corresponds to an incorrect flow estimate, our confidence map can help to detect and reselect the coordinate system position in time (Fig. <ref>). In the failure case, we re-select the origin no more than 5 times. If the origin is still null (e.g., under the extreme perspective changes shown in Fig. <ref>(b)), our approach returns an all-zero confidence map and the original Cartesian coordinates. Note that an all-zero confidence map is theoretically harmless to the matcher network according to our design strategy.
§.§ Sparse Feature Matching
For sparse feature matching, we investigate two different positional encoding strategies that integrate PCFs with feature descriptors: an MLP-based strategy and an attention-based one. Let X^s ∈ℝ^N × 2 and X^t ∈ℝ^L × 2 denote the sparse normalized BCF with the zero score of the source and target image, where N and L represent the number of key points in the source and target image, respectively. Let m^s∈ℝ^N × 1 and m^t∈ℝ^L × 1 be the corresponding confidence values.
MLP-based Positional Encoding. To use only reliable encoded coordinates, we first clip X^s/t according to the corresponding confidence values, i.e.,
x̂_i^s/t = x_i^s/t if m_i^s/t≥ 0.5, and x̂_i^s/t = [max{ X^s/t(:,1) }, max{ X^s/t(:,2) } ] otherwise,
where x_i^s/t∈ X^s/t denotes the barycentric coordinate of the i^th keypoint. Following <cit.>, we embed the coordinate x̂_i^s/t into a high-dimensional vector with an MLP. Instead of processing X̂^s and X̂^t individually <cit.>, we concatenate both and process them simultaneously with the same batch normalization layers.
Attention-based Positional Encoding.
Compared to the MLP-based encoding strategy, we further develop an attention-based method with a confidence mask to capture contextual consistency. Practically, inspired by <cit.>, we apply a transformer without positional encoding to encode X^s and X^t into F^s ∈ℝ^N × d and F^t ∈ℝ^L × d: we first compute (F^s, F^t) = MLP(X^s, X^t) and then update F^s ← F^s + G_self/cross(F^s, F^s/t, m^s, m^s/t), where G_self and G_cross denote multi-head (H = 4) self-attention <cit.> and cross-attention, respectively, and d denotes the dimension of the keypoint descriptor. In contrast to the conventional attention mechanism, we introduce the confidence maps m_Q and m_K into the formulation, given by
Atten(Q,K,V,m_Q,m_K)= softmax((m_Q m_K^T) ·QK^T/√(d_k))V ,
where Q, K, and V refer to the query, key, and value with d-dimension in the Transformer <cit.>, respectively. m_Q and m_K denote the confidence maps of the query and the key. They can be the same m^s/m^t in self-attention or m^s/m^t and m^t/m^s in cross-attention. Benefiting from the transformer without distance constraints, the encoded coordinate features integrate contextual and geometric positional information.
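The following single-head sketch illustrates the confidence-masked attention above; the multi-head (H = 4) projections and the surrounding MLP are omitted, and the tensor shapes are assumptions.

import torch

def masked_attention(q, k, v, m_q, m_k):
    # q: (B, N, d); k, v: (B, L, d); m_q: (B, N, 1), m_k: (B, L, 1) confidence values.
    d_k = q.size(-1)
    scores = torch.bmm(q, k.transpose(1, 2)) / d_k ** 0.5       # QK^T / sqrt(d_k)
    scores = scores * torch.bmm(m_q, m_k.transpose(1, 2))       # weighted by m_Q m_K^T
    attn = torch.softmax(scores, dim=-1)
    return torch.bmm(attn, v)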
Training Details.
We detect up to 2 K keypoints for all images. The baseline is the state-of-the-art method SuperGlue <cit.>. We use a batch size of 20 and train for 300 K iterations. We use the Adam optimizer <cit.> with a constant learning rate of 10^-4 for the first 200/100/50 K iterations, followed by an exponential decay of 0.999998/0.999992/0.999992 until iteration 900 K. We perform 100 Sinkhorn iterations and use a confidence threshold of 0.2 to retain matches from the soft assignment. To integrate sparse PCFs, we replace the original input coordinate vectors of SuperGlue with our different encoding strategies (MLP-based/attention-based positional encoding).
§.§ Dense Image Registration
In dense image registration, we can directly embed PCFs into feature maps.
Let C^s ∈ℝ^H × W × 2 and C^t ∈ℝ^H × W × 2 denote the dense normalized BCF with zero score of the source and target image, respectively. Let M^s∈ℝ^H × W × 1 and M^t∈ℝ^H × W × 1 be the corresponding confidence maps. We choose reliable regions of BCFs with confidence maps such that
Ĉ_ij = 1(M_ij - 0.5) C_ij ,
where 1(·) is the Heaviside step function, C_ij denotes the barycentric coordinate at the spatial location (i, j), and Ĉ_ij is the masked coordinate at the same location. Once the PCFs Ĉ^s and Ĉ^t are obtained, we can embed them into feature maps with a few strided convolutions. Note that the input of these convolutional layers includes two different sets of BCS, resulting in Ĉ^s/t∈ℝ^b × 4 × H × W, where b is the batch size.
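A sketch of this masking-and-embedding step is shown below, using three stride-2 convolutions with 3 × 3 kernels as described in Section <ref>; the channel widths chosen here are assumptions.

import torch
import torch.nn as nn

class PCFEmbedding(nn.Module):
    # Embeds masked barycentric coordinates from two BCS pairs (4 input channels) into a feature map.
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_dim, 3, stride=2, padding=1),
        )

    def forward(self, coords, conf):
        # coords: (B, 4, H, W) BCFs of two BCS pairs; conf: (B, 1, H, W) confidence map M.
        masked = (conf > 0.5).float() * coords      # \hat{C} = 1(M - 0.5) C
        return self.conv(masked)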
Training Details.
We choose the state-of-the-art method LoFTR <cit.> as our baseline. For the indoor dataset, the input resolution of the image pair is set to 640× 480. For the outdoor dataset, the images are resized so that their longer dimensions are equal to 840 for training and 1200 for validation. Due to resource constraints, we use a batch size of 8/12 for the indoor/outdoor datasets, respectively, instead of 64. We also reduce the original 30 training epochs to 15 epochs. The model is trained using the Adam optimizer with an initial learning rate of 10^-3. We follow the default hyperparameter settings as in LoFTR.
To integrate coordinate representations, we embed the PCFs through a series of convolutional layers illustrated above. Note that we need to resize the obtained BCFs and confidence map to fit the input resolution requirement of LoFTR.
§.§ Consistency Filtering
The standard input 𝒥∈ℝ^N × 4 of consistency filtering is a concatenation of four-dimensional coordinate vectors representing N candidate correspondences. Similarly, we can obtain a four-dimensional barycentric coordinate vector 𝒥_i = [x_i^s, x_i^t] for each correspondence as an alternative input, where x_i^s ∈ X^s and x_i^t ∈ X^t, i=1,...,N. X^s ∈ℝ^N × 2 and X^t ∈ℝ^N × 2 denote the sparse zero-score normalized BCF of the source and target image, respectively. Then, we encode the prior geometric consistency into a flag vector such that
τ_i = m_i^s m_i^t ·(3exp(-h) - 1)/(1 + exp(-h)) ,
where the confidence values m_i^s and m_i^t correspond to x_i^s and x_i^t, respectively, and h = ||x_i^s - x_i^t||_2 is the coordinate distance.
In this way, reliable correspondences receive positive flags, unreliable ones receive negative flags, and uncertain correspondences have values close to zero. Since two different pairs of BCS are used in our experiments, the output is 𝒯∈ℝ^N × 2. Eventually, we concatenate it with the original input 𝒥 to form the final input 𝒥̂∈ℝ^N × 6, i.e., 𝒥̂_i = [x_i^s, x_i^t, τ_i^1, τ_i^2], where 𝒯_i = [τ_i^1, τ_i^2], and τ_i^j indicates the i^th row and the j^th column of 𝒯.
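The flag computation can be sketched as follows; variable names and shapes are illustrative.

import torch

def geometric_flags(x_s, x_t, m_s, m_t):
    # x_s, x_t: (N, 2) zero-score normalized barycentric coordinates; m_s, m_t: (N,) confidences.
    h = torch.norm(x_s - x_t, dim=1)                                    # coordinate distance
    return m_s * m_t * (3 * torch.exp(-h) - 1) / (1 + torch.exp(-h))    # >0 reliable, <0 unreliable, ~0 uncertain

# With two BCS pairs, the two flag vectors are concatenated with the original 4D input:
# J_hat = torch.cat([x_s, x_t, tau1.unsqueeze(1), tau2.unsqueeze(1)], dim=1)   # (N, 6)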
Training Details.
Our baseline is chosen as the state-of-the-art outlier rejection network OANet <cit.>.
We use the Adam optimizer with a learning rate of 10^-4 and a batch size of 32. The weight α is 0 during the first 20 K iterations and then 0.1 for the remaining 480 K iterations, as in <cit.>. We detect 2 K keypoints for each evaluated image.
For the rest, we used the default hyperparameters in the original implementation. The initial matching set is generated with the mutual nearest neighbor check (MNN) and the ratio test (RT) <cit.>. Following the above strategy, we re-encode the PCFs to a 2D vector and concatenate it with the original 4D input vector.
§ EXPERIMENTS
We integrate our approach into a lightweight implementation of an optical flow network, GLU-Net <cit.>, and perform extensive experiments on multiple matching datasets. We show that PCF-Net can work with various descriptors to achieve state-of-the-art performance and that our approach can be applied to different correspondence problems, including sparse feature matching, dense image registration, and consistency filtering. We also identify the specificity of our constrained Gaussian mixture model and will thus highlight the differences between our model and PDCNet <cit.>.
Moreover, we demonstrate the potential of our approach in texture transfer and multi-homography classification.
§.§ Datasets
Here we introduce the details of all datasets used, including data sampling and splitting strategies.
Synthetic Dataset. We use a mixture of synthetic data for PCF-Net training with a total of 40 K images, which combines datasets from DPED <cit.>, CityScapes <cit.>, and ADE-20K <cit.>. Following <cit.>, training image pairs are generated by applying random warps and small local perturbations to the original images. Meanwhile, for better compatibility with real scenes, we further augment the synthetic data with a random moving object from the MS-COCO <cit.> dataset as implemented in <cit.>.
MegaDepth. MegaDepth <cit.> is a large-scale outdoor dataset consisting of 196 scenes, which are reconstructed from one million images on the Internet using COLMAP <cit.>. We generate ground truth correspondences by projecting all points of the source image with depth information onto the target image, using the intrinsic and extrinsic camera parameters provided by D2-Net <cit.>. A depth check of the source depth map is also conducted to remove irrelevant pixels such as sky and pedestrians. For PCF-Net, we use 150 scenes and sample up to 58 K training pairs with an overlap ratio of at least 30% in the sparse SfM point cloud. Furthermore, we sampled 1800 validation data from 25 different scenes. For dense image registration, we follow <cit.> to only use the scenes of “Sacre Coeur” and “St. Peter's Square” for validation. Training and testing indices are provided by <cit.>.
ScanNet. ScanNet <cit.> is a large-scale indoor dataset composed of monocular sequences with ground-truth poses and depth images. It also has well-defined training, validation, and testing splits.
We generate ground-truth correspondences following the procedure used on the MegaDepth dataset. Following the dataset indices provided by <cit.>, we select 230 M training pairs and 1500 testing pairs, discarding pairs with small or large overlaps.
YFCC100M. Yahoo's YFCC100M dataset <cit.> contains 100 M internet photos, and <cit.> later generated 72 3D reconstructions of tourist landmarks from a subset of the collections. Following <cit.>, we select 68 scenes for training and 4 scenes for testing. In each scene, image pairs with an overlap beyond 50% are included in the dataset, resulting in 250 K training pairs and 4 K testing pairs. Due to the lack of depth maps, ground-truth correspondences are supervised by the symmetric epipolar distance (< 1e^-4).
PhotoTourism. PhotoTourism <cit.> is a subset of the YFCC100M dataset <cit.> with 15 scenes and has ground-truth poses and sparse 3D models obtained from COLMAP. We select 12 scenes for training and the rest for testing. In each scene, we generate image pairs by finding the top 10 images for each image according to the number of common points, resulting in 230 K training pairs and 6 K testing pairs.
SUN3D. The SUN3D dataset <cit.> is an indoor RGBD video dataset with camera poses computed by generalized bundle adjustment. Following <cit.>, we split the dataset into sequences, with 239 for training and 15 for testing. We subsample each video every 10 frames and select image pairs with an overlap beyond 35%. Finally, 1 M training pairs and 1500 testing pairs are selected.
§.§ Protocols and Implementation Details of PCF-Net
Training Details.
To train the PCF-Net, we use both synthetic and real data. Synthetic data include DPED <cit.>, CityScapes <cit.>, ADE-20K <cit.>, and COCO <cit.>. Real data uses MegaDepth <cit.> and ScanNet <cit.> datasets.
We adopt VGG-16 <cit.> as the feature extractor backbone and GLU-Net <cit.> as the flow network. The input image pairs of VGG-16 are cropped to 520 × 520 during training. Our PCF-Net cascades separately after the last two layers of the GLU-Net (which corresponds to resolutions of 1/4 and 1/8). The parameters in Eq. (<ref>) are fixed to δ_+ = 1, δ_- = 11, and Δδ = 2. γ in Eq. (<ref>) is set to 0.03. The radius R in Eq. (<ref>) is set to 1.
Training has two stages. In the first stage, we employ the VGG-16 network pre-trained on ImageNet and the GLU-Net pre-trained on DPED-CityScape-ADE <cit.>, and PCF-Net is trained on synthetic data. Note that during the first training stage, the feature and flow backbones are frozen. In the second stage, we train our approach on a combination of sparse data from real scenes and dense synthetic data. In addition, we fine-tune the two backbones in this stage.
During the first stage of training on synthetic data only, we train for 130 epochs with a batch size of 16. The learning rate is initially set to 10^-4 and halved after 70 and 110 epochs. When fine-tuning on the combination of the real-scene and synthetic datasets, the batch size is reduced to 8, and we train for 150 epochs. The initial learning rate is fixed at 5×10^-5 and halved after 90 and 130 epochs. Our approach is implemented in PyTorch <cit.> and our networks are trained using the Adam optimizer <cit.> with a weight decay of 0.0004.
Evaluation Metrics. To evaluate the performance of correspondence selection, we report the Precision (P), Recall (R), and F-measure (F) as in <cit.>. We use two types of precision for different tasks. For consistency filtering <cit.>, P_epi represents the epipolar distance of the correspondences below 10^-4. For sparse feature matching <cit.>, P_proj indicates the reprojection error of correspondences lower than 5 pixels. To further measure the accuracy of pose estimation, we report the AUC of pose error at thresholds of 5^∘, 10^∘, and 20^∘.
Note that the AUC metric adopts the approximate AUC as in <cit.>.
To recover the pose of the camera, we calculate the essential matrix with findEssentialMat (the threshold is set to 0.001) implemented by OpenCV and RANSAC, followed by recoverPose.
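For reference, this pose recovery step can be sketched with the OpenCV API as follows, assuming the matched keypoints are already expressed in normalized camera coordinates (i.e., pre-multiplied by K^-1); the RANSAC confidence value is an assumption.

import cv2
import numpy as np

def recover_relative_pose(pts1, pts2, ransac_thresh=0.001):
    # pts1, pts2: (N, 2) float arrays of matched keypoints in normalized camera coordinates.
    E, inliers = cv2.findEssentialMat(pts1, pts2, np.eye(3),
                                      method=cv2.RANSAC, prob=0.99999, threshold=ransac_thresh)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, np.eye(3), mask=inliers)
    return R, t, inliers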
§.§ Results on Correspondence Tasks
§.§.§ Sparse Feature Matching
Sparse feature matching is often affected by the quality of the descriptors. In particular, coordinate representations are largely overlooked in this field. To demonstrate the superiority of PCFs for sparse feature matching, we built our baseline using the state-of-the-art network SuperGlue <cit.> and three representative descriptors: RootSIFT <cit.>, HardNet <cit.>, and SuperPoint <cit.>. We replace the original SuperGlue input coordinate vectors with our sparse PCFs (see Section <ref>). We report the performance on the PhotoTourism <cit.> and SUN3D <cit.> datasets. Note that SuperGlue <cit.> did not provide the official training code. Therefore, we have tried our best to reproduce the results of the original article by consulting the authors for a fair comparison.
Ablation Study. To understand the importance of PCF-Net and explore the appropriate implementation of PCFs, we first conduct an ablation study on the PhotoTourism dataset with RootSIFT and SuperPoint descriptors. The results are shown in Table <ref>:
1) The naive BCF encoded by PCF-Net consistently improves recall;
2) The combination of Cartesian coordinates and barycentric coordinates performs better than independently using each of them;
3) The confidence map contributes to a significant performance improvement (+2.6% at AUC@20^∘ and +12% at recall for RootSIFT), which implies that the reliability of geometric coherence appears to be more important than uninformed geometric invariance;
4) Compared with conventional positional encoding by MLP (see Section <ref>), the attentional mask works better in encoding geometric positional information.
In the following experiments, we adopt the last row of the sub-tables in Table <ref> as our baseline (CC+BCF+CM+Atten.).
Results.
Outdoor results are reported in Table <ref>. Our approach (PCF-Net + SuperGlue) improves pose estimation and matching accuracy with all three descriptors, especially with HardNet. Due to the patch-based nature of HardNet, SuperGlue can acquire global information only from the keypoint coordinates.
However, the poor performance of SuperGlue with Cartesian coordinates implies that Cartesian coordinates are not useful. By integrating reliable barycentric coordinates into SuperGlue, a significant improvement is observed in all metrics (+7.8% in precision, +14.1% in recall, and +11.5% in pose estimation).
These results clearly demonstrate the advantages of reliable coordinate representation and an appropriate encoding strategy in sparse feature matching.
We also remark that the improvements of PCF-Net in the indoor dataset (Table <ref>) are not as obvious as in outdoor scenes, because low-texture regions or repetitive patterns appear more frequently in indoor environments. But SuperGlue with PCFs still shows a certain degree of superiority over the baseline, especially in pose estimation with +3% in AUC@20^∘.
Particularly, SuperGlue with PCFs and the SuperPoint descriptor achieves competitive results with the detector-free method LoFTR <cit.>.
Visualizations of the correspondence results are shown in Fig. <ref>.
§.§.§ Dense Image Registration
Although CNNs implicitly encode position-sensitive features, dense image registration approaches typically use a 2D extension of sinusoidal positional encodings to extract positional information. However, such positional information can be ambiguous for an image pair.
Here, we further demonstrate the advantage of PCF in dense image registration. We choose the state-of-the-art detector-free network LoFTR <cit.> as our baseline. To integrate coordinate representations, we embed the PCFs into feature maps through 3 successive stride-2 convolutional layers with the kernel size of 3 × 3 (see Section <ref>). Following LoFTR, we use MegaDepth <cit.> and ScanNet <cit.> for the evaluation.
Results.
As shown in Table <ref>, compared to the baseline in both datasets with two types of differentiable matching layers, LoFTR with PCFs achieves better performance in pose estimation (on average +2.5% in AUC). The improved results again demonstrate the importance of geometrically invariant coordinate encoding.
Interestingly, when we remove the original position encoder from the baseline, the results are improved, suggesting that inappropriate positional representation can even degrade the performance in dense image registration.
§.§.§ Consistency Filtering
The standard input for consistency filtering is a 4-element vector that concatenates the coordinates of each correspondence. Intuitively, such a coordinate vector lacks global geometric information. Here, we explore the potential of PCFs in this task. In particular, we re-encode the PCFs into a 2D vector that characterizes the initial possibility that correspondences are inliers/outliers (see Section <ref>). This 2D vector is then concatenated with the standard coordinate input as the new representation for a consistency filter.
We choose the state-of-the-art outlier rejection network OANet <cit.> as the baseline. Following OANet, we report on performance on the Yahoo YFCC100M <cit.> and SUN3D datasets.
Results.
The results are listed in Tables <ref> and <ref>. In both datasets, PCFs achieve a clear improvement in outlier rejection and pose estimation. Even without learnable parameters for the encoded coordinates, our method still outperforms the baseline (on average +2.3% in AUC) with all descriptors. In particular, in the RootSIFT setting on the outdoor dataset, we observe a relative improvement of +11.2% in precision and +6.6% in recall. Our results imply that PCF-Net reliably encodes geometrically invariant coordinates between image pairs, and that geometric coherence is also crucial for consistency filtering. Visualizations of the correspondence results are shown in Fig. <ref>.
It should be noted that there are discrepancies in the results obtained by our reproduction of OANet using SuperPoint features and the results reported in Appendix of SuperGlue <cit.>. These discrepancies can be attributed to the following differences in the experimental settings used for SuperPoint feature extraction: 1) We set the non-maximum suppression (nms) parameter to 3 instead of 4, and 2) We directly extracted the SuperPoint descriptors without resizing the input images.
§.§ Comparison of Competing Probabilistic Models
To demonstrate the necessity of our probabilistic model, we compare our probabilistic coordinate model with a traditional probabilistic flow model, PDC-Net <cit.>. The differences in parameterization are summarized as follows: 1) fixed σ_+ vs. learnable σ_+; 2) the use of sigmoid instead of softmax to predict the constrained α; 3) Laplacian model (ℓ_1 loss) vs. Gaussian model (ℓ_2 loss). The reasons for our choices have been explained in Section <ref> and Section <ref>. Here, we conduct an ablation experiment to identify their respective contributions.
Experimental Details. Based on the probabilistic flow model of PDC-Net <cit.>, we replace the parameter settings of the original model in turn with our probabilistic coordinate model.
During the training stage, we only replace the probabilistic model and the corresponding loss function in PCF-Net; the other training settings remain the same. Note that here we only use one set of BCSs. Table <ref> reports the Intersection-over-Union (IoU) metric between the union of the different reliable regions and the ground-truth flow map on two validation datasets (MegaDepth and ScanNet). The large performance gaps indicate that our coordinate probabilistic model is better suited to this task. Specifically, substituting sigmoid for softmax to obtain the constrained α is crucial for the entire network.
We visualize the predicted parameters σ_+ and α_+ for both the baseline probabilistic model of PDC-Net <cit.>, which does not fit the distribution of coordinates directly, and our own model. As shown in Fig. <ref>, the predictions of the PDC-Net model are not meaningful for this task, while our model yields reasonable results.
Meanwhile, we observe an interesting result, that the Gaussian model (ℓ_2 loss) is more suitable for our task than the Laplacian model (ℓ_1 loss) (discussed in Section <ref>). Here, we visualize their own predicted confidence maps in Fig. <ref>. Clearly, the Gaussian model provides a sharper edge than the Laplacian model, which justifies our preference for the Gaussian model for probabilistic coordinate representations.
§.§ Additional Analysis
Our PCF-Net is built on GLU-Net <cit.>, which is a universal network architecture applicable to dense correspondence problems. To show the improvements brought about by our approach, we present a visual comparison of GLU-Net and our PCF-Net in Fig. <ref>. GLU-Net does not consider the uncertainty of predicted correspondences, which is problematic under large displacements and significant appearance transformations between image pairs (see the 3^rd column). Consequently, it cannot directly serve our probabilistic coordinate encoding task. By contrast, our PCF-Net integrates coordinate features and image features to estimate a confidence map, which can be used jointly for optical flow estimation and coordinate encoding. As illustrated in the last two columns of Fig. <ref>, our method achieves superior correspondence performance and more accurate confidence map estimation.
To further isolate the contributions of barycentric coordinates and reliable region prediction, we conduct an experiment on the MegaDepth dataset using LoFTR-OT <cit.> as the baseline. The positional encoding module in LoFTR-OT is replaced with the following three strategies: 1) PCF-Net without any of the modifications proposed in Section <ref>, using Cartesian coordinates (CC); 2) the full proposed PCF-Net without the distance map, using CC; 3) the full proposed PCF-Net using BCFs. Note that each PCF-Net variant is trained individually under its specific settings. The results are reported in Table <ref>, demonstrating that both the geometry-invariant barycentric coordinates and the proposed probabilistic model contribute to correspondences and pose estimation. It is worth noting that the original probabilistic model can even degrade the baseline performance, which highlights the necessity of the modifications proposed in our approach.
§.§ Extensions into Other Applications
Texture Transfer. Since our PCF-Net method is built on the flow field, the confidence map for coordinate encoding can be used to represent the uncertainty of flow prediction. Mathematically, the coordinate confidence map is a subset of the flow confidence map. To better illustrate the relationship between them, we compare our confidence results with PDC-Net <cit.>, the state-of-the-art network for predicting flow uncertainty.
When comparing the boundary differences between the predicted confidence maps and the GT, the prediction accuracy in the target contour varies between the approaches (as shown in Fig. <ref>).
Obviously, our approach generates a discriminative confidence map with a clear demarcation between regions. For a fair comparison, we train PDC-Net on the MegaDepth and ScanNet datasets following the original implementation. Interestingly, we observe that PDC-Net does not fit the probability distribution of indoor scenes well, while our approach works in different scenes.
Due to the potential of our approach to predict optical flow uncertainties and the sharp edges of approximate instance segmentation, our approach can be used to transfer texture between images. In Fig. <ref> we show the results using historical and modern images from the LTLL dataset <cit.>. Note that we did not resort to any pre-trained segmentation models. Perhaps our work may be useful for the task of segmentation.
Multi-homography Classification.
Normally, we apply a single homography matrix to align the pair of images. However, such an assumption is invalid for image pairs with strong 3D effects or large object displacements.
To relax this assumption, Multi-X <cit.> formulated the multi-class model fitting by energy minimization and model searching. Recently, some work <cit.> proposed iteratively reducing the threshold of RANSAC <cit.> to discover multiple homography candidates.
At each iteration, they can remove feature correspondences that are inliers for previous homographies (the threshold ϵ_g) as well as from locations within the matchability masks predicted previously and recompute RANSAC using a smaller threshold (ϵ_l < ϵ_g ). This procedure is repeated until the number of inliers is less than η. The image pairs are then divided into several nonoverlapping regions, each of which corresponds to a distinct homography transformation.
However, we find two drawbacks of such a strategy: 1) the selection of the threshold ϵ for each iteration is heuristic, and the results are sensitive to it; 2) when the number of input correspondences is large (e.g., a dense flow map), the time consumption of the algorithm is intolerable, as shown in Fig. <ref>. For example, for an image pair of 720 × 720 resolution, there are at least 200 K dense correspondences between them.
Fortunately, our approach is not plagued by operational efficiency or hyperparameters. As proposed in Section <ref>, we simply need to change the mask area from a circle of fixed radius (K-1)/2 to the predicted reliable area. The number of iterations is pre-set as in RANSAC-flow <cit.>. We visualize the results in Fig. <ref>. For multi-homography cases, our approach can successfully recognize them and give dense classification results. This may be helpful for image alignment or image stitching.
§.§ Inference Time
Our network and pipeline are implemented with PyTorch. Here we measure the inference time on a 24 GB NVIDIA RTX 3090 GPU and an Intel(R) Xeon(R) Gold 6226R CPU @2.90GHz. The image pairs are of size 520 × 520, which corresponds to the predetermined input resolution of GLU-Net. Table <ref> summarizes the runtime of each component stage, including the extraction of feature maps, the estimation of flow, the calculation of BCFs, the prediction of probabilistic parameters, and the calculation of PCFs.
Our pipeline takes 116 ms on average for inference. For dense correspondence tasks <cit.>, the extracted image feature maps can be shared with these methods to further save computational cost. The estimated flow map is not only used for our geometric-invariant coordinate encoding but also contributes to other tasks, such as image alignment and texture transfer. The obtained confidence map can also be applied as the uncertainty of the flow estimate and, furthermore, to identify multi-homography regions. Overall, our proposed pipeline can be applied to different tasks with a rapid inference time.
Furthermore, we also compare PCF-Net with other state-of-the-art sparse matching methods, including SuperGlue, SGMNet <cit.>, and ClusterGNN <cit.>. SGMNet and ClusterGNN both build on SuperGlue to further improve computational efficiency and matching performance. In particular, SGMNet establishes a small set of nodes to reduce the cost of attention. ClusterGNN uses K-means to construct local sub-graphs to save memory and computational overhead. We test the running time and memory of the different methods with a gradually increasing number of keypoints. As shown in Fig. <ref>(a), the runtime of sparse matching methods depends on the number of input keypoints. In particular, with 10 K keypoints, the runtime overhead of our method is negligible compared to SuperGlue. Fig. <ref>(b) reports memory occupation averaged by batch size.
§ CONCLUSIONS
In this work, we demonstrate the surprising performance of reliable and geometric-invariant coordinate representations for correspondence problems. Technically, we introduce the PCF and generate it using a PCF-Net network that jointly optimizes coordinate fields and confidence estimation. We show the effectiveness of PCF-Net in various problems and report highly consistent improved performance across multiple datasets. We believe that PCF-Net points out a novel direction for solving correspondence problems: learning reliable and geometric-invariant probabilistic coordinate representations. Future research directions include further optimization through cost aggregation <cit.> and graph matching <cit.>.
Weiyue Zhao
received the B.S. degree from Huazhong University of science and Technology, Wuhan, China, in 2020. He is currently pursuing the M.S. degree with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.
His research interests include computer vision and machine learning, with particular emphasis on image registration, multi-view stereo and various computer vision applications in video.
Hao Lu
received the Ph.D. degree from Huazhong University of Science and Technology, Wuhan, China, in 2018.
He was a Postdoctoral Fellow with the School of Computer Science, The University of Adelaide, Australia. He is currently an Associate Professor with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China. His research interest include various dense prediction problems in computer vision.
Xinyi Ye
received the B.S. degree from Huazhong University of science and Technology, Wuhan, China, in 2021. He is currently pursuing the M.S. degree with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.
His research interests include computer vision and machine learning, with particular emphasis on consistency filtering, multi-view stereo and inverse rendering for physics-based material editing and relighting.
Zhiguo Cao
received the B.S. and M.S. degrees in communication and information system from the University of Electronic Science and Technology of China, Chengdu, China, and the Ph.D. degree in pattern recognition and intelligent system from Huazhong University of Science and Technology, Wuhan, China.
He is currently a Professor with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China. He has authored dozens of papers at international journals and conferences, which have been applied to automatic observation system for object recognition in video surveillance system, for crop growth in agriculture and for weather phenomenon in meteorology based on computer vision. His research interests spread across image understanding and analysis, depth information extraction, and object detection.
Dr. Cao's projects have received provincial or ministerial level awards of Science and Technology Progress in China.
Xin Li
received the B.S. degree with highest honors in electronic engineering and information science from University of Science and Technology of China, Hefei, in 1996, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2000. He was a Member of
Technical Staff with Sharp Laboratories of America, Camas, WA from Aug. 2000 to Dec. 2002. Since Jan. 2003, he has been a faculty member in Lane Department of Computer Science and Electrical Engineering. His research interests include image and video processing, compute vision and computational neuroscience. Dr. Li was elected a Fellow of IEEE in 2017 for his contributions to image interpolation, restoration and compression.
|
http://arxiv.org/abs/2306.10700v1
|
20230619045832
|
Perturbation-Based Two-Stage Multi-Domain Active Learning
|
[
"Rui He",
"Zeyu Dai",
"Shan He",
"Ke Tang"
] |
cs.LG
|
[
"cs.LG"
] |
University of Birmingham
Birmingham
United Kingdom
Southern University of Science and Technology
Shenzhen
China
[email protected]
The Hong Kong Polytechnic University
Hong Kong
China
Southern University of Science and Technology
Shenzhen
China
[email protected]
University of Birmingham
Birmingham
United Kingdom
[email protected]
Southern University of Science and Technology
Shenzhen
China
[email protected]
In multi-domain learning (MDL) scenarios, high labeling effort is required due to the complexity of collecting data from various domains.
Active Learning (AL) presents an encouraging solution to this issue by annotating a smaller number of highly informative instances, thereby reducing the labeling effort.
Previous research has relied on conventional AL strategies for MDL scenarios, which underutilize the domain-shared information of each instance during the selection procedure.
To mitigate this issue, we propose a novel perturbation-based two-stage multi-domain active learning (P2S-MDAL) method incorporated into the well-regarded ASP-MTL model.
Specifically, P2S-MDAL involves allocating budgets for domains and establishing regions for diversity selection, which are further used to select the most cross-domain influential samples in each region.
A perturbation metric has been introduced to evaluate the robustness of the shared feature extractor of the model, facilitating the identification of potentially cross-domain influential samples.
Experiments are conducted on three real-world datasets, encompassing both texts and images.
The superior performance over conventional AL strategies shows the effectiveness of the proposed strategy.
Additionally, an ablation study has been carried out to demonstrate the validity of each component.
Finally, we outline several intriguing potential directions for future MDAL research, thus catalyzing the field's advancement.
[500]Computing methodologies Active learning settings
[500]Computing methodologies Transfer learning
[300]Computing methodologies Multi-task learning
Perturbation-Based Two-Stage Multi-Domain Active Learning
Ke Tang
July 31, 2023
=========================================================
§ INTRODUCTION
In practical applications, aggregating data from diverse sources is a common practice for accomplishing specific tasks.
These data sources, often called "domains," exhibit distinct distributions.
For instance, sentiment analysis may involve data collection from various social media platforms such as Twitter, Facebook, and Weibo.
In image classification, different styles of images <cit.>, including sketches, cartoons, art paintings, and camera photos, represent distinct domains.
Each domain possesses unique characteristics and contexts while containing sharable intertwined information.
If appropriately harnessed, this information could significantly improve the performance of machine learning models.
Multi-domain learning (MDL) <cit.> aims to simultaneously learn across various domains, leveraging shared knowledge to enhance overall performance.
Empirically, MDL outperforms both single-domain learning and joint learning across multiple domains <cit.> in many real-world applications.
However, high labeling effort represents a challenge in MDL, since data need to be collected from multiple domain experts.
Additionally, varying legal, privacy, and ethical requirements and annotation tools across domains make the multi-domain data collection process more arduous.
Therefore, it is crucial to minimize the labeling effort in MDL.
Active learning (AL) <cit.> presents a promising solution for reducing the labeling effort in machine learning tasks.
By iteratively selecting informative samples for annotation, AL achieves comparable performance to random selection while requiring significantly less labeling effort.
Several studies have explored the application of AL to reduce labeling effort in MDL, known as multi-domain active learning (MDAL) <cit.>.
Most works simply adapt conventional single-domain AL strategies to MDL models <cit.>, which leads to noticeable improvements.
They mix all the evaluations from different domains and select the ones with the highest evaluation score.
Nonetheless, solely applying single-domain AL to MDL models is suboptimal, as the domain-shared information remains underutilized in item selection.
Besides, the mixed scores from different domains are incomparable, potentially leading to biased selection.
To the best of our knowledge, no existing work has designed AL strategies specifically for MDL to tackle these issues.
Thus, a natural question arises: how can we design effective AL strategies tailored explicitly for MDL?
This paper proposes a novel perturbation-based two-stage multi-domain active learning (P2S-MDAL) strategy, which builds upon the classical and renowned MDL model, ASP-MTL <cit.>.
In the first stage, we allocate a budget to each domain to ensure a fair in-domain comparison and establish regions for diversity selection.
Subsequently, within each region, we further select the most cross-domain influential samples for annotation.
The influence evaluation is based on perturbations, a novel metric that assesses the robustness of the shared feature extractor of ASP-MTL.
The underlying intuition is that if a sample is more informative to the shared extractor, it will be more vulnerable to perturbations, i.e., the perturbed sample will likely have more distinct outputs compared to the original one.
Consequently, such examples, which are less learned by the shared feature extractor, could be more influential to all the domains.
Experimental results on three real-world datasets, encompassing texts and images, demonstrate the superiority of the proposed P2S-MDAL strategy over conventional AL strategies.
The main contributions of this paper are summarized as follows:
* We introduce a novel AL strategy called P2S-MDAL, the first strategy tailored for the MDL scenario upon the renowned ASP-MTL model. Experimental results indicate noticeable improvements over conventional AL strategies.
* We employ perturbations to evaluate the cross-domain influences of instances in AL. This perspective offers a fresh viewpoint in assessing the potential of individual instances.
* We highlight several intriguing research directions for MDAL.
§ PROBLEM FORMULATION
Similar to AL, MDAL can be formulated as a bi-level optimization problem, in which the number of labeled instances is minimized to reduce the annotation cost while the target loss is minimized to improve the overall performance.
Solving this bi-level optimization problem directly is challenging in practice.
Consequently, it is usually re-formulated as an iterative selection process.
Given K different data sources (domains) 𝒟 = {𝒟_1, 𝒟_2, …, 𝒟_K}, a set of data pools could be collected 𝒫 = {𝒫_1, 𝒫_2,…, 𝒫_K}.
The initial labeled and unlabeled data set can be written as ℒ_0 = {ℒ_0,1, ℒ_0,2, …, ℒ_0,K} and 𝒰_0 = {𝒰_0,1, 𝒰_0,2, …, 𝒰_0,K}, where 𝒫_k = ℒ_0,k⋃𝒰_0,k, and |ℒ_0,k|≪|𝒰_0,k| for a domain k.
MDAL aims to reduce the labeling cost by iteratively selecting informative instances from the unlabeled data set 𝒰_0 according to an AL acquisition strategy α.
First, a multi-domain model ℳ_0 is trained on the initial labeled data ℒ_0 and unlabeled data 𝒰_0.
Then, a batch of to-be-queried instances 𝒬_i with top-b acquisition utility is selected and annotated by an oracle in the i-th AL iteration:
𝒬_i = argmax_x ∈𝒰_i-1^bα (x, ℳ_i-1), |𝒬_i| = b
ℒ_i-1 and 𝒰_i-1 are then updated with the annotated selected batch 𝒬_i.
The model ℳ_i is trained on the updated data as follows:
ℳ_i = argmin_ℳLoss_ sup(ℳ; ℒ_i-1∪𝒬_i) + Ω (ℳ; 𝒫)
Loss_ sup denotes the supervised loss on the labeled data.
Ω(ℳ; 𝒫) denotes a designed loss on the whole set of data pools 𝒫 for capturing the common knowledge through ℳ.
The labeling process terminates once the labeling budget ℬ is exhausted or the desired performance has been reached.
Finally, the labeled set ℒ_i and the model ℳ_i at the final iteration are obtained as the outputs.
§ RELATED WORK
§.§ Multi-Domain Learning
Multi-domain learning <cit.> primarily focuses on performing a unified task across multiple domains simultaneously.
Existing MDL research centers around information sharing among domains while preserving domain-specific information, which could be achieved through model architectures.
The widely used architecture for addressing this challenge is the shared-private structure, originally utilized in domain adaptation problems <cit.>.
Adversarial shared-private model (ASP-MTL) <cit.> is the pioneering approach to employ the shared-private structure in the context of MDL, resulting in significant improvements over single-domain models.
ASP-MTL adopts adversarial learning to encourage domain-invariance in the shared feature extractor and domain-specificity in the private feature extractors.
Several subsequent works <cit.> have further enhanced the performance based on this share-private architecture.
§.§ Multi-Domain Active Learning
Only a limited number of studies directly relate to MDAL.
These works still employ conventional single-domain AL strategies on models trained on multiple domains.
For instance, Li et al. <cit.> first applied active learning with multiple support vector machines on concatenated features from each domain in the context of multi-domain sentiment classification.
He et al. <cit.> conducted a comprehensive comparative study of conventional active learning strategies on multiple neural-network-based MDL models, demonstrating improvements over random selection.
Some works have also applied active learning to multiple domains without considering information-sharing.
They either construct independent classifiers for each domain <cit.> or utilize a single model for all domains <cit.>.
In these existing works, information sharing is primarily considered during the model training process, while the selection process evaluates the informativeness of items solely within specific domains.
In other words, conventional AL strategies do not account for the potential impact of a sample on other domains.
§ METHODOLOGY
To address the limitations of conventional AL strategies, we propose a novel method P2S-MDAL for MDAL that evaluates the influence of samples on other domains.
P2S-MDAL follows a two-stage framework: selecting regions establishment and perturbation-based item evaluation.
The overall framework is illustrated in Figure <ref>.
§.§ Selecting Regions Establishment
To avoid the incomparability between sample evaluation scores from different domains, we constrain the score-based selection within each domain.
Thus, the budgets should be allocated to each domain in advance according to the influence of domains.
Here we take the total number of samples as an influence estimate.
Let n_k denote the number of samples in domain k, and B denote the total budget.
The budget allocation is calculated as follows:
B_k = (n_k/∑_i=1^K n_i) × B
Next, to ensure the diversity of the sample selection, the selection space of each domain is divided according to its allocated budget.
We employ the k-Means algorithm to divide the selection space into B_k regions for the k-th domain.
We utilize gradients E at the last layer of the model as embeddings.
Compared to the original feature space, the gradient space is more discriminative and better represents the sample influence on the current model <cit.>.
The division process could be written as follows:
{S_k,1, ⋯, S_k,B_k} = kMeans({E_k,1, E_k,2, ⋯, E_k,n_k},B_k)
where E_k,i represents the gradient of the i-th sample in domain k, and S_k,j represents the j-th cluster in domain k.
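A sketch of this first stage is given below; the gradient embeddings are assumed to be precomputed, and the rounding of per-domain budgets is simplified.

import numpy as np
from sklearn.cluster import KMeans

def establish_regions(grad_embeddings, total_budget):
    # grad_embeddings: list of K arrays, one per domain, each of shape (n_k, d) holding
    # last-layer gradient embeddings E_k of the unlabeled samples.
    sizes = np.array([len(e) for e in grad_embeddings], dtype=float)
    budgets = np.round(sizes / sizes.sum() * total_budget).astype(int)   # proportional allocation B_k
    regions = []
    for emb, b_k in zip(grad_embeddings, budgets):
        labels = KMeans(n_clusters=max(int(b_k), 1), n_init=10).fit_predict(emb)
        regions.append(labels)     # cluster (region) index of every sample in this domain
    return budgets, regions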
§.§ Domain Influence Estimation
Given the established regions for selection, we can evaluate the cross-domain influence of samples in each region, ensuring that the selected samples benefit not only the current domain but also other domains.
The sample with the highest evaluation score is selected from each region.
This evaluation is based on the characteristic of the ASP-MTL model, where a domain-shared feature extractor F_s(·) and domain-private feature extractors F_p_k(·) are combined to form the final feature representation.
Since the shared information is captured solely by the shared feature extractor, the cross-domain influence evaluation can be based on this component.
The intuition of our method is that if a sample is informative to the shared feature extractor, it will be informative to all domains.
In AL, a common approach to evaluating the informativeness of a sample is uncertainty measurement.
Motivated by previous works <cit.> that measure uncertainty by adversarial samples, we introduce a perturbation-based method to estimate the informativeness of each sample on the shared feature extractor.
If a sample is more informative to the shared feature extractor, it will be more vulnerable to perturbations, i.e., the perturbed sample will be more likely to be misclassified.
Consequently, such examples, which are less learned by the shared extractor, could be more influential to all the domains.
We sample perturbations from a Gaussian distribution δ∼𝒩(0, σ^2), and add them to the output of the shared feature extractor.
The perturbed output probability of an item in the k-th domain is denoted as:
Out_k(x, δ) = C_k((F_s(x) + δ) ⊕ F_p_k(x))
Here, ⊕ denotes the concatenation operation, and C_k represents the classifier for the k-th domain.
The distance between the original output and the perturbed output is used to evaluate the cross-domain informativeness of the sample.
The distance is calculated by the Kullback-Leibler divergence <cit.> between two distributions, which can be written as:
Score(x) = 𝔼_δ[Distance(Out_k(x), Out_k(x, δ))],
Distance(P,Q) = D_KL(P ∥ Q) = -∑_x ∈𝒳 P(x) log(Q(x)/P(x))
The score represents the expected output distance between the original and perturbed output distributions.
Empirically, the score is calculated by sampling multiple perturbations.
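A sketch of this perturbation-based scoring for one domain k of the ASP-MTL model is given below; the feature shapes and the classifier interface are assumptions, while the defaults follow the 20 samples and 0.01 amplitude used in our experiments.

import torch
import torch.nn.functional as F

def perturbation_score(shared_feat, private_feat, classifier, sigma=0.01, n_samples=20):
    # shared_feat = F_s(x), private_feat = F_p_k(x): (B, d) features; classifier = C_k.
    with torch.no_grad():
        p = F.softmax(classifier(torch.cat([shared_feat, private_feat], dim=1)), dim=1)
        score = torch.zeros(shared_feat.size(0), device=shared_feat.device)
        for _ in range(n_samples):
            delta = sigma * torch.randn_like(shared_feat)      # perturb only the shared feature
            q = F.softmax(classifier(torch.cat([shared_feat + delta, private_feat], dim=1)), dim=1)
            # D_KL(P || Q) = sum_x P(x) log(P(x) / Q(x))
            score += (p * (p.clamp_min(1e-9) / q.clamp_min(1e-9)).log()).sum(dim=1)
    return score / n_samples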
§ EXPERIMENTS
§.§ Research Questions
* Does P2S-MDAL improve the performance of the ASP-MTL model compared to conventional strategies?
* Does each stage of P2S-MDAL provide a positive effect?
* Is P2S-MDAL practical in terms of running time?
§.§ Experimental Setup
§.§.§ Dataset
Three popular multi-domain textual and image datasets are used in our experiments, namely Amazon <cit.>, COIL <cit.>, and FDUMTL <cit.>.
Amazon dataset consists of four textual domains, each containing two categories, with instances encoded to a vector representation of length 5000.
COIL dataset consists of two image domains, each containing twenty categories, with instances encoded to a vector representation of length 1024.
FDUMTL dataset comprises sixteen textual domains, each containing two categories, with raw texts utilizing word2vec embedding.
§.§.§ Model Implementation
ASP-MTL <cit.> is used with the proposed strategy.
On the COIL and Amazon datasets, a single-hidden-layer MLP with width 64 is used.
On the FDUMTL dataset, a CNN is used as the feature extractor with an output dimension of 128.
The classifier is a fully connected layer for all datasets.
The model is trained using an SGD optimizer with batch size 8.
§.§.§ AL settings & Evaluation
Five conventional single-domain AL strategies are selected for comparison.
Random is the simplest strategy, which randomly selects instances from each domain.
Best vs. Second Best (BvSB) <cit.>, as an uncertainty measurement, selects instances with the smallest difference in prediction probability between the most and second most likely classes.
Expected Gradient Length (EGL) <cit.> is designed for models that can be optimized by gradients.
The instances leading to the longest expected gradient length to the last fully connected layer will be selected.
Coreset <cit.> selects instances using a greedy furthest-first search conditioned on the currently labeled examples by using the middle representation.
Batch Active learning by Diverse Gradient Embeddings (BADGE) <cit.> calculates the gradients of the last fully connected layer.
A k-Means++ initialization is applied to the gradients to ensure the diversity of the selected batch.
In the selection process, for the Amazon and COIL datasets, 10% of the training set is annotated to initialize the model.
The total budget is 50% of the training set, and 5% of the training set is selected in each iteration.
For FDUMTL, the total budget is 30% of the training set.
For the proposed AL strategy, the perturbation is sampled 20 times with an amplitude of 0.01.
All the experiments are repeated 5 times with different random seeds to get an average performance.
The learning curves are used to evaluate the performance of the AL strategies, where the x-axis represents the number of selected instances, and the y-axis represents the accuracy of the model on the test set.
The area under learning curve (AULC) is also calculated.
§.§ RQ1: Performance Evaluation
A comparative analysis was undertaken between the proposed approach and five AL strategies, utilizing three datasets.
The results are shown in Figure <ref>-<ref>.
Notably, the performance of a single strategy can vary greatly across different datasets.
For instance, the proposed method and Coreset share the top tier in performance on the COIL dataset.
However, Coreset's performances on Amazon and FDUMTL datasets are suboptimal.
In contrast, P2S-MDAL consistently performs at an optimal level across all three datasets.
Besides, the FDUMTL dataset, with its sixteen domains, makes the selection problem considerably harder.
P2S-MDAL still outperforms the others on this dataset, further demonstrating its effectiveness.
§.§ RQ2: Ablation Study
First, the effectiveness of the proposed perturbation module is analyzed.
Using the same region-establishment process (first stage), different AL strategies (namely Center, BvSB, and EGL) are substituted in the second stage.
The performance, measured in terms of AULC, is depicted in Table <ref>.
Once again, P2S-MDAL comes out as the top performer, underscoring the efficacy of the perturbation module.
An ablation study was also carried out to further analyze the effectiveness of both stages.
Specifically, we removed the perturbation module (w/o perturb) and the region establishment module (w/o region) individually, and subsequently compared their performances.
The results of this study are illustrated in Figure <ref>.
With both components, P2S-MDAL achieves the best performance.
§.§ RQ3: Time Complexity Analysis
The time complexity is qualitatively analyzed in this section.
The time complexity for score-based methods (e.g. Uncertainty, EGL) typically falls in O(n), where n represents the quantity of unlabeled instances.
P2S-MDAL includes a scoring stage, so its time complexity is at least that of the score-based methods.
Distribution-based methods (e.g. BADGE, Coreset) generally have a high time complexity due to the requisite calculations of pairwise distances.
P2S-MDAL also calculates pairwise distances when establishing the selection regions, but this is done per domain and on a reduced number of items.
This leads to significantly shorter processing times compared with the distribution-based methods.
§ CONCLUSIONS AND FUTURE DIRECTIONS
This paper marks the first instance of a dedicated AL strategy designed to address the MDAL problem.
The proposed P2S-MDAL method is a two-stage approach: it first establishes selection regions in each domain to ensure sampling diversity, and then uses perturbations to select the most cross-domain-influential instances in each region.
The efficacy of the proposed method is substantiated by the evaluations on three separate datasets.
Furthermore, the ablation study and the perturbation analysis contribute additional proof of the effectiveness of each module in P2S-MDAL.
Looking toward the future, we anticipate that the budget allocation method could be enhanced by incorporating considerations of data distribution and domain difficulty.
With the feedback from either the training or validation set, the budget allocation could be adjusted to better fit the data.
Moreover, with some adaptations, the proposed perturbation-based cross-domain evaluation could be extended to other MDL models.
Furthermore, we envision the development of a unified MDAL method, where both the model and the strategy are designed within a single framework.
This requires the model training to incorporate the unique characteristics of the AL-selected items.
Meanwhile, the AL selection process could benefit from the explicitly designed structure of the model.
|
http://arxiv.org/abs/2306.02941v2
|
20230605150351
|
Gauge fields through the Big Bang
|
[
"Martina Adamo",
"Flavio Mercati"
] |
gr-qc
|
[
"gr-qc"
] |
Gauge fields through the Big Bang
Martina Adamo and Flavio Mercati
=================================
Recent studies have demonstrated the possibility to uphold classical determinism within gravitational singularities, showcasing the ability to uniquely extend Einstein's equations across the singularity in certain symmetry-reduced models. This extension can be achieved by allowing the orientation of spatial hypersurfaces to dynamically change. Furthermore, a crucial aspect of the analysis revolves around the formulation of the dynamical equations in terms of physical degrees of freedom, demonstrating their regularity at the singularity. Remarkably, singular behavior is found to be confined solely to the gauge/unphysical degrees of freedom. This paper extends these results to gravity coupled with Abelian and non-Abelian gauge fields in a symmetry-reduced model (homogeneous anisotropic universe). Near the Big Bang, the dynamics of the geometry and the gauge fields is reformulated in a way that shows that determinism is preserved, assuming a change in orientation at the singularity. The gauge fields are demonstrated to maintain their orientation throughout the singularity, indicating that the predicted orientation change of spatial hypersurfaces holds physical significance. This observation suggests that an observer can discern the specific side of the Big Bang they inhabit.
§ INTRODUCTION
General relativity (GR) predicts the existence of gravitational singularities: regions of the spacetime manifold where certain physical quantities become meaningless in a coordinate-independent way. In these regions, for example, some components of the stress-energy tensor may diverge, as well as some curvature invariants, or the geodesic equation may be singular (i.e., geodesic incompleteness, as predicted by the Penrose–Hawking singularity theorems <cit.>).
Currently, quantum gravity effects are considered the most promising approach to regularize gravitational singularities <cit.>, similar to how QED renders the energy of a point-like electric charge finite, thanks to the uncertainty principle <cit.>. However, spacetime singularities differ significantly from those in electromagnetism. Unlike the latter, spacetime singularities arise directly from the evolution (via Einstein's equations) of regular initial data. This makes them physical predictions of the theory, while the diverging energy of a point-like charge is a consequence of the idealization of a point particle, which is introduced manually into the initial conditions.
One of the most remarkable implications of gravitational singularities is the apparent breakdown of determinism. In Lorentzian field theories like GR, classical determinism refers to the ability to uniquely predict the values of the physical fields anywhere within a region of spacetime known as the causal diamond, provided that the initial values of these fields on some space-like region are given. The presence of a gravitational singularity appears to violate determinism, making it impossible to predict the values of the fields throughout a future causal cone originating from the singularity. The loss of predictability in GR around these regions can be summarized by Hawking's words: “One does not know what will come out of a singularity” <cit.>.
Nevertheless, recent works <cit.> have revealed the possibility of preserving determinism in certain symmetry-reduced models that exhibit Big Bang or black hole singularities. Ref. <cit.> proved that, under a homogeneous but not necessarily isotropic ansatz, it is possible to reformulate Einstein's equations in terms of a set of variables that satisfies a theorem of existence and uniqueness at the singularity. This result has been established in <cit.> for the initial singularity of the Bianchi-IX model, a homogeneous non-isotropic universe with an S^3 topology <cit.>, filled with stiff matter. The presence of this type of matter source is necessary to regularize the eternal chaotic dynamics that would otherwise occur as the singularity is approached. In the absence of stiff matter, the Bianchi-IX singularity exhibits Misner's mixmaster behavior <cit.>: as the singularity is approached, the spatial volume goes to zero, while the shape degrees of freedom (which measure the anisotropy of the spatial metric) oscillate chaotically. This intricate motion persists indefinitely in coordinate time, with the shape variables oscillating an infinite number of times before reaching the singularity. However, the singularity itself is reached within a finite amount of proper time: the behavior is then that of an essential singularity (analogous to lim_x→ 0sin(1/x )). This essential singularity prevents knowledge of the exact values of all the physical degrees of freedom at the singularity. The presence of stiff matter regularizes this behavior, ensuring that the system enters a final phase of quiescence, i.e., a non-chaotic anisotropic collapse described by the Bianchi-I (or Kasner) model <cit.>. This condition is indispensable for extending the solution of Einstein's equation through the singularity since each physical degree of freedom must admit a well-defined limit at the Big Bang.
It is possible to identify a set of physical variables that remain well-defined at the singularity,[See <cit.> for a reformulation of the dynamics of GR as a non-hamiltonian system based on these variables.] enabling the formulation of the equations of motion in a manner consistent with the Picard–Lindelöf theorem on the existence and uniqueness of solutions. This implies that the newly introduced regular variables continue to evolve uniquely through the singularity. As the singularity resides at the boundary of the configuration space, this suggests the need to extend the configuration space of GR. In <cit.>, this is achieved by allowing changes in the orientation of space. The interpretation of the regular variables beyond the singularity is as follows: they describe the geometry of spatial hypersurfaces with reversed orientations that lie beyond the Big Bang. Consequently, a “second universe” emerges from the singularity with an inverted spatial orientation.
In <cit.>, it was conjectured that the preservation of determinism is not limited to homogeneous models such as Bianchi IX but is a general characteristic of realistic Big Bang and black hole singularities. Firstly, quiescence, which was a key feature of the original result <cit.>, is not exclusive to models with a stiff matter source. As noted in <cit.>, the Starobinski model, which involves an effective action for gravity that includes the lowest-order quantum corrections to the Einstein–Hilbert action, also exhibits this characteristic (in addition to being the most promising candidate for explaining inflation <cit.>). Therefore, it can be said that pure (semiclassical) gravity alone, without any matter sources, tends towards a quiescent behavior.
Secondly, the assumption of homogeneity, which allowed the treatability of the models examined thus far, does not seem to be a prerequisite for the continuity results to hold. A strong evidence supporting this idea is provided by the Belinsky–Khalatnikov–Lifshitz (BKL) conjecture <cit.>, which states that as one approaches a space-like singularity, the time derivatives in Einstein's equations dominate over spatial derivatives, implying that the asymptotic dynamics is described by an (infinite) set of decoupled ordinary differential equations, one for each spatial point. These equations are identical to the equations of motion for a Bianchi-IX universe, and in quiescent models, they exhibit the continuation result under discussion. Interestingly, the BKL conjecture is essentially proven for universes with stiff matter sources <cit.>, providing strong indications that inhomogeneities will not change the result regarding continuation through singularities.
Furthermore, the models examined thus far have only included scalar matter fields, which, as we remarked, can be seen as an effective description of quantum gravitational degrees of freedom, in the case of Starobinski's model <cit.>. It is commonly understood that “matter does not matter” near a singularity <cit.>. In the vicinity of an isotropic FLRW solution, a simple scaling argument shows that the contributions to Friedman's equations arising from Standard Model matter (a^-3), radiation (a^-4), the cosmological constant (a^0), and spatial curvature (a^-2) are all suppressed compared to the contribution of anisotropic shear, which scales as a^-6 (where a denotes the FLRW scale factor). Here, by anisotropic shear, we refer to what we later refer to as shape kinetic energy, which represents the term analogous to kinetic energy associated with the change in anisotropy parameters (visualize a scenario in which we are in close proximity to an initially isotropic spatial metric that is gradually losing its isotropy). It should be noted that the only exception to this behavior is scalar fields, which contribute to the Friedman equations with terms that scale as a^-6. This intuition does not hold when we delve deep into the anisotropic regime <cit.>. If the pressure of matter sources becomes anisotropic, it can interact in a complex manner with the shape degrees of freedom, and it is not possible to demonstrate that matter or radiation decouples in the same way as in the isotropic regime. The motto “matter does not matter” does not hold in an anisotropic universe, therefore the continuability of Einstein's equations through the Big Bang needs to be proven separately in presence of matter or radiation fields.
In this paper, our focus is on radiation, specifically electromagnetic and Yang–Mills fields. We aim to provide a comprehensive analysis of their dynamics near a Big Bang singularity in a simplified model, namely, under the assumption of homogeneity. Our first objective is to rigorously prove that the radiation degrees of freedom truly decouple from the gravitational ones (in the sense that they disappear from the equations of motion of the latter), while being driven by their own evolution. This will be the first goal of this paper.
The second goal is to study how the gauge degrees of freedom evolve under the influence of the gravitational ones as we progress through the singularity. This question is intriguing because, although the orientation of spatial slices is reversed upon crossing the singularity, it is not evident whether this reversal can be observed, for example by means of parity-breaking tests like beta decay <cit.>. It remains uncertain whether such tests could determine the side of the singularity we find ourselves on. For this, we need to know what happens to gauge fields and fermions, whether for example their direction is flipped or not. This paper takes the first step towards addressing this question by analyzing the behavior of the gauge fields.
Our results suggest that gauge fields do not reverse their direction across the singularity, although we cannot prove this yet in a fully general context. Our analysis is restricted to homogeneous gauge fields, and furthermore, it applies in full generality only to Abelian gauge groups. In the non-Abelian case, our analysis is limited to a “one-dimensional” ansatz, meaning that both the gauge vector potential and its conjugate momentum are assumed to point in the same spatial direction throughout the evolution. The relaxation of these assumptions will be the focus of future investigations.
The paper is structured as follows: Section <ref> provides a review of the Hamiltonian formulation of the Einstein–Maxwell system. Sections <ref> and <ref> focus on the phase-space reduction to the homogeneous case, assuming a spatial topology of a three-sphere and the invariance under translations of both the metric and the gauge fields throughout the evolution. This ansatz is compatible with the Hamiltonian evolution and reduces the degrees of freedom to a finite set, whose equations of motion are ordinary differential equations in time.
Section <ref> introduces the Misner variables commonly used to discuss homogeneous universes with a three-sphere topology (Bianchi-IX models) and demonstrates the inevitability of the singularity even in the presence of gauge fields. In Section <ref>, a further simplification is introduced through the one-dimensional ansatz for the gauge fields discussed earlier. Under this ansatz, the continuation result can be (relatively) easily proven.
Section <ref> considers the relaxation of the one-dimensional ansatz for Abelian gauge fields and the continuation result is proven. The extension of this result to SU(2) and SU(3) gauge fields under the one-dimensional ansatz is detailed in Appendix <ref>. However, the relaxation of this ansatz in non-Abelian gauge theories goes beyond the scope of the present paper. Finally, in Section <ref>, we draw conclusions based on the knowledge gained thus far. In Table <ref> we summarize the notations used in the paper.
§ HAMILTONIAN FORMULATION OF EINSTEIN–MAXWELL THEORY
Our goal is to extend the model of <cit.> to the case of the Einstein–Maxwell system: GR minimally coupled with electromagnetism, whose action is given by
S = ∫ d^4 x √(- h) ( R - (1/4) h^μν h^ρσ F_μρ F_νσ ) ,
F_μν = ∂_μ A_ν - ∂_ν A_μ .
In the Arnowitt–Deser–Misner Hamiltonian formalism <cit.>, the four-dimensional metric h_μν is split into its spatial components g_ij, which serve as canonical variables, and four other fields: the lapse scalar N and the shift vector N^i,[The spatial metric, shift, and lapse are a two-tensor, a vector and a scalar field, respectively, under diffeomorphisms of the spatial hypersurface.] which are Lagrange multipliers because their time derivatives do not appear in the action. The relations between these quantities and the spacetime metric components are given by h_ij=g_ij, h_00 = -N^2+N_iN^i, and h_0i = h_i0 = N_i.
These quantities have the following physical interpretations: the spatial components g_ij represent the three-dimensional metric of equal-time hypersurfaces, the shift generates infinitesimal spatial translations along the hypersurfaces, and the lapse represents the proper time measured by observers moving orthogonally between neighboring hypersurfaces. The time derivatives of g_ij are replaced, through a Legendre transform, by the conjugate momenta π^ij=(√(g)/ 2N)( g^ik g^jl - g^ij g^kl)( ġ_kl - _N⃗ g_kl), where g^ij is the inverse matrix of g_ij, and _N⃗ is the Lie derivative w.r.t. the shift N^i.
The Hamiltonian decomposition of the electromagnetic action is similar to the familiar one in Minkowski spacetime: the canonical variables are the spatial components of the electromagnetic potential A_i, while the time component A_0 (which is the scalar potential) acts as a fifth Lagrange multiplier. The spatial components of the Faraday tensor are given by F_ij = ∂_i A_j - ∂_j A_i. Finally, the momenta canonically conjugate to A_i are the components of the electric field E^i=(√(g)/ N) g^ij( F_0j-N^k F_kj).
The following equal-time Poisson-bracket relations hold for conjugate pairs of variables:
{ g_ij (x) , π^kl(y) } = 12(δ^k_i δ^l_j+δ^l_i δ^k_j) δ^(3)(x-y) ,
{ A_i (x) ,E^j (y) } = δ^j_i δ^(3)(x-y) ,
while all other brackets are zero. The time evolution is given by the total Hamiltonian, which for our system is a linear combination of the following constraints:
ℋ[N] = ∫ d^3 x N ( (1/√(g)) ( π^ij π_ij - (1/2) π^2 + (1/2) g_ij E^i E^j ) + √(g) ( (1/4) g^ij g^kl F_ik F_jl - K ) ) ,
ℋ_i[N^i] = ∫ d^3 x N^i ( E^j F_ij - 2 g_ij ∇_k π^jk ) ,
𝒢[A_0] = - ∫ d^3 x A_0 ( ∇_i E^i ) ,
where K is the Ricci scalar and ∇_i is the covariant derivative, both w.r.t. the metric g_ij. These constraints are first-class, and close an extension of the so-called hypersurface deformation algebra <cit.> (or rather algebroid <cit.>). The first and the last lines in Eq. (<ref>) represent the Hamiltonian and Gauss constraints, which generate time evolution and electromagnetic gauge transformations respectively. The three constraints ℋ_i can be expressed (up to boundary terms) as a linear combination of Gauss and diffeomorphism constraints:
ℋ_i[N^i] = 𝒟_i[N^i] - 𝒢[ A_i N^i] + (boundary terms) ,
where:
𝒟_i[N^i] = ∫ d^3 x ( E^i ℒ_N⃗ A_i + π^ij ℒ_N⃗ g_ij ) .
§ HOMOGENEOUS ANSATZ
We now impose the homogeneous ansatz, which has been the starting point of previous works such as <cit.>. The most general spatially-homogeneous universe with the topology of a three-sphere is described by the Bianchi-IX model <cit.>. The spatial metric is assumed to have three independent Killing vectors that generate spatial translations (homogeneity). On S^3, coordinatized by the usual hyperspherical coordinates θ∈ [0,π], ϕ∈ [0,π], ψ∈ [0, 2π), it is possible to construct a basis of vector fields that are invariant under these translations, as well as a dual basis of one-forms:
{ χ_1 = -sinψ ∂_θ + (cosψ/sinθ) ∂_ϕ - cosψ cotθ ∂_ψ
χ_2 = - cosψ ∂_θ - (sinψ/sinθ) ∂_ϕ + sinψ cotθ ∂_ψ
χ_3 = ∂_ψ. ,
{ σ^1 = -sinψ d θ + cosψ sinθ d ϕ
σ^2 = -cosψ d θ - sinψ sinθ d ϕ
σ^3 = d ψ + cosθ d ϕ. ,
where the duality is given by
σ^a_i χ_a^j = δ^j_i , σ^a_i χ_b^i = δ^a_b .
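As a sanity check on the invariant frame above, the duality relations can be verified symbolically; the following sympy sketch is ours (not part of the paper) and uses the χ_a written above.

import sympy as sp

theta, phi, psi = sp.symbols('theta phi psi')

# components of the invariant one-forms sigma^a in the (d theta, d phi, d psi) basis
sigma = [[-sp.sin(psi),  sp.cos(psi)*sp.sin(theta), 0],
         [-sp.cos(psi), -sp.sin(psi)*sp.sin(theta), 0],
         [0,             sp.cos(theta),             1]]
# components of the invariant vector fields chi_a in the (d/d theta, d/d phi, d/d psi) basis
chi = [[-sp.sin(psi),  sp.cos(psi)/sp.sin(theta), -sp.cos(psi)*sp.cos(theta)/sp.sin(theta)],
       [-sp.cos(psi), -sp.sin(psi)/sp.sin(theta),  sp.sin(psi)*sp.cos(theta)/sp.sin(theta)],
       [0, 0, 1]]

for a in range(3):
    for b in range(3):
        pairing = sp.simplify(sum(sigma[a][i]*chi[b][i] for i in range(3)))
        assert pairing == (1 if a == b else 0)
print("sigma^a_i chi_b^i = delta^a_b verified")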
The most generic homogeneous (but not necessarily isotropic) metric on S^3 can be expressed as a quadratic form (with spatially constant coefficients) in this basis. By imposing the homogeneous ansatz also on the conjugate momenta p^ij (which is necessary to preserve the homogeneous ansatz for the metric under time evolution), as well as on the electromagnetic fields A_i and E^i, we obtain:
[ g_ij (t,x) = q_ab(t) σ^a_i(x) σ^b_j(x) ,
A_i (t,x) = A_a (t) σ^a_i(x) ,
π^ij (t,x) = p^ab(t) χ^i_a(x) χ^j_b(x) σ(x) ,
E^i (t,x) = E^a(t) χ^i_a(x) σ(x) ,; ; F_ij (t,x) = F_bc (t) σ^b_i (x) σ^c_j (x) =
-A_a (t) δ^ad ε_dbc σ^b_i (x) σ^c_j (x) . ]
Notice that the conjugate momenta p^ij and E^i require a term σ (x)=sinθ to ensure the correct transformation behavior under diffeomorphisms (that of a tensor density of weight +1). In this basis, all these tensor fields have homogeneous components: q_ab, p^ab, A_a, E^a, with only a time dependence. Under this ansatz, all the constraints (each of which constrains one degree of freedom per spatial point) can be smeared over arbitrary functions and become the following global constraints:
ℋ[N] = n ( p^ab p^cd q_bc q_da - (1/2) (p^ab q_ab)^2 + q_ab q_cd δ^bc δ^da - (1/2) ( q_ab δ^ab )^2
+ (1/2) q_ab E^a E^b + (1/4) q q^ab q^cd F_ac F_bd ) ,
ℋ_i[N^i] = n^d ( E^a F_da + 2 p^ab q_ac ε_bdf δ^fc) = 𝒟_i [N^i] ,
𝒢[φ] =0 ,
where:
n = 1√( q)∫d θ dϕ d ψ sinθ N(x) ,
n^a = ∫ d θ dϕ d ψ sinθ σ^a_i(x) N^i(x) ,
are four leftover Lagrange multipliers (the spatial averages of the lapse and shift).
Notice that the Gauss constraint is automatically solved by the homogeneous ansatz.
The homogeneous ansatz is dynamically consistent, i.e., it is preserved by the evolution <cit.>, as can be verified by substituting the ansatz into the RHS of the Einstein equations.
§ SOLVING THE DIFFEOMORPHISM CONSTRAINTS
In order to eliminate the non-physical degrees of freedom, we need to gauge-fix the three diffeomorphism constraints using Dirac's procedure for constrained Hamiltonian systems <cit.>. The diffeomorphism generators are:
{ξ_1 = 𝒟_1 = 2 ( p^13 q_12 - p^12 q_13 + p^23 q_2 - p^2 q_23 + p^3 q_23 - p^23 q_3 ) + E^3 A_2 - E^2 A_3 ≈ 0 ,
ξ_2=𝒟_2 = 2 ( p^1 q_13 - p^13 q_1 + p^12 q_23 - p^23 q_12 + p^13 q_3 - p^3 q_13) + E^1 A_3 - E^3 A_1 ≈ 0 ,
ξ_3=𝒟_3 = 2 ( p^2 q_12 - p^12 q_2 + p^23 q_13 - p^13 q_2 + p^12 q_1 - p^1 q_12) + E^2 A_1 - E^1 A_2 ≈ 0 .
.
A suitable choice <cit.> for the gauge-fixing constraints is:
ξ_4 = q_23≈ 0 ,
ξ_5 = q_13≈ 0 ,
ξ_6 = q_12≈ 0 .
These gauge-fixing constraints are second-class w.r.t. ξ_1, ξ_2, ξ_3 everywhere except at the three planes of symmetry q_1 = q_2, q_2 = q_3 and q_3 = q_1. Except for a measure-zero set of solutions that takes entirely place on these planes, all solutions that intersect these planes can be uniquely continued through them by making a different local choice of gauge-fixing. We can solve the diffeomorphism constraints w.r.t. the non-diagonal components of p^ab, and this choice is regular everywhere away from the three symmetry planes:
{
p^23 = (E^2 A_3 - E^3 A_2)/( 2 ( q_2 - q_3 ) ) ,
p^13 = (E^3 A_1 - E^1 A_3)/( 2 ( q_3 - q_1 ) ) ,
p^12 = (E^1 A_2 - E^2 A_1)/( 2 ( q_1 - q_2 ) ) .
.
Using the six second-class constraints ξ_α, α=1,…,6, we can construct the Dirac matrix, which, when evaluated on the constraints hypersurface in the space of solutions of the system (i.e., on-shell), reads:
C_αβ =
{ξ_α , ξ_β}≈(
[ 0 M; -M 0 ]) ,
where M=diag(q_3-q_2,q_1-q_3,q_2-q_1). The inverse Dirac matrix is then simply:
(C^-1)^αβ≈(
[ 0 M^-1; -M^-1 0 ]) .
Therefore, the Dirac bracket:
{ f , g}_* = { f , g} - { f , ξ_α}(C^-1)^αβ{ξ_β , g} ,
is canonical on the diagonal components of the metric and their momenta, and on the three components of the electromagnetic potential and their momenta:
{ q_a , p^b }_* = δ^b_a , {A_a , E^b }_* = δ^b_a ,
and all the other Dirac brackets are zero.
We started with a system described by eighteen (not all physical) degrees of freedom: six components of the symmetric three-dimensional metric q_ab, six components of metric momenta p^ab, three components of the electromagnetic potential A_a, and three of the electromagnetic momenta E_a (the electric field). After the diffeomorphism gauge-fixing, we fixed six degrees of freedom: the three off-diagonal components of the metric q_12, q_23, q_13 (which are set to zero), and the three off-diagonal components of the metric momenta p^12, p^23, p^13 (which are now functions of all the other variables). Therefore, we end up with twelve degrees of freedom. Of these, ten are truly physical, in the sense that they are the minimum number of independent variables necessary to uniquely determine a solution. The remaining two are constrained by the Hamiltonian constraint (first equation in (<ref>)) and its gauge-fixing (i.e., the fact that we can freely choose initial conditions among the different points of the solution curve without changing the solution itself).
The equations of motion generated by the Dirac bracket are the canonical equations of motion obtained from the on-shell Hamiltonian:
ℋ[N] = n [ ℋ_BIX + q_2 q_3 (M_1)^2/(2(q_2-q_3)^2) + q_1 q_3 (M_2)^2/(2(q_1-q_3)^2) + q_1 q_2 (M_3)^2/(2(q_1-q_2)^2)
+ (1/2) q_1 ((E^1)^2 + (A_1)^2 ) + (1/2) q_2 ((E^2)^2 + (A_2)^2 ) + (1/2) q_3 ((E^3)^2 + (A_3)^2 )
] ,
where we called q_11 = q_1, q_22 = q_2, q_33 = q_3, p^11 = p^1, p^22 = p^2, p^33 = p^3, and:
M_1 = E^2 A_3 - E^3 A_2 ,
M_2 = E^3 A_1 - E^1 A_3 ,
M_3= E^1 A_2 - E^2 A_1 ,
and where:
ℋ_BIX = (p^1 q_1)^2+(p^2 q_2)^2+(p^3 q_3)^2-(p^3 q_3+p^2 q_2+p^1 q_1)^2/2
+ q_1^2+q_2^2+q_3^2-(q_1+q_2+q_3)^2/2 ,
is the Hamiltonian constraint of an empty Bianchi-IX universe <cit.>.
§ INEVITABILITY OF COLLAPSE
With the following canonical transformation:
{ q_1 = a_0^2 exp( (x^0 - √(3) x^1 + x^2)/√(3) ) ,
q_2 = a_0^2 exp( (x^0 + √(3) x^1 + x^2)/√(3) ) ,
q_3 = a_0^2 exp( (x^0 - 2 x^2)/√(3) ) ,
.
{ p^1 = a_0^-2 ( (k_2 - √(3) k_1 + 2 k_0)/(2√(3)) ) exp( -(x^0 - √(3) x^1 + x^2)/√(3) ) ,
p^2 = a_0^-2 ( (k_2 + √(3) k_1 + 2 k_0)/(2√(3)) ) exp( -(x^0 + √(3) x^1 + x^2)/√(3) ) ,
p^3 = a_0^-2 ( (k_0 - k_2)/√(3) ) exp( -(x^0 - 2 x^2)/√(3) ) ,
.
where a_0 is a dimensional constant (a reference scale), the Hamiltonian constraint takes the form of a diagonal quadratic kinetic term for the metric variables k_a, plus a potential-like term that depends on the metric and electromagnetic variables:
ℋ[N] = n [ (1/2) ( - k_0^2 + k_1^2 + k_2^2 ) + (1/2) U(x,A,E) ] ,
U(x,A,E) = a_0^4 e^{2x^0/√(3)} C(x^1,x^2) + a_0^2 e^{x^0/√(3)} V(x^1,x^2,A,E) + W(x^1,x^2,A,E) .
In the previous equation:
C(x^1,x^2) = e^{-2x^1 + (2/√(3)) x^2} + e^{2x^1 + (2/√(3)) x^2} + e^{-(4/√(3)) x^2}
- 2 ( e^{(2/√(3)) x^2} + e^{-x^1 - (1/√(3)) x^2} + e^{x^1 - (1/√(3)) x^2} ) ,
is the Bianchi-IX potential <cit.>, and
V(x^1,x^2,A,E) = e^-x^1+x^2√(3)( (E^1)^2 + (A_1)^2 )
+ e^x^1+x^2/√(3)( (E^2)^2 + (A_2)^2 ) + e^-2x^2/√(3)( (E^3)^2 + (A_3)^2 ) ,
W(x^1,x^2,A,E) = e^x^1+√(3)x^2(M_1)^2/(e^x^1+√(3)x^2-1)^2 + e^x^1+√(3)x^2(M_2)^2/(e^x^1-e^√(3)x^2)^2 + e^2x^1(M_3)^2/(e^2x^1-1)^2 ,
are two new contributions depending on the electromagnetic field.
The variables x^1 and x^2 represent the shape degrees of freedom, which quantify the anisotropy of the spatial metric. Their conjugate momenta, k_1 and k_2, correspond to the rate of change of these anisotropies. The variable x^0 and its conjugate momentum k_0 are related to the volume of the universe v and its conjugate momentum τ, known as the York time. These relationships are expressed as follows:
v = a_0^3 e^√(3)/2 x^0 , τ = 2√(3) a_0^-3 e^-√(3)/2 x^0 k_0 .
The variable τ, which is named after James York <cit.>, is associated with a specific foliation of spacetime known as CMC (constant-mean extrinsic curvature). In this foliation, the initial-value problem can be formulated as a system of elliptic equations, whose solution exists and is unique. In a cosmological setting, τ is proportional to (minus) the Hubble parameter <cit.>. The adoption of this foliation is motivated by the fact that the physical degrees of freedom of GR are spatial-conformal invariants, leading to the proposal of reformulating GR as a three-dimensional conformal field theory known as Shape Dynamics <cit.>. The concepts discussed in the present paper, such as the identification of the shape degrees of freedom as the physical ones, are compatible with the principles of Shape Dynamics, although it does not rely on the shape dynamical interpretation of GR. Therefore, one can view this paper as a result in Shape Dynamics or as an entirely independent result within Hamiltonian GR.
Consider now the equations of motion for x^0 and k_0:
ẋ^0 = - n k_0 , k̇_0 = - n ( (1/√(3)) a_0^4 e^{2x^0/√(3)} C + (1/(2√(3))) a_0^2 e^{x^0/√(3)} V ) ,
and assuming, without loss of generality, n=1 and a_0=1, we can use these equations to calculate the second time derivative of the quantity exp(- x^0/ √(3)), which is a certain power of the volume:
d^2/dt^2 ( e^{-x^0/√(3)} ) = (1/√(3)) d/dt ( e^{-x^0/√(3)} k_0 ) =
- (1/3) e^{-x^0/√(3)} ( -k_0^2 + e^{2x^0/√(3)} C ) - (1/6) V
≈ (1/3) e^{-x^0/√(3)} ( k_1^2 + k_2^2 + W ) + (1/6) V ,
where we used the Hamiltonian constraint in the last step. The RHS is non-negative, because both V and W are positive-definite. Thus, we have proven that the quantity exp( - x^0 / √(3)) is concave upwards. Consequently, it will monotonically decrease for half of each solution, reaching a single minimum (which may potentially be infinitely far in time, resulting in strictly increasing or decreasing behavior), and then it will monotonically increase for the rest of the solution.
The volume, given by the (square root of the) inverse of this quantity, generally will monotonically increase for half of each solution, will reach a maximum, and subsequently will monotonically decrease to zero. As remarked above, there may also exist degenerate solutions that undergo either monotonic growth or shrinking, reaching maximal expansion only as t→+ ∞ (or t→ -∞) while exhibiting a single Big Bang singularity as t→ -∞ (or t→ +∞). Our focus in this paper is solely on the behavior of the system near one singularity, while the behavior far away from it, where matter fields, cosmological constant terms, and inhomogeneities dominate the dynamics, does not concern us.
The results above represent a cosmological reformulation of the Penrose–Hawking singularity theorems <cit.>. According to these theorems, once a solution begins to collapse, it cannot be halted and will continue to shrink until it reaches a singularity. It is important to note that, although the Big Bang is only reached as t→±∞, this does not mean that it is in the infinite future (or past). In fact, a finite amount of proper time elapses between any finite value of t and t→±∞, as proved in <cit.>.
§ ONE-DIMENSIONAL ANSATZ
In this section, we consider a simpler but illustrative case in which the electromagnetic field has only one spatial component:
A_1 =A_2 = 0 , E^1 = E^2 = 0 .
These conditions are preserved by the equations of motion:
{ A_1 , ℋ[N] }_* |_{A_1=A_2=0, E^1=E^2=0} ≈ 0 , { A_2 , ℋ[N] }_* |_{A_1=A_2=0, E^1=E^2=0} ≈ 0 ,
{ E^1 , ℋ[N] }_* |_{A_1=A_2=0, E^1=E^2=0} ≈ 0 , { E^2 , ℋ[N] }_* |_{A_1=A_2=0, E^1=E^2=0} ≈ 0 ,
{ A_1 , 𝒢[φ] }_* |_{A_1=A_2=0, E^1=E^2=0} = 0 , { A_2 , 𝒢[φ] }_* |_{A_1=A_2=0, E^1=E^2=0} = 0 ,
{ E^1 , 𝒢[φ] }_* |_{A_1=A_2=0, E^1=E^2=0} = 0 , { E^2 , 𝒢[φ] }_* |_{A_1=A_2=0, E^1=E^2=0} = 0 .
In our ansatz, we arbitrarily chose to keep the third component as the non-zero one, but this choice does not make the model lose any generality. In fact, if we were to choose the first or second component of A_a and E^a as the non-zero one in our ansatz, the dynamics would remain identical, with the labels for the first, second and third components permuted accordingly. This can be proven by performing a simultaneous reflection transformation, such as q_1 → q_2, q_3 → q_1, q_2 → q_3 and likewise for p^a. This transformation does not change the Bianchi-IX part of the Hamiltonian constraint: this is due to a discrete symmetry of our system, which remains invariant under the aforementioned reflection transformations.
The Hamiltonian constraint in Eq. (<ref>) now reads:
ℋ[N] = ℋ[N] |_{A_1=A_2=0, E^1=E^2=0} = n [ ℋ_BIX + (1/2) q_3 ( (E^3)^2 + (A_3)^2 ) ] .
If we now consider the following quantity:
H^1D_HO = (E^3)^2 + (A_3)^2 ,
it is immediate to prove that it is conserved, because it is first-class w.r.t. the Hamiltonian constraint:
{ H^1D_HO , ℋ[N] }_* ≈ 0 ,
therefore, we can assign a constant of motion ε to it, which will remain unchanged along the whole solution. As a result, the geometric degrees of freedom will evolve according to the following effective Hamiltonian constraint:
ℋ_eff[N] = n [ ℋ_BIX + (1/2) q_3 ε ] .
We can combine the new term with the potential term present in ℋ_BIX. For any finite value of ε, we can describe the dynamics of the geometrical degrees of freedom as being controlled by an effective potential given by:
U_1D = q_1^2 + q_2^2 + q_3^2 - (q_1+q_2+q_3)^2/2
+ (1/2) q_3 ε .
Now let us demonstrate that the additional term in the potential does not alter the result regarding the continuation through the singularity.
Quiescence is unchanged.
The structure of the Hamiltonian constraint resembles that of Bianchi IX, although with a deformed potential. The solutions exhibit similar characteristics to those of Bianchi IX: stretches of inertial motion known as Kasner epochs when the potential term is negligible, interrupted by brief quasi-elastic bounces referred to as Taub transitions that dissipate some of the shape momenta k_1, k_2 <cit.>.
During a Kasner epoch, the dynamics can be well approximated by that of a free particle. In these phases, both the shape degrees of freedom x^i and the scale degree of freedom x^0 evolve linearly w.r.t. parameter time t, so the spatial volume v decreases exponentially. Notice that the proper time s measured by a comoving observer is exponentially related to t (s is proportional to the t-integral of the volume). Therefore, if a Kasner epoch were to extend all the way to the singularity at t → +∞, only a finite amount of proper time would have elapsed <cit.>. Conversely, in a Taub transition, the configuration point bounces against the Bianchi-IX potential, leading to rapid changes in certain shape variables (such as the direction of motion in configuration space), while x^0 undergoes rapid changes in speed (i.e.,its conjugate momentum varies rapidly), but not significantly in magnitude. The resulting motion for x^0 is that of a segmented curve, with periods of straight-line motion separated by rapid changes in slope. Proper time remains finite all the way to the singularity. The Big Bang is thus reached within a finite amount of proper time, but the system undergoes an infinite number of Taub transitions. This chaotic behavior prevents certain degrees of freedom from having a well-defined limit at the singularity.
For instance, consider the angular variable of the polar coordinates of the (x^1,x^2) plane. If a Kasner epoch were to extend to the singularity, this variable would settle into a limiting value. However, each Taub transition makes it change again. If we were to plot its value against proper time, near the singularity it would resemble the function sin (1/x) as x → 0, exhibiting an essential singularity. Consequently, it is impossible to determine the specific value this variable takes at the Big Bang. This prevents any attempt to continue these solutions through it <cit.>.
If we introduce a scalar field, we can induce a state of quiescence, meaning that the chaotic behavior stops after a finite number of Taub bounces, and the solution settles onto a last Kasner epoch lasting all the way to the Big Bang. However, the additional term in the potential (<ref>) could, in principle, change the conditions for quiescence. This is not the case for potential terms that are polynomial in the metric components q_1, q_2, q_3, as the one we have under the one-dimensional ansatz. This is easily proven by considering the following scenario: let us assume that we begin during a Kasner epoch. The solution takes the following form:
x^α (t) = η^αβk_β t + x^α (0) ,
k_α (t) = v_α ,
v_0 = +√( (v_1)^2 + (v_2)^2 ) ,
where η^αβ=diag(-1,1,1), α,β=0,1,2. The plus sign in the dispersion relation for the integration constants v_α has been chosen so that the Big Bang singularity occurs at t →+ ∞. Replacing the solution into the metric components (<ref>), we obtain:
q_1(t) = a_0^2 exp( (x^0(t) - √(3) x^1(t) + x^2(t))/√(3) ) ∝ exp( - (√((v_1)^2+(v_2)^2) + √(3) v_1 - v_2) t/√(3) ) = e^{- ρ_1 t} ,
q_2(t) = a_0^2 exp( (x^0(t) + √(3) x^1(t) + x^2(t))/√(3) ) ∝ exp( - (√((v_1)^2+(v_2)^2) - √(3) v_1 - v_2) t/√(3) ) = e^{- ρ_2 t} ,
q_3(t) = a_0^2 exp( (x^0(t) - 2 x^2(t))/√(3) ) ∝ exp( - (√((v_1)^2+(v_2)^2) + 2 v_2) t/√(3) ) = e^{- ρ_3 t} .
In polar coordinates (v_1,v_2) = |v⃗| (cosφ , sinφ ), the three coefficients ρ_a appearing in the equations above can be expressed as follows:
{ ρ_1 = (|v⃗|/√(3)) (1 + √(3) cosφ - sinφ) ,
ρ_2 = (|v⃗|/√(3)) (1 - √(3) cosφ - sinφ) ,
ρ_3 = (|v⃗|/√(3)) (1 + 2 sinφ) ,
.
and, for any value of φ, one of the coefficients ρ_a is always negative (except for the three special directions along the symmetry axes of the potential, φ=π/2,7π/6,11 π/6, which, however, only concern a measure-zero set of solutions). This can be observed in Fig. <ref>.
Now, if we introduce a homogeneous scalar field without mass nor potential, the Hamiltonian constraint (<ref>) changes into:
ℋ_eff[N] = n [ ℋ_BIX + (1/2) q_3 ε + (1/2) k_3^2 ] ,
where k_3 is the conjugate momentum to a homogeneous scalar field, which we call x^3. A Kasner epoch in this case looks exactly the same, with the difference that the dispersion relation appearing in (<ref>) now looks like:
v_0 = √( (v_1)^2 + (v_2)^2 + (v_3)^2 ) ,
where the constant of motion v_3 is the (conserved) value of k_3. Now, Eqs. (<ref>) take the same form, except that the ρ_a coefficients change into:
{ ρ_1(w) = (|v⃗|/√(3)) (w + √(3) cosφ - sinφ) ,
ρ_2(w) = (|v⃗|/√(3)) (w - √(3) cosφ - sinφ) ,
ρ_3(w) = (|v⃗|/√(3)) (w + 2 sinφ) ,
. w = √( 1 + (v_3)^2/( (v_1)^2 + (v_2)^2 ) ) .
The parameter w takes value 1 when v_3=0 (no scalar field), and w>1 when v_3 ≠ 0. Each Taub transition ends in a new Kasner epoch with a lower value of (v_1)^2 + (v_2)^2 (see <cit.> for the proof), so the parameter w progressively grows larger after each bounce. When it reaches values equal to or larger than w=2, all the ρ_a(w) functions become positive everywhere. We reach a situation in which all the terms in any potential that is polynomial in q_a can only decrease with time. The solution settles with increasing accuracy around a single Kasner epoch all the way to the singularity, without further Taub bounces.
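These sign statements are easy to check numerically; the following numpy sketch (ours, with |v⃗| set to 1 since it only rescales the exponents) scans the directions φ for w = 1 and w = 2.

import numpy as np

def rho(w, phi):
    # Kasner exponents rho_a(w) defined above, with |v| set to 1
    return np.array([w + np.sqrt(3)*np.cos(phi) - np.sin(phi),
                     w - np.sqrt(3)*np.cos(phi) - np.sin(phi),
                     w + 2.0*np.sin(phi)]) / np.sqrt(3)

phis = np.linspace(0.0, 2.0*np.pi, 100001)
min_w1 = np.array([rho(1.0, p).min() for p in phis])
min_w2 = np.array([rho(2.0, p).min() for p in phis])
print("w = 1: fraction of directions with a negative exponent:", np.mean(min_w1 < -1e-9))
print("w = 2: most negative exponent over all directions:", min_w2.min())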
As we mentioned earlier, the effective potential of the one-dimensional model (<ref>) is polynomial in q_a (it includes quadratic terms from the Bianchi-IX part and a linear term in q_3). Therefore, the conditions for quiescence remains completely unchanged.
However, it is important to notice that the polynomiality of the potential is not guaranteed in general. From Eq. (<ref>), we can observe that in the general case where the electromagnetic field has more than one spatial component, there are non-polynomial terms, such as q_1q_2 / (q_1-q_2), and so on.
Continuing the dynamics through the singularity.
In the previous Paragraph, we have proved that the presence of a one-dimensional electromagnetic field does not alter the quiescent behavior as the system approaches the Big Bang. This provides the foundation for extending the continuation result of <cit.> to GR minimally coupled with electromagnetism under the one-dimensional ansatz.
It is important to note that the variables x^0, x^1, x^2, x^3, k_0, k_1, k_2, k_3 are not a suitable set for describing the system at the Big Bang. For example, the singularity is located at the boundary of the (x^1,x^2) plane, where (x^1)^2 + (x^2)^2 →∞. Therefore, when expressed in terms of x^0,k_0, …, x^3,k_3, the solutions become degenerate at the Big Bang. The values of certain variables (such as (x^1)^2 + (x^2)^2) at the singularity do not depend on the choice of initial values and, in this sense, they are not predictive. However, we can demonstrate that this loss of predictability at the Big Bang is coordinate-dependent. It is possible to find a sufficiently large number of variables that tend to finite nontrivial limits at the Big Bang, and at the same time possess the property that specifying their values at any instant, including at the singularity, uniquely determines the solution.
Specifically, we can demonstrate that the equations of motion in these variables form an autonomous set of ordinary differential equations (ODEs) that are regular at the Big Bang. This means that the RHSs of the equations of motion tend to finite limits, as do their first derivatives. At the singularity, these equations satisfy the conditions required by the Picard-Lindelöf theorem of existence and uniqueness of solutions of ODEs <cit.>. Thus, it is possible to set an initial value problem at the Big Bang that has a unique solution. Consequently, the Big Bang is not necessarily a region where determinism fails, as no information about the dynamical system is lost there.
If the singularity is a region where the existence and uniqueness theorem holds, a unique solution should depart from any of its points, in two directions. One direction leads to the interior of the configuration space we used so far. However, it is not clear at this point where the other direction should lead. In fact, the singularity lies at the boundary of the configuration space, and we need to extend this space in order to discuss the fate of the solutions that reach the Big Bang. The aforementioned regular variables enable us to achieve such an extension in a natural manner: the metric and scalar variables x^1, x^2, x^3 are related to the three regular variables β, θ, φ through a gnomonic map:
{ x^1 = |tanβ| sinθ cosφ ,
x^2 = |tanβ| sinθ sinφ ,
x^3 = |tanβ| cosθ ,
.
where β,θ∈ [0, π], φ∈ [0,2π) are hyperspherical coordinates on a three-sphere. These coordinates project the configuration space (x^1,x^2,x^3) onto a hemisphere of a three-sphere. The gnomonic map defines a double cover of an N-dimensional plane by an N-sphere (see Fig. <ref>), in which each hemisphere is mapped to a (x^1,x^2,x^3) hyperplane, extending the original configuration space into two copies of itself. Physically, we interpret each hyperplane as the configuration space of a three-geometry (plus a scalar field) with a different spatial orientation that flips upon crossing the boundary of the two hyperplanes (the equator of the three-sphere) <cit.>. This implies that a universe approaching the singularity with a certain spatial orientation will collapse at the Big Bang into a degenerate zero-volume one-dimensional geometry, in which two spatial directions are infinitely smaller than the third one.[There is also a measure-zero set of solutions of two-dimensional degenerate geometries, in which one direction is infinitely smaller than the other two <cit.>.] Once the Big Bang is reached, the volume can start growing again, but a universe with an opposite spatial orientation will emerge. This entire process can be described using the extended configuration space (namely, the gnomonic three-sphere) where the singularity is projected from the boundary of the plane associated with a fixed spatial orientation onto the equator of the sphere (β = π2). Therefore, the Big Bang is approached as β→π2^±, while the angles θ, φ represent the direction in which the equator is approached in the extended configuration space. Quiescent solutions, which were straight lines in the configuration plane, are projected onto half great circles on the gnomonic sphere. Each half great circle has a unique natural and regular continuation, which corresponds to the other half of the same great circle in the other hemisphere (see Fig. <ref>).
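For concreteness, the gnomonic projection and its inverse on a single hemisphere can be written in a few lines; the sketch below is illustrative only (the function names are ours), with the equator β = π/2 playing the role of the singularity.

import numpy as np

def gnomonic(beta, theta, phi):
    # project a point of the gnomonic three-sphere onto the (x^1, x^2, x^3) hyperplane
    r = np.abs(np.tan(beta))
    return np.array([r*np.sin(theta)*np.cos(phi),
                     r*np.sin(theta)*np.sin(phi),
                     r*np.cos(theta)])

def gnomonic_inverse(x, hemisphere=+1):
    # inverse map on one hemisphere (beta < pi/2 for hemisphere=+1, beta > pi/2 otherwise)
    r = np.linalg.norm(x)
    beta = np.arctan(r) if hemisphere > 0 else np.pi - np.arctan(r)
    theta = np.arccos(x[2]/r)
    phi = np.arctan2(x[1], x[0]) % (2.0*np.pi)
    return beta, theta, phi

x = np.array([0.3, -1.2, 2.0])
print(np.allclose(gnomonic(*gnomonic_inverse(x)), x))   # True: round trip on one hemisphere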
At this point, we have a continuation result for purely Kasner solutions: they are great circles on the gnomonic sphere, which correspond to two (generally distinct) straight lines on the two (x^1, x^2,x^3) hyperplanes associated with the two spatial orientations. To extend this result to Bianchi-IX solutions, where the straight lines only exist in a neighborhood of the singularity, we must identify an additional set of five variables that exhibit a finite nontrivial value at the singularity.
The shape and scalar conjugate momenta k_1, k_2, k_3 can be regularized through the following change of variables:
{
J = sgn(tanβ) (x^1 k_1 + x^2 k_2 + x^3 k_3)/√((x^1)^2+(x^2)^2+(x^3)^2) ,
L_1 = x^2k_3-x^3k_2 ,
L_3 = x^1k_2-x^2k_1 ,
.
while the scale variable and its conjugate momentum, x^0 and k_0, require the following transformation:
{ η = sgn (tanβ) ( x^0 + k_0 ( (x^1)^2+(x^2)^2+(x^3)^2 )/( x^1 k_1 + x^2 k_2 + x^3 k_3 ) ) ,
κ = |k_0| .
.
The variables we introduced tend to a finite limit as the singularity is approached by a quiescent solution:
{
J →sinθ (v_1 cosφ + v_2 sinφ) + v_3 cosθ ,
L_1 →tanβ (v_3 sinθsinφ - v_2 cosθ) ,
L_3 →tanβ sinθ (v_2 cosφ - v_1 sinφ) ,
η →sgn (tanβ) x^0 + v_0 J^-1tanβ ,
κ → v_0 ,
as β → π/2^± .
The solution identified by the initial data v_0, v_1, v_2, v_3, x^0(0), x^1(0), x^2(0), x^3(0) can be matched to a unique solution belonging to the other hemisphere with initial data -v_0, -v_1, -v_2, -v_3, -x^0(0), -x^1(0), -x^2(0), -x^3(0). It is worth noting that the second solution reaches the limit as t → -∞, i.e., the Big Bang singularity of the universe with the opposite spatial orientation is reached as t → -∞.
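The finiteness of these variables along a quiescent Kasner epoch is easy to see numerically; the following sketch (ours, with arbitrary initial data) propagates the free Kasner solution and evaluates J, L_1, L_3, η and κ at increasing times.

import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=3)                     # (v_1, v_2, v_3); the scalar-field momentum is v_3
v0 = np.linalg.norm(v)                     # Kasner dispersion relation v_0 = |v|
x0_init, x_init = 0.7, rng.normal(size=3)  # arbitrary initial data x^0(0), x^i(0)

def regular_vars(t):
    x0 = x0_init - v0*t                    # dx^0/dt = -k_0 = -v_0 (n = 1)
    x = x_init + v*t                       # x^i(t) = v_i t + x^i(0)
    J = np.dot(x, v)/np.linalg.norm(x)     # sgn(tan beta) = +1 on the starting hemisphere
    L1 = x[1]*v[2] - x[2]*v[1]
    L3 = x[0]*v[1] - x[1]*v[0]
    eta = x0 + v0*np.dot(x, x)/np.dot(x, v)
    return J, L1, L3, eta, v0

for t in (1e1, 1e3, 1e5):                  # the singularity is reached as t -> +infinity
    print(t, regular_vars(t))
# J, L_1, L_3, eta and kappa = v_0 all settle to constants, with J -> kappa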
We have yet to discuss the electromagnetic degrees of freedom. Under the one-dimensional ansatz, there are two electromagnetic variables, A_3 and E^3. Their equations of motion w.r.t. the effective Hamiltonian (<ref>) (for n=1) are the following:
Ȧ_3 = q_3 E^3 ,
Ė^3= - q_3 A_3 .
At the singularity, A_3 and E^3 become conserved as q_3 goes to zero. The electromagnetic variables are unaffected by the orientation flip, meaning that the constant values to which these variables tend are the same regardless of whether the singularity is approached from the left or the right (β→π/2^+ or β→π/2^-). Thus, these variables are already regular and effectively describe the evolution of electromagnetic degrees of freedom in the entire extended configuration space.
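This freeze-out can be illustrated by integrating the two equations along a quiescent Kasner epoch in which q_3 decays exponentially; the decay rate and initial data in the following sketch are arbitrary.

import numpy as np

rho3, dt = 0.8, 1e-3          # assumed decay rate of q_3 during a quiescent Kasner epoch
A3, E3, t = 1.0, 0.5, 0.0     # arbitrary initial data
for _ in range(50_000):
    q3 = np.exp(-rho3*t)      # q_3(t) ~ e^{-rho_3 t} -> 0 towards the Big Bang
    A3 += dt*q3*E3            # dA_3/dt =  q_3 E^3
    E3 -= dt*q3*A3            # dE^3/dt = -q_3 A_3   (semi-implicit Euler step)
    t += dt
print(A3, E3, A3**2 + E3**2)  # A_3 and E^3 freeze; (A_3)^2 + (E^3)^2 stays ~ 1.25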
Expressed in terms of the variables β, θ, φ, η, J, L_1, L_3, κ, A_3, E^3, and assuming the quiescence conditions are satisfied (i.e., neglecting the potential terms), the Hamiltonian constraint (<ref>) becomes:
ℋ_Kasner = 12[ κ^2 - J^2 - L_1^2tan^2βsin^2φ + L_3^2( cos^2φ - sin^-2θ)tan^2βsin^2φ - L_1 L_3tan^2βtanθtanφsinφ] ,
where we set n=1 for sake of simplicity.
The equations of motion for the variables β, θ, φ, η, J, L_1, L_3, κ, A_3, E^3 with respect to the coordinate time t can be obtained by calculating the Dirac brackets using the Hamiltonian constraint (<ref>). However, t is not a suitable choice of independent variable at the singularity as it diverges there. Instead, a natural choice of independent variable is the arc-length on the gnomonic sphere:
dℓ = √(dβ^2 + sin^2β (dθ^2+sin^2θ dφ^2)) ,
which is automatically monotonic everywhere on a solution and tends to a finite limit at the singularity.
The equations of motion give us the relationship between the two independent variables:
d ℓd t = Λ^-1cos^2β , Λ = (J^2 + sin^-2β( (L_3 sinθ)^2 + ( L_1sinφ + L_3tanθtanφ)^2 ))^-12 .
The equations of motion w.r.t. the arc-length during quiescence read:
[ dβdℓ = Λ J ,
dθdℓ = - Λ( L_1 + L_3 cosφtanθ)sin^-2βsin^-1φ ,
dφdℓ = Λ L_3 sin^-2βsin^-2θ ,
dηdℓ = - Λ Θ κ J^-2sin^-2β ,
dA_3dℓ = 0 , dJdℓ = Λ Θ cosβsin^-3β ,
dL_1dℓ = 0 ,
dL_3dℓ = 0 ,
dκdℓ = 0 ,
dE^3dℓ = 0 , ]
where:
Θ = ( L_3^2tan^2θtan^2φ + L_3^2sin^3θ + 2 L_1 L_3tanθtanφsinφ + L_1^2sin^2φ) sinθ .
It is important to notice that this model has ten degrees of freedom, but only eight of them are truly physical, as two are redundant due to the Hamiltonian constraint and its gauge-fixing. To eliminate the two remaining non-physical degrees of freedom, we need to solve the Hamiltonian constraint (<ref>) w.r.t. one of the variables and impose a gauge-fixing condition. A straightforward choice for gauge fixing the Hamiltonian constraint, which also serves as the generator of the dynamics, is to fix a specific instant of time. In our case, the natural choice is β = π/2, which represents the instant of the singularity. The suitability of fixing β as a gauge for (<ref>) can be verified by calculating the Dirac bracket between β and (<ref>) and observing that it is never zero at β = π/2.
At the singularity, the Hamiltonian constraint tends to the simple expression
ℋ_Kasner=1/2 (κ^2 - J^2) .
Therefore, it can be easily solved w.r.t. either the variable κ or J. Once we compute the equations of motion and their first derivatives, we can impose the condition κ = J (keeping in mind that both κ and J are positive-definite at the singularity) if we wish to eliminate this last redundant degree of freedom.
We are now prepared to present the continuation result. The equations of motion (<ref>) are regular at the singularity, meaning that they admit the same left and right limits as β→π/2:
[ dβdℓ →Λ_π / 2 J ,
dθdℓ → - Λ_π / 2( L_1 + L_3 cosφtanθ) sin^-1φ ,
dφdℓ →Λ_π /2 L_3 sin^-2θ ,
dηdℓ → - Λ_π / 2 Θ κ J^-2 ,
dA_3dℓ → 0 , dJdℓ → 0 ,
dL_1dℓ → 0 ,
dL_3dℓ → 0 ,
dκdℓ → 0 ,
dE^3dℓ → 0 , ]
where:
Λ_π / 2 = lim_β→π/2Λ = (J^2 + (L_3 sinθ)^2 + ( L_1sinφ + L_3tanθtanφ)^2 )^-12 .
The assumptions of the Picard–Lindelöf theorem require the Lipshitz continuity of the right-hand side of the equations of motion. However, in our case, we can prove an even stronger condition: differentiability.
In fact, the first derivatives of the RHSs of (<ref>) w.r.t. all the variables β, θ, φ, η, J, L_1, L_3, κ, A_3, E^3 are all regular as β→π/2^±:
∂_β(dJdℓ) → - Λ_π /2 Θ ,
∂_θ(dβdℓ) → - Λ_π /2^3 J sinθ( L_3^2 cosθ - L_3^2tanθtan^2φsin^3 θ - L_1 L_3sinφtanφsin^3 θ) ,
∂_θ(dθdℓ) →Λ_π /2^3 L_3sin^2θ( L_1 L_3 cosθsin^3 θsinφ + J^2 + 12 L_3^2 sin^2θ (3+ cos (2θ))tanφ) ,
∂_θ(dφdℓ) →Λ_π /2^3 L_3sin^2θ( L_3^2tanθtan^2φsin^2θ + L_1 L_3tanφsin^2θsinφ - 2 J^2tanθ.
. - 2tanθ( L_1sinφ + L_3tanθtanφ)^2 - 3 L_3^2 cosθsinθ) ,
∂_θ(dηdℓ) →Λ_π /2^3 κJ^2( -L_3 Θ sinθ( L_3tanθtan^2φsin^3θ + L_1tanφsin^3θsinφ-L_3cosθ) .
. + Λ_π /2^-2 L_3sinθ( L_3tanθ( 2tan^2φ + 3sinθ) + 2L_1tanφsinφ) - Λ_π /2^-2 Θcosθ) ,
∂_φ(dβdℓ) →Λ_π /2^3 Jsinφ( L_3tanθtanφ + L_1sinφ) ( L_1tanφ +L_3tanθsinφ) ,
∂_φ(dθdℓ) →Λ_π /2^3 2 J^2 + L_3^2 + L_3^2 cos(2θ)2sin^2φ( L_1 cosφ + L_3tanθ) ,
∂_φ(dφdℓ) →Λ_π /2^3 L_3sin^2θsinφ( L_3tanθtanφ + L_1sinφ) ( L_1tanφ+ L_3tanθsinφ) ,
∂_φ(dηdℓ) →Λ_π /2^3 κsin^3θJ^2 sin^2φ( 2L_1^2tanφ + 2L_3^2tanφtan^2θ + L_1L_3 ( 3+cos (2θ) )tanθsinφ)
( 2L_3^2-L_3^2sin^5θ + 2J^2sin^2θ + ( L_3tanθtanφsinθ + L_1sinφsinθ)^2 ) ,
∂_J (dβdℓ) →Λ_π /2^3 ( (L_3 sinθ)^2 + ( L_1sinφ + L_3tanθtanφ)^2 ) ,
∂_J (dθdℓ) →Λ_π /2^3 J^2sinφ( L_1+L_3cosφtanθ) ,
∂_J (dφdℓ) → -Λ_π /2^3 J L_3sin^2θ ,
∂_J (dηdℓ) →Λ_π /2^3 Θ κJ^3( 3J^2 +2(L_3 sinθ)^2 + 2( L_1sinφ + L_3tanθtanφ)^2 ) ,
(continued on next page)
(continued from previous page)
∂_L_1(dβdℓ) → -Λ_π /2^3 Jsinφ( L_3tanθtanφ + L_1sinφ) ,
∂_L_1(dθdℓ) → -Λ_π /2^3 J^2 + L_3^2sin^2θsinφ ,
∂_L_1(dφdℓ) → -Λ_π /2^3 L_3sin^2θsinφ( L_3tanθtanφ + L_1sinφ) ,
∂_L_1(dηdℓ) → - Λ_π /2^3 κsin^3θJ^2 sinφ( L_3tanθtanφ + L_1sinφ)
( 2L_3^2 - L_3^2sin^5θ + 2J^2sin^2θ + ( L_3tanθtanφsinθ + L_1sinφsinθ)^2 ) ,
∂_L_3(dβdℓ) → - Λ_π /2^3 J ( L_3sin^2θ + L_3tan^2θtan^2φ + L_1tanθtanφsinφ) ,
∂_L_3(dθdℓ) →Λ_π /2^3 ( L_1L_2sin^2θsinφ - J^2tanθtanφ) ,
∂_L_3(dφdℓ) →Λ_π /2^3 ( J^2sin^2θ + L_1L_3tanθtanφsinφsin^2θ + L_1^2sin^2θsin^2φ) ,
∂_L_3(dηdℓ) →Λ_π /2^3 κJ^2( ( L_3cosθtanθtan^2φ + L_1cosθtanφsinφ) ( 2J^2 + ( L_3tanθtanφ + L_1sinφ)^2 ) .
- L_3^3cos^2θsinθtan^2φ + L_1^2L_3sin^3θsin^2φ - L_3^3
- . L_3sin^2θ( 2J^2 + ( L_3tanθtanφ + L_1sinφ) ( L_3tanθtanφ + 2L_1sinφ) ) ) ,
∂_κ(dθdℓ) → - Λ_π /2 Θ J^-2 ,
and all the other derivatives tend to zero. Notice that imposing the asymptotic solution of the Hamiltonian constraint, κ = J, does not alter the regularity of the RHSs.
In full generality, when the potential terms cannot be neglected, the Hamiltonian constraint (<ref>) assumes the following form in the new variables:
ℋ_eff = ℋ_Kasner + (1/2) e^{(2/√(3)) sgn(tanβ) ( η - κ J^-1 tanβ )} C(β,θ,φ)
+ (1/2) e^{(1/√(3)) sgn(tanβ) ( η - κ J^-1 tanβ )} e^{-(2/√(3)) |tanβ| sinθ sinφ} ε ,
where C(β,θ,φ) represents the Bianchi-IX potential (<ref>) as a function of β,θ,φ. When the quiescent approximation is relaxed, the equations of motion (<ref>) acquire additional “force” terms arising from the potential. However, these terms are strongly suppressed near the equator/singularity, due to the exponential factors in Eq. (<ref>), which tend to zero as β→π/2 like exp(- const. |tanβ| ) (after solving the Hamiltonian, e.g., w.r.t. κ, and substituting the solution back into the equations of motion). In the equations of motion, the suppressing exponentials appear multiplied by powers of tanβ. Although the positive powers diverge, they do so slower than the exponentials and end up suppressed as well. As a result, the full equations of motion asymptotically tend to the quiescent ones (<ref>). This holds true for the first-derivative expressions (<ref>) as well, once again due to the presence of the suppressing exponentials.
§ GENERIC EINSTEIN–MAXWELL SYSTEM
In the most generic situation, the electromagnetic field has all three components. The Hamiltonian constraint is given by Eq. (<ref>). In this case as well, we can identify a conserved quantity (i.e., one that is first-class w.r.t. the Hamiltonian constraint):
H^3D_HO = ∑_a=1^3 ( (E^a)^2 + (A_a)^2 ) , { H^3D_HO , ℋ[N] }_* = 0 .
This can be readily proven by observing that ℋ[N] depends on the electromagnetic variables only though the six terms (E^a)^2+(A_a)^2 and M_a (the latter defined in Eq. (<ref>)), and each of these terms commutes separately with H^3D_HO. These six terms correspond to the conserved quantities of a three-dimensional Harmonic oscillator, namely, three “energies” and three components of the angular momentum. Hence, we can associate a constant of motion ε_3D to H^3D_HO. It is important to notice that, although this quantity is not explicitly present in (<ref>), it establishes bounds on the possible values of the electromagnetic field and momenta:
| A_a | ≤√(ε_3D) , | E^a | ≤√(ε_3D) .
We now demonstrate that, with the inclusion of a scalar field, as discussed in Paragraph <ref> of Section <ref>, the conditions for quiescence are still satisfied, even without the one-dimensional ansatz for the electromagnetic field.
As mentioned before, the relevant Hamiltonian constraint in this scenario is given by Eq. (<ref>), to which we add a scalar field (referred to as x^3) without mass nor potential terms:
ℋ[N] = n [ 12( - k_0^2 + k_1^2 + k_2^2 + k_3^2 ) + 12 U (x,A,E) ] ,
where k_3 represents the conjugate momentum to x^3. Since the electromagnetic variables only appear in the potential term U(x,A,E), the removal of the one-dimensional ansatz does not affect the results obtained in Paragraph <ref> of Section <ref>: during a Kasner epoch, the metric components q_a progressively decrease in time, along with any polynomial quantity derived from them. However, in the absence of the one-dimensional ansatz, the potential U acquires two additional terms (see Eqs. (<ref>) and (<ref>)), one of which is not even polynomial in q_a. Consequently, they need separate discussion.
The potential U now consists of a combination of three quantities: C, V, and W. The Bianchi-IX potential C(x^1,x^2) depends only on the metric variables and is polynomial in q_a, hence its behavior is analogous to that of the effective potential in the one-dimensional model (<ref>). The potential V(x^1,x^2,A,E) is again polynomial in q_a, but its coefficients are functions of the electromagnetic variables. The presence of the conserved quantity (<ref>) implies that E^a and A_a can only oscillate within finite and fixed values, thus the behavior of V is controlled by that of q_a. Similarly, the behavior of W(x^1,x^2,A,E) is determined by q_a. However, in this case, the dependence of W on q_a is non-polynomial. W depends on the following three functions of q_a:
q_1 q_2/(q_1-q_2)^2 = e^{2 |v⃗| t cosφ}/(e^{2 |v⃗| t cosφ}-1)^2 , q_2 q_3/(q_2-q_3)^2 = e^{|v⃗| t (cosφ+√(3)sinφ)}/(e^{|v⃗| t (cosφ+√(3)sinφ)}-1)^2 ,
q_3 q_1/(q_3-q_1)^2 = e^{|v⃗| t (cosφ+√(3)sinφ)}/(e^{|v⃗| t cosφ}-e^{√(3) |v⃗| t sinφ})^2 ,
where q_a has been replaced by the solutions of the equations of motion during a Kasner epoch, as given by Eq. (<ref>), with the velocities expressed in polar coordinates as in Eq. (<ref>). As t → + ∞, the three quantities in Eq. (<ref>) tend to zero for all values of φ, except φ = π/6, π/2, 5π/6, 7π/6, 3π/2, 11π/6. These six directions are parallel to the three axes of symmetry of the shape potential C(x_1,x_2) <cit.>. Along these directions, two of the metric components q_a are identical, and one of the quantities in Eq. (<ref>) becomes infinite. This singularity only affects a measure-zero set of solutions (those confined along the symmetry axes), and their continuability can be discussed separately.
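A quick way to verify this limiting behavior (a consistency check, not a new result): writing the first ratio as
q_1 q_2/(q_1-q_2)^2 = x/(x-1)^2 , x = e^{2 |v⃗| t cosφ} ,
one sees that it decays like 1/x for cosφ>0 and like x for cosφ<0, so it tends to zero as t→+∞ unless cosφ=0, i.e. φ=π/2, 3π/2. The same argument applied to the second and third ratios isolates cosφ+√(3)sinφ=0 (φ=5π/6, 11π/6) and cosφ=√(3)sinφ (φ=π/6, 7π/6), reproducing exactly the six exceptional directions listed above.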
We have demonstrated that the removal of the one-dimensional ansatz does not hinder quiescence: all the potential terms decrease with time, allowing the solution to settle around a single Kasner epoch all the way to the singularity.
Having established that the entire system (i.e., including all six electromagnetic degrees of freedom) exhibits quiescent behavior as it approaches the Big Bang, we now proceed to demonstrate that the continuation result holds as well. To prove this, we follow the same procedure as described in Paragraph <ref> of Section <ref>. In terms of the variables β, θ, φ, η, J, L_1, L_3, κ, A_1, E^1, A_2, E^2, A_3, E^3, the dynamics governed by the Hamiltonian constraint (<ref>) is indistinguishable from that generated by (<ref>) when the quiescence conditions are satisfied. Therefore, the equations of motion can be well approximated by Eqs. (<ref>), with the addition of:
dA_1dℓ = 0 , dA_2dℓ = 0 , dE^1dℓ = 0 , dE^2dℓ = 0 ,
whose RHSs are differentiable, similar to the other equations of motion, as we have previously demonstrated.
Due to the presence of the additional potential terms, Eq. (<ref>) is modified as follows:
ℋ_eff = ℋ_Kasner + 1/2 e^{(2/√(3)) sgn(tanβ) ( η - κ J^{-1} tanβ )} C(β,θ,φ)
+ 1/2 e^{(1/√(3)) sgn(tanβ) ( η - κ J^{-1} tanβ )} V(β,θ,φ,A,E) + 1/2 W(β,θ,φ,A,E) .
The equations of motion acquire additional “force” terms compared to the system under the one-dimensional ansatz; however, these terms are highly suppressed near the singularity. All the potential terms go exponentially to zero as β→π/2, and the presence of a generic electromagnetic field does not affect this behavior. This is because all components of the electromagnetic field are bounded within fixed and finite values, and thus they do not lead to any divergent contribution.
We are able to extend part of these results to non-Abelian gauge fields. However, this is only true under the one-dimensional simplifying ansatz, in which the first and the second components of the gauge fields (and momenta) are set to zero. At the moment we cannot prove the continuation result in full generality in the non-Abelian case; we therefore relegate the discussion of the present state of our understanding to Appendix <ref>.
§ CONCLUSIONS
In Ref. <cit.>, we conjectured that it is possible to continue Einstein's classical equations through the Big Bang singularity into another universe with an opposite time direction and spatial orientation, which preserves all the information about the state of universe on the other side of the singularity (although it might become irretrievably scrambled in the process due to a chaotic phase of the dynamics). This is intimately related to far-reaching issues such as black hole unitarity and the nature of the Big Bang.
This conjecture was proven in simplified cases, including homogeneous cosmologies <cit.>, inflationary models <cit.>, and the Schwarzschild-scalar system <cit.>. Our approach is to gradually increase the complexity of the models under consideration, test the validity of the conjecture, and gain insight into the behavior of physical fields across the singularity. As mentioned in the previous paper <cit.>, the next natural step in this process is to determine whether the predicted reversal of orientation at the singularity can be physically measurable. In other words, can the inhabitants of the universe determine, through an experiment, which side of the Big Bang singularity they live in?
To answer this question, three ingredients are necessary. Firstly, we need to understand what happens to the orientation of space defined by the vielbein/frame fields. It has been established that these fields undergo a sign change at the singularity in the original paper <cit.>. Secondly, we must establish the behavior of vector (gauge) fields and fermions. If all of these fields undergo a “flipping” transition at the singularity, it might cancel out the orientation reversal effect of the vielbeins, making it unobservable. Finally, we need to investigate what happens to experimentally realizable processes, such as beta decays. Ultimately, the crucial factor is whether the parity-breaking vertices of the Standard Model remain unchanged across the singularity when considering their dependence on the spacetime vielbeins.
In the present paper, we conducted a detailed analysis of gauge fields. We first determined that the continuation result remains unchanged in the presence of Abelian gauge fields (in general) and non-Abelian gauge fields (under the simplifying assumption of the one-dimensional ansatz). Additionally, we established that the behavior of the gauge fields near the singularity is straightforward: their values freeze, with zero time derivatives at the exact instant of the singularity, and they evolve through it without flipping their orientation. The next logical step is to analyze fermion fields, which will allow us to determine the fate of the parity-breaking vertices of the Standard Model. Another interesting extension of this work would be to relax the one-dimensional ansatz for non-Abelian gauge fields, although this step has not been feasible thus far. This paper provides compelling evidence that the general case, beyond the one-dimensional ansatz, does not affect the continuation outcome nor the conclusion that gauge fields do not “flip” at the singularity. However, there is still some uncertainty around this matter, and further research is needed for confirmation.
§ ACKNOWLEDGEMENTS
This work has been partially supported by Agencia Estatal de Investigación (Spain) under grant PID2019-106802GB-I00/AEI/10.13039/501100011033, by the Regional Government of Castilla y León (Junta de Castilla y León, Spain), and by the Spanish Ministry of Science and Innovation MICIN and the European Union NextGenerationEU (PRTR C17.I1).
The authors would also like to acknowledge the contribution of the COST Action CA18108 “Quantum gravity phenomenology in the multi-messenger approach”.
§ APPENDICES
§ EINSTEIN–YANG–MILLS MODEL UNDER ONE-DIMENSIONAL ANSATZ
The results presented in this work for the Einstein–Maxwell system can be extended to the Einstein–Yang–Mills systems with SU(2) and SU(3) structure groups, under the one-dimensional ansatz. These non-abelian models are described by the action:
S = ∫ d^4 x √(- h) ( R - 1/4 h^μν h^ρσ F^I_μρ F^J_νσ δ_IJ ) ,
with Faraday tensor F^I_μν=∂_μ A^I_ν - ∂_ν A^I_μ + c^I_JK A_μ^J A_ν^K. The structure constants are given by the three-dimensional Levi-Civita symbol c^I_JK = δ^ILε_LJK for SU(2), and, in the case of SU(3), by a totally-antisymmetric symbol c^I_JK = δ^IL f_LJK, where f_123=1, f_147=f_165=f_246=f_257=f_345=f_376=1/2, f_458=f_678=√(3)/2, and all the others (which are not permutations of these indices) are zero. The scalar product in the internal gauge space is given by the group metric δ_IJ, which is also used for raising and lowering internal indices.
Homogeneous ansatz and global constraints. In the Hamiltonian formalism, after imposing the homogeneous ansatz, a generic Einstein–Yang–Mills system undergoes time evolution governed by a Hamiltonian that is a linear combination of the following global constraints:
ℋ[N] = n ( p^ab p^cd q_bc q_da - 1/2 (p^ab q_ab)^2 + q_ab q_cd δ^bc δ^da - 1/2 ( q_ab δ^ab )^2
+ 1/2 q_ab δ^IJ E_I^a E_J^b + 1/4 q q^ab q^cd δ_IJ F^I_ac F^J_bd ) ,
𝒟_i [N^i] = n^d ( E_I^a A^I_bε_adcδ^cb + 2 p^ab q_ac ε_bdf δ^fc) ,
𝒢_I[A_0^I] = a^I_0 (A_a^J E^a_K c^K_JI) ,
where δ^IJ is the inverse group metric, n and n^a are defined in Eq. (<ref>), and
a_0^I=∫ d θ dϕ d ψ sinθ A_0^I(x) ,
are N^2-1 new Lagrange multipliers, corresponding to the spatial average of the scalar potential A_0^I, where dim SU(N) = N^2-1 is the dimension of the gauge group. It should be noted that, unlike the Einstein–Maxwell model, where the Gauss constraint is automatically satisfied by the homogeneous ansatz (as shown in Eqs. (<ref>)), in the Einstein–Yang–Mills system the Gauss constraint becomes a set of N^2-1 new proper constraints that need to be solved and gauge-fixed.
Gauge-fixing the diffeomorphism constraints.
The diffeomorphism constraints in Eqs. (<ref>) share the same functional expression as the abelian ones (the electromagnetic contribution to the diffeomorphism constraints in Eqs. (<ref>) can be rewritten as E^a A_bε_adcδ^cb). Consequently, we can use the same gauge-fixing as in Eqs. (<ref>). By solving the constraints, the following solutions are obtained:
p^23 = ( E_I^2 A^I_3 - E_I^3 A^I_2 ) / ( 2 ( q_2 - q_3 ) ) ,
p^13 = ( E_I^3 A^I_1 - E_I^1 A^I_3 ) / ( 2 ( q_3 - q_1 ) ) ,
p^12 = ( E_I^1 A^I_2 - E_I^2 A^I_1 ) / ( 2 ( q_1 - q_2 ) ) .
By applying the same procedure as outlined in Section <ref>, we derive the following on-shell Hamiltonian constraint:
ℋ[N] = n [ ℋ_BIX + q_2 q_3 (M_1)^2/(2(q_2-q_3)^2) + q_1 q_3 (M_2)^2/(2(q_1-q_3)^2) + q_1 q_2 (M_3)^2/(2(q_1-q_2)^2)
+ 1/2 ∑_I=1^N^2-1 ( q_1 ((E_I^1)^2 + (A^I_1)^2 ) + q_2 ((E_I^2)^2 + (A^I_2)^2 ) + q_3 ((E_I^3)^2 + (A^I_3)^2 ) )
+ f(A A A) + g (A A A A)
] ,
where in this case, the kinetic term of the gauge fields incorporates the contribution from all the gauge components, and there are also two additional terms (cubic and quartic in the vector potential A^I_a) arising from the interaction of the non-abelian gauge field with itself. The Gauss constraints, which are independent of the metric variables, remain unchanged.
One-dimensional ansatz. As we did for the electromagnetic case, we consider a gauge field with a single spatial component:
A^I_1 =A^I_2 = 0 , E_I^1 = E_I^2 = 0 , ∀ I∈{1,…,N^2-1} .
This ansatz is well-posed for the same reasons discussed at the beginning of Section <ref>.
The Hamiltonian constraint (Eq. (<ref>)) and Gauss constraints (the last equation in (<ref>)) become:
ℋ[N] = ℋ[N]|_{A^I_1 = A^I_2 = 0, E_I^1 = E_I^2 = 0} = n [ ℋ_BIX + 1/2 q_3 ∑_I=1^N^2-1 ( (E_I^3)^2 + (A^I_3)^2 ) ] ,
𝒢_I[A_0^I] = a^I_0 (A_3^J E^3_K c^K_JI) .
Notice that, under the one-dimensional ansatz, the solution of the diffeomorphism constraints becomes p^12=p^23=p^13=0, similar to the one-dimensional abelian case. Additionally, the self-interaction terms f(AAA) and g(AAAA) in the Hamiltonian constraint also vanish. Therefore, the Hamiltonian constraint of a one-dimensional non-abelian system takes the same form as the abelian one under the same ansatz, viz. Eq. (<ref>). However, the Gauss constraints still need to be solved and gauge-fixed.
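The vanishing of the self-interaction terms can be spelled out as follows (our own unpacking of the statement above): the non-abelian, quadratic-in-A contribution to the Faraday tensor is c^I_JK A^J_a A^K_b, which requires two distinct spatial components of the vector potential (the a=b piece vanishes by antisymmetry of c^I_JK). Under the one-dimensional ansatz only A^I_3 survives, so
c^I_JK A^J_1 A^K_2 = c^I_JK A^J_1 A^K_3 = c^I_JK A^J_2 A^K_3 = 0 ,
and consequently both the cubic term f(AAA) and the quartic term g(AAAA) drop out of the magnetic part of the Hamiltonian constraint, since every such term contains at least one factor of this vanishing quadratic piece.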
Gauge-fixing the Gauss constraints.
A non-abelian model with a gauge group SU(N) has N^2-1 non-zero Gauss constraints. However, under the one-dimensional ansatz, not all of these constraints are independent. In the case of the groups we are interested in, namely SU(2) and SU(3), it turns out that there are only two (out of three) linearly independent Gauss constraints for SU(2), and six (out of eight) linearly independent Gauss constraints for SU(3). This observation is consistent with the number of Casimir operators of these groups: SU(2) has one Casimir, whereas SU(3) has two. The Casimir operators represent the number of free parameters used to label the group representations, while the remaining parameters are determined by the choice of gauge for the independent Gauss constraints.
By arbitrarily selecting 𝒢_1,2 as the independent gauge generators for SU(2) and 𝒢_1,2,4,5,6,7 for SU(3), we can find a well-posed gauge-fixing:
SU(2): 𝒢_1 , 𝒢_2 ≈ 0 , gauge-fixed by A_3^1 , A_3^2 ≈ 0 ;
SU(3): 𝒢_1 , 𝒢_2 , 𝒢_4 , 𝒢_5 , 𝒢_6 , 𝒢_7 ≈ 0 , gauge-fixed by A_3^1 , A_3^2 , A_3^4 , A_3^5 , A_3^6 , A_3^7 ≈ 0 .
Once the Gauss constraints are solved, the conjugate momenta E^3_I corresponding to the gauge-fixed components A_3^I must be zero. As a result, the remaining independent gauge components are A_3^3, E_3^3 for SU(2), and A_3^3, A_3^8, E_3^3, E_8^3 for SU(3).
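This can be made explicit for SU(2) (an illustrative computation using the structure constants c^I_JK = δ^IL ε_LJK quoted above): under the one-dimensional ansatz the Gauss constraints are proportional to A_3^J E^3_K ε_KJI, i.e.
𝒢_1 ∝ A_3^3 E^3_2 - A_3^2 E^3_3 , 𝒢_2 ∝ A_3^1 E^3_3 - A_3^3 E^3_1 , 𝒢_3 ∝ A_3^2 E^3_1 - A_3^1 E^3_2 .
With the gauge choice A_3^1 = A_3^2 ≈ 0, the constraints 𝒢_1 ≈ 0 and 𝒢_2 ≈ 0 force E^3_2 = E^3_1 = 0 (for generic A_3^3 ≠ 0), while 𝒢_3 vanishes identically, consistent with only two of the three constraints being independent.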
Effective Hamiltonian constraint. After solving the Gauss constraints, the only remaining constraint is the Hamiltonian one:
SU(2): ℋ[N] = n [ ℋ_BIX + 1/2 q_3 ( (E_3^3)^2 + (A^3_3)^2 ) ] ,
SU(3): ℋ[N] = n [ ℋ_BIX + 1/2 q_3 ( (E_3^3)^2 + (E_8^3)^2 + (A^3_3)^2 + (A^8_3)^2 ) ] .
As discussed in Section <ref>, we can identify a conserved quantity, which is first-class w.r.t. the Hamiltonian constraint. In the case of SU(2), this conserved quantity corresponds again to a one-dimensional harmonic oscillator:
H_HO^1D=(E_3^3)^2 + (A^3_3)^2 ,
(compare this to the abelian case, Eq. (<ref>)), while for SU(3) the conserved quantity is:
H_HO^2D = (E_3^3)^2 + (E_8^3)^2 + (A^3_3)^2 + (A^8_3)^2 .
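The conservation of these quantities can be verified in one line (a check we make explicit here): the Hamiltonian constraints displayed above depend on the gauge variables only through H_HO^1D (respectively H_HO^2D), and neither ℋ_BIX nor q_3 contains gauge variables, so
{ H_HO , ℋ[N] } = n ( { H_HO , ℋ_BIX } + 1/2 { H_HO , q_3 } H_HO + 1/2 q_3 { H_HO , H_HO } ) = 0 ,
since each bracket vanishes separately.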
H_HO^1D and H_HO^2D are constants of motion, and we can set them to a positive constant ε for both gauge groups without loss of generality. This final step allows us to describe the dynamics of both SU(2) and SU(3) gauge systems with the same effective Hamiltonian:
ℋ_eff[N] = n [ ℋ_BIX + 1/2 q_3 ε ] .
Since this effective Hamiltonian is identical to that of the one-dimensional abelian case (see Eq. (<ref>)), the continuation result proven in Section <ref> also applies to the Einstein–Yang–Mills systems with SU(2) and SU(3) as structure groups under the one-dimensional ansatz.